
Brian Lehrer: Brian Lehrer on WNYC. Good morning again, everyone. Disinformation about the Hamas attack on Israel and Israel's responding war has been circulating virtually unchecked on social media platforms like X, formerly known as Twitter. Here's what some of it looks like: an image of Israeli warplanes bombing the largest Orthodox church in Gaza, which apparently didn't happen; a video of a young girl being lit on fire by Hamas, which apparently didn't happen; a White House announcement providing Israel with $8 billion in aid. All of it falsely attributed or made up out of whole cloth, and yet garnering millions of impressions.
While we're now all used to this, to some degree, journalists, researchers, and open source intelligence experts are sounding the alarm that this is happening faster and at a greater scale than ever, and perhaps with significant implications during an intense time like this. Yesterday, the European Union, which recently put forth new regulations under their Digital Services Act, went so far as to issue Elon Musk, owner of X, obviously, a warning to regulate the disinformation on his platform and contact the relevant law enforcement agencies within 24 hours.
The Guardian reports that if Musk doesn't comply, he could face a fine of 6% of his revenues from X or a total blackout in the EU. Fake information is filling people's timelines, and the risk is that it will obscure the evidence of the very real atrocities taking place as well as bias people's politics. Joining us now to break down what's behind the droves of false reports on the war and why the EU is giving Musk an ultimatum to curb it, is Wired reporter David Gilbert. One of his recent pieces is titled, The Israel-Hamas War Is Drowning X in Disinformation. David, thanks for coming on. Welcome to WNYC.
David Gilbert: Thanks for having me, Brian.
Brian Lehrer: Can you give us the lay of the land? We've been talking about misinformation and disinformation on the show, and generally people in the media certainly since the 2016 election with all that Russian backed stuff that was going on in support of Trump. What's new today?
David Gilbert: I suppose what's new with this crisis is that for years, since 2015 and 2016, we've been aware of these disinformation campaigns. Twitter or X has always been the place where people went during breaking news events to find the latest verifiable information, typically from wherever the event is happening, and in this case, people would've been looking for videos or images or reports directly from the ground in Gaza or in Israel. This weekend, when people logged on to Twitter on Saturday morning as the Hamas attacks began, what they got instead was just a deluge of disinformation, misinformation, and re-posted videos from years ago claiming to be footage from just hours ago.
All of this was boosted by the new algorithm Twitter has implemented, where people who pay for its subscription model are boosted to the top of the feed, even if they're not trustworthy and have no experience in sharing breaking news or making sure that what they're sharing is accurate or truthful.
Brian Lehrer: Just to give a couple of examples from your article of how this is being aimed at both sides in the Israel-Hamas war and how X is a major player in this, you write, rather than being shown verified and fact-checked information, X users were presented with video game footage passed off as footage of the Hamas attack, and images of fireworks celebrations in Algeria presented as Israeli strikes on Hamas. I guess one question I have is, from your reporting, have you found that the disinformation on the Israel-Hamas war is skewing toward one side or the other?
David Gilbert: No. This isn't a partisan issue. This is both sides, people who are posting footage to try and boost Israel's credibility, people posting footage to boost Palestine's credibility, people in support of Hamas, people in support of all sides. It's coming from everywhere. It is not solely just coming from one direction. This is an issue that Twitter is facing that all of its users are having to deal with.
Brian Lehrer: Listeners, we can get you in on this too. I wonder if you've been experiencing disinformation, or the impact of disinformation, online regarding the Israel-Hamas war. Was there anything in particular that you initially believed that then turned out to be false? More generally, how are you verifying information online about the war, or about politics or anything else, when we know there are always people trying to prank us for political purposes, sometimes for profit or for other reasons? How are you verifying information online, especially on social media, for yourself these days?
Give us a call or shoot us a text at 212-433-WNYC, 212-433-9692 for Wired reporter David Gilbert, and sure, go ahead and tweet us as well, @BrianLehrer. You quote one researcher who posted on X, "Credible links are now photos. On-the-ground news outlets struggle to reach audiences without an expensive blue checkmark. Xenophobic goons are boosted by the platform's CEO. End times, folks." That's a quote of one researcher. Let's break that down a bit. For people who aren't on X, links no longer carry headlines or captions; they appear as bare photos. What's the significance of that?
David Gilbert: This was only implemented, I think, last week or the week before. Typically in the past, when you posted a link to an article, you'd see the headline and maybe the first line of the article to give you some context about what it was. Now what you see is a picture. They grab the main image, plus a tiny little line in the bottom corner, which is difficult to see, telling you which publication it came from. There's no context, you don't know what you're clicking on, all you have to go on is the content of the post of whoever has written it, and they don't necessarily have to be telling you the truth, or their post doesn't even have to have any connection to the article that they're sharing.
What that does is it just makes it even more difficult to sift through the posts on Twitter. It makes it harder for regular people to understand what's happening because if you don't use Twitter all the time, you're not used to this, and therefore, when you're logging on when a crisis is hitting, you are hit by this and you're seeing it for the first time maybe, and you're going, "Why haven't I seen any headlines? Maybe I won't trust that, even if it's from a verified source."
Therefore, it makes it much, much harder for people to sift through the huge amount of content that's coming out around this war, both trusted and verified content, and also the disinformation that's coming out. It just means that people are having a more difficult time figuring out exactly what is truthful and what isn't truthful.
Brian Lehrer: The other thing that researcher referenced, the credible news outlets that don't have the expensive blue checkmark. I will say, WNYC, for one, is refusing to pay for the blue checkmark, which used to signal a verified account at no charge. Talk about that side of the equation, the de-emphasis on credible news while, at the same time, the algorithm is making it easier for disinformation goons.
David Gilbert: I suppose there are two parts. The first part is, as you say, most mainstream credible news organizations today are not paying Twitter to get a blue checkmark. For a lot of people who were used to seeing it for years and years, it was always a sign that an outlet had been verified. Now, we shouldn't say that just because you had a blue checkmark, you were sharing truthful information. There were a lot of cases where people and organizations weren't, but it was at least a baseline. You knew that it had been checked, that the named outlet was actually [crosstalk]
Brian Lehrer: At least who they were claiming to be, right? At least who they were claiming to be.
David Gilbert: Exactly. You don't have to believe what they were saying, but you knew that it was coming direct from that outlet. Because Elon Musk got rid of that and decided to charge for it, most places now no longer have a verified blue check, as I said, but who does have a blue check are anyone who's willing to pay €8 a month. In a lot of cases, they set themselves up to look as if they're news outlets and they create a badge, they create a name, and they have a blue checkmark that makes them look as if they are a legitimate legacy news outlet, which in a lot of cases they're not, and they're just peddling misinformation.
Because they pay €8 a month to Elon Musk, their posts are boosted in the newsfeed by the algorithm, which decides that rather than boosting material it knows is truthful or verified, it will boost people who are paying money. That system just incentivizes those paying users to post content that is engaging, that people will click on. That kind of system makes the type of misinformation we've seen this weekend and over the last four or five days extremely potent, because people click on it, and people don't take time to fact-check it. People just want to see a video that they think is coming from Gaza or from Israel, and it may have been posted from Syria three years ago, or may come from Egypt, or may be video game footage, as you said.
That's what the change in the blue check verified system has done. It has downgraded legacy media and mainstream media, and it has upgraded people who are eager to monetize the algorithm and therefore post information that just isn't true.
Brian Lehrer: Yes. Since you're in Ireland, you express that X verification fee in euros, €8. I guess that's €8 a month. I'm doing a quick calculation here. Let's see. €8 in dollars is, oh, $8. [chuckles]
David Gilbert: Yes. The euro is particularly strong against the dollar at the moment, but it is just $8 as well in America. They charge the same amount, but that's what it does. It incentivizes people to post quickly and post a lot of clips, but not take any time to fact-check them.
Brian Lehrer: Officially, what I'm seeing, €8 is worth $8.44. Get them while you can. Let me go to this news about the EU issuing Musk a warning to regulate the disinformation on his platform and contact the relevant law enforcement agencies within 24 hours. I'm getting this from The Guardian, that if Musk doesn't comply, he could face a fine of 6% of his revenues from X or a total blackout in the EU. Who's doing what there?
David Gilbert: It's a hard one to parse, because Thierry Breton, who issued this statement, has effectively gone on a solo run, because he's not really following procedure. For example, if you look at the letter, it's addressed to Elon Musk, who's the owner of X, but in fact, Linda Yaccarino is the CEO, so it should probably have been addressed to her, and the reply came from her.
A lot of people have criticized Breton, who's the EU commissioner, for what they call an overreach, effectively, that he's trying to censor Twitter and some people have compared it to this is how the Great Firewall of China began. The EU is obviously-- it has a new Digital Services Act, which came into force just over a month ago, and it seeks to hold social media companies to account. Social media companies like Twitter.
It says that within 24 hours, illegal content needs to be taken down or else the companies will face a fine. It also pushes companies to improve their moderation systems, so that if people report content that's hateful, maybe not illegal, but hateful, they will have sufficient resources to deal with it, which in Twitter's case is absolutely not happening, because they fired most of their content moderators and the trust and safety team that was there to look after this stuff.
It's an interesting case because on one hand, the EU seems to be overreaching, and on the other hand, Elon Musk just seems to not really care about what they're saying. He's dismissed it in his comments to Breton on X, where he said, "Look, send me a list of tweets that you're worried about and I'll see what I'll do about it." It was a bit dismissive from Musk.
Brian Lehrer: When you say the EU may be overreaching, you mean they can't really fine him 6% of his revenues or black out X in the EU?
David Gilbert: They can. That's there in the Digital Services Act, but Breton isn't following the steps that he should in order to use that law to punish Twitter. A number of EU officials who spoke to one of my colleagues were critical of Breton because he was not following procedure, that he was taking things a bit too far, that he was-- Some of them accused him of electioneering because he had an election coming up and wanted to remain in the news, which is what's happened, because a lot of people are talking about this now.
When I contacted the EU and asked them exactly what the next steps would be, they sent me back a long, long list of what would happen. Right down at the end are really big fines, and right at the very end is when they would potentially take action to black out Twitter in Europe. That's not really going to happen. The threats are pretty empty, because the type of fines that Twitter would face in the short term are pretty minuscule and wouldn't really bother Elon Musk. To be honest, 6% of X's revenue is probably not a huge amount of money anyway, because of the number of advertisers who have left the platform.
Brian Lehrer: Ali in Manhattan, you're on WNYC. Thank you for calling in, Ali. Hi.
Ali: Hello.
Brian Lehrer: Hello.
Ali: You can hear me?
Brian Lehrer: I can hear you, sir.
Ali: I just want to share with your guest, there's nothing you can do about X or Facebook. We're in the technology, people are smart, especially the people that they want to give misinformation, and they come-- I'm Palestinian, by the [unintelligible 00:16:40] we have inter-Jewish marriages. It's hard for me to talk even to my family member right now, whether they're in West Bank, a couple of them is stuck right now, or whether they're in United States.
It's not easy to speak with them. The other day I'm sitting out there, just yesterday, they're showing me a video. I said, "That's impossible." It was interviewing a Jewish family, and the wife of the Jewish family was saying that-- I'm sure your guest's seen this video, they're feeding the Hamas people, but then Hamas people killed her son when they had a [unintelligible 00:17:15] in their hand.
It didn't make any sense. I said, "Why would you even watch these videos? They don't make any sense, whether they are true or not." This is the age we are in, and as you can see from your guest earlier-- I'm a long-time listener of your show. This is the second time I'm calling your show. Your first guest was talking about politicians. When we have crazy politicians on both sides, how are we going to solve our misinformation problem? I appreciate what your guest does, pointing out all this misinformation. There's nothing we can do about it. The world is a different place right now.
Brian Lehrer: Ali, thank you very much for your call. What do you think we can do about it, David? In a way, it's an endless game of whack-a-mole for Elon Musk or anybody else who owns a social media platform to find every piece of disinformation and try to remove it from the site when billions of posts are coming all the time.
David Gilbert: I have to agree with your caller that it has come to the point where people just do not trust the media anymore. They do not trust journalists from the mainstream media. They automatically push back against any reports from the mainstream media now, or a large proportion of people do, at least, because it has been ingrained in them by politicians in the US, but also across the world, who are joining this right-wing populist wave, one of whose main tenets is that the media is not to be trusted.
As a result, people live in these echo chambers online where they get content, be it on TV or online, from sources feeding them one worldview, and therefore they don't open themselves up to anything else. The result is what happened this weekend, where people credulously shared videos. As Ali said, with that video he's talking about, if you just took two seconds to think about what was happening, you'd go, "Well, of course, that's not true. That wouldn't happen," but people don't care in the moment.
People are so [unintelligible 00:19:34] needing to express their views that they decide to do so on social media. What can we do about it? I think the very baseline that should be done is that Twitter needs to remove the idea that if you post a lot of content and people click on it, whether it's true or not, you are going to be earning money through the ads revenue share program, which is what it's doing right now.
If you sign up for $8 a month, no matter who you are, no matter what kind of information you're pumping out, and you get the clicks, you're going to get paid by Twitter. That just automatically creates an ecosystem where disinformation is hugely valuable, and the person who's pumping out the most of it, and who does it quickest, and who posts the most extreme content is going to get the most money. I think, at the very least, that should be addressed. That means telling Elon Musk what to do, and that's not going to happen. Unfortunately, I don't have an answer about how can we make this better.
Brian Lehrer: George in Manhattan, you're on WNYC. Hi, George.
George: Yes, Brian. The LA Times retracted a story that they had seen regarding babies being beheaded by Hamas, if you can believe that. The same article you read in The Guardian, I understood from that article that the EU commissioner is basically making a demand to give up service within the EU because ever since Musk has softened basically his content restrictions, there has been a flood, as you know, of all kinds of disinformation and misinformation.
Now, you mentioned only Musk, but Zuckerberg, as that article suggests, was also given 24 hours' notice on what exactly he's doing to prevent the spread of this illegal content. Because this problematic content is-- I personally don't read it or look at it on any of these things. Regrettably, I've seen this played out even by some politicians who are talking about beheading babies, which has been unverified and it's simply not true.
Brian Lehrer: George, thank you very much for that. I want to address two things that he brought up. The Zuckerberg and Facebook point second. Yes, we had a congressman on the show the other day mentioning that claim as a fact that Hamas was beheading Israeli babies. Even President Biden said the same thing on Wednesday at a roundtable with Jewish community leaders, according to CNN. This morning, CNN reports that the Israeli government cannot officially substantiate that information.
The Israeli Defense Force did say that women, children, and the elderly were, "brutally butchered in an ISIS way of action," which is shocking and devastating enough. I think we've seen enough evidence of gruesome horrible things that they have done to know that this attack is gruesome and horrible, but specifics still matter. Do you think that even high-ranking officials like the President of the United States are falling for disinformation because of what's happening on social media, David?
David Gilbert: Yes. It's affecting everyone. The White House walked back that statement. They said it was based on media reports. This highlights the pipeline by which these kinds of things happen, though not necessarily in this case, because the origin of that story was one Israeli news outlet, i24, which spoke to soldiers in the area, and the soldiers told them that this is what had happened. No one else has been able to verify it. You have to take that with a grain of salt.
In a lot of cases, what we see happening is that videos or images are posted on Telegram, typically, which has effectively zero content moderation. Then those images are taken from there, out of context, and posted on X, where they're boosted by the algorithm because they're posted by people subscribing to Twitter Premium or X Premium. From there, they're picked up by news sources and taken as legitimate.
That pipeline just works so quickly these days that a lie can get out without being fact-checked and be repeated by the President of the United States, who then has to walk it back again. It just shows how dangerous this kind of disinformation and misinformation is, and how the companies who are helping to supercharge it need to really stop, take stock, and try to figure out how to curb its spread on their platforms.
Brian Lehrer: We've been talking about Musk and X, but what about Zuckerberg and Facebook?
David Gilbert: Yes, Facebook, typically, over the last six or seven years, has been the place that's been criticized most for this. It still has a lot of misinformation and disinformation on it, but because Elon Musk has lowered the bar so much, people are not paying attention to it. Some of the human rights and civil society groups I was speaking to in the region say that when they ring Twitter or ring X, no one is picking up the phone. They have no one to contact there when they want some content taken down. At least when they speak to someone at Facebook, someone will answer the phone.
Now that's a very low bar, but it's at least doing something to try and address the situation. I have spent years reporting on Mark Zuckerberg and Facebook and their failings at dealing with content on the platform so I'm coming at this from a point of view of someone who's typically very critical of Facebook and still am, but it's at least doing a little bit better than Elon Musk.
Brian Lehrer: We're going to take one more call. Margaret in Savannah, Georgia is suggesting something to read to give ourselves tools for evaluating information that we come across. Hi, Margaret, you're on WNYC, what you got?
Margaret: I'm pushing strongly a book called True or False. It's by Cindy Otis. I think it should be required reading for all high school students. She's an ex-CIA analyst. She breaks down all the way back in history on how false information is presented to the public and believed to the present day. It's an excellent guide on how to understand the subtleties that are in whether it's print media or televised media. It's a true guide on how to differentiate between what you are reading, whether it is true, or whether it is false.
Brian Lehrer: Margaret, thank you very much. True or False by Cindy Otis. Is that a book you happen to know, David?
David Gilbert: Yes, absolutely. I absolutely agree. 100%. Cindy is really good at this stuff. You should definitely read that book.
Brian Lehrer: Good. To finish on one more thing about Musk, you allege in your article that Musk himself is spreading disinformation. It's not just the algorithm that's allowing intentionally bad actors who have a stake in the war to do it, it's Musk himself. Your most recent article is titled something we actually can't say on the air: Elon Musk Is S-posting His Way Through the Israel-Hamas War. How is Musk doing that?
David Gilbert: Yes. He's been doing this since he took over, and he's gotten increasingly extreme. On Saturday evening or Sunday morning, as the conflict was really just breaking and everything was still becoming clear, he posted a message listing two accounts that he said people should follow for on-the-ground verified reporting. Of course, the two accounts that he suggested were people well known for spreading disinformation, one of whom had posted anti-Semitic comments just months earlier. Eventually, he deleted that after a lot of criticism. Then he went on as the crisis was unfolding on Monday, as we reported in our piece on Monday about disinformation flooding Twitter.
Rather than spending his day dealing with us, he spent his whole evening on Monday interacting with far-right extremist accounts and conspiracy accounts, one [unintelligible 00:29:02]. He talked to someone about the anti-Muslim no-go zone conspiracy theory in Sweden. He laughed at a post about a transphobic video. He interacted with someone who posted about how the mainstream media was hiding information about the Russia-Ukraine war, comparing it to the conflict in Gaza and how much more information was coming out. Effectively, he was just signal-boosting all these various accounts, all of which had been spreading disinformation, conspiracy theories, or hate speech.
Brian Lehrer: Trolling for Putin, it sounds like, in that context.
David Gilbert: Yes, absolutely. He does this where his excuse, if he were asked about it, would be that he was just joking, that he was just trolling people, which he loves to do. In the middle of a crisis, on a platform that is at the absolute center of disinformation and putting people's lives in danger, to be doing that is quite incredible, I feel, and something he maybe needs to take a second look at.
Brian Lehrer: We'll give the last word to someone who tweets at us, "For my news absorption," writes this listener, "I wait to see what becomes the consensus of the facts before sharing or talking about what I read." Thank you, listener, for that. Thank you, David Gilbert, reporter at Wired. One of his recent pieces titled, The Israel-Hamas War Is Drowning X In Disinformation. That other one we really can't say on the air, but you get the idea. David, thank you so much for coming on. We really appreciate this segment.
David Gilbert: No problem. Thanks for having me.
Copyright © 2023 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.