When Lies Go Viral: Meta’s Retreat From Fact-Checking

Photo courtesy of Variety

In 2012, Facebook’s algorithm pushed hate speech that helped fuel the massacre of the Rohingya people in Myanmar. This content, including a widely shared video from the prominent anti-Rohingya monk U Wirathu, inflamed discrimination and animosity against the already marginalized group. Five years later, Burmese security forces launched an ethnic cleansing campaign, killing at least 6,700 Rohingya and forcing some 850,000 to flee the country. Some say the genocide was only possible because of Facebook, whose parent company is now known as Meta. 

The spread of misinformation on Meta’s platforms also fanned the flames of the Capitol riot on January 6th, 2021, with the algorithm feeding users posts that questioned the legitimacy of Joe Biden’s victory. And this past July, violent anti-immigration riots rippled across the United Kingdom, stoked by right-leaning groups that exploited Meta’s platforms to incite further violence.

The Myanmar genocide and the violence in the United States and the United Kingdom underscore the grave harm misinformation can cause and the immense and dangerous power that platforms like Meta wield. In 2016, Meta attempted to combat misinformation by partnering with fact-checkers certified by the non-partisan International Fact-Checking Network, a global coalition that promotes accuracy and transparency in journalism. 

However, on January 7th, Meta announced an end to the partnership. Chief executive Mark Zuckerberg defended the decision, declaring, “It’s time to get back to our roots around free expression on Facebook and Instagram.” He continued, “Governments and legacy media have pushed to censor more and more… But the fact-checkers have just been too politically biased, and destroyed more trust than they’ve created, especially in the U.S.”

Many Republicans have long argued that the fact-checking program disproportionately suppressed right-leaning ideologies. Dr. Megan Duncan, an associate professor in the School of Communication at Virginia Tech, recalled, “For years, we heard conservatives, especially in Congress and the Senate, complain that they were being discriminated against by these fact checks. We don’t have much evidence for that.”

Critics stress that Meta’s move signals a broader shift in social media philosophy—one that prioritizes user engagement over truth and appeases President Trump along with other conservatives. Dr. Duncan reflected, “It represents a change across social media platforms of their philosophy toward attracting a right-wing partisan audience.” 

Dr. Duncan also viewed Meta’s shift as part of a growing trend among media platforms of valuing public consensus over expert analysis. She described it as the belief that “whatever people think is the truth is the truth, and that the average social media user en masse can decide better than experts.” 

In place of the fact-checking program, Meta has adopted Community Notes, a crowdsourced system requiring user consensus before a post can be flagged as false information. However, research indicates this system is highly susceptible to political bias. 

Dr. Duncan’s study found that when subjects were asked whether to weigh in on a post, only those with the most polarizing opinions opted to contribute. The study also found that people were not persuaded to change their partisan opinions about what they deemed true, even when their beliefs were challenged by evidence offered in the Community Notes. 

Some studies report a 97% accuracy rate for Community Notes in COVID-related contexts, yet the system is far slower than traditional fact-checking, since it only flags misinformation once enough users agree. One report found that it took up to 70 hours for relevant notes to appear on a misinformation post about the Israel-Hamas conflict. Moreover, the report found that most notes are not even seen, and that most misinformation does not receive notes.

Dr. Duncan warned that bad actors can weaponize such systems: “We see so many people who view participating in crowdsourcing systems as a chance to play a game where they get to cheerlead for their own side,” she explained. “More than rewarding their own side, people love punishing the opposing party.”

Steven Brill, chief executive of NewsGuard—an organization that rates the reliability of news sources with the help of a team of journalists—was even more blunt in his criticism. Regarding Meta’s decision, he said, “It wasn’t surprising, but it was also a non-event because their fact-checking program was just window dressing anyway. Of course, the thing they’ve replaced it with is even more absurd: Community Notes.” 

According to Brill, Meta’s core business model has always revolved around maximizing engagement and profit, not ensuring the integrity of information. He believes that Meta’s fact-checking program was never reliable. Brill explained, “Their business model wasn’t to catch inflammatory, false stuff. It was to get as many eyeballs as they could. And the way to do that is to publish as many hoaxes as you can.” 

For years, Russian actors launched disinformation campaigns promoting false claims about 5G technology causing brain cancer, which circulated in digital spaces like Facebook. Once the pandemic hit, they swapped out cancer for COVID. According to Brill, Facebook would wait several days to fact-check, so the propaganda would have already gone viral and been seen by millions before eventually getting flagged. He concluded, “They have the same priorities they have had since they founded the company: they just want to make money.” Meta profits from advertising, and the more inflammatory the posts, the higher the user engagement—and the more interactions with ads.

Meta’s decision sends mixed signals about its stance on misinformation: one perspective interprets it as the platform prioritizing free speech over content censorship, while another perceives it as a retreat from accountability. Alan Davidson, Assistant Secretary of Commerce for Communications and Information under the Biden administration, said in an interview with The Politic, “On some level, we are now seeing Meta’s true colors. Mark Zuckerberg has made it clear that Meta does not want to take an active role in being accountable for the harmful content that appears on its network.”

Experts like Dr. Duncan fear that Meta’s decision will have dire consequences, especially in politically sensitive contexts. Elections, public health crises, and moments of civil unrest have historically been breeding grounds for misinformation, with platforms like Meta playing a central role in its dissemination. 

Dr. Duncan recalled past election-related falsehoods: “We’ve seen people attempt to persuade others that election day is a different day, or you don’t have to go to the polls, or that you can vote by text.” She continued, “All of these falsehoods could slowly instill distrust in the government and lead to people not participating in democracy.” 

Davidson echoed these concerns: “It’s very frightening to think we will have few resources to stop the spread of very harmful misinformation during elections, public health crises, or in times of violence that we’ve seen around the world.”

Meta’s decision also triggers pressing questions about platform accountability. Social media companies in the United States are protected from liability under Section 230 of the Communications Decency Act—a provision that shields them from being held responsible for user-generated content. However, as Davidson noted, this legal framework may need to evolve if platforms continue to abdicate responsibility for the content they amplify. He explained, “There’s going to be a very healthy conversation that should happen about whether those liability protections make sense if platforms are not holding up their end of the market.”

He clarified, “I don’t think anybody expected that Meta would be the arbiter of truth and police every piece of content on its platforms, but what it had committed to do was, in egregious cases, to slow the spread of really harmful misinformation.” 

Emerging technologies—particularly artificial intelligence—may play a role in combating misinformation, but AI is still far from a perfect solution. “AI will be a really powerful tool, but there’s still a huge amount we have to learn about how AI will work in this space,” Davidson warned. “There have been prominent examples of AI systems hallucinating on their own and making up facts.” For example, a New York attorney representing a client in Mata v. Avianca used ChatGPT for legal research and submitted a filing containing citations and quotes from cases that did not exist. Such inaccuracies raise concerns about AI’s reliability, at least for now. 

Meta’s retreat from fact-checking provokes questions about what this means for the broader information ecosystem and whether other platforms will follow suit. Brill observed, “X (formerly Twitter) encouraged Meta, and it basically gave everybody an excuse, first, to save money on the few fact-checkers that they have, but second, to have content that is as inflammatory as possible.” 

Davidson summarized, “Companies have every right to choose to create the systems that they want, but part of the bargain that they’ve held out to their users is that these will be safe environments that you want to participate in. Increasingly, that’s not going to be true.” 

Despite these bleak trends, Davidson hopes that Meta’s move “will be a wake-up call for people that they need to be really mindful about their news diet.” He suggested that users should look carefully and attentively at the source of their information. 

In an age of heightened misinformation, digital literacy is more important than ever. “People need to get a lot more savvy about digital literacy. Oftentimes, when things sound a little too good to be true or too weird to be true, they often are,” Davidson said. “I think we need a generational approach to helping people understand what’s real and what’s not.”

For policymakers, he offered a final plea: “Don’t give up on this issue. It’s too important for our country.”