“Meta has always been home to Russian, Chinese, and Iranian disinformation,” says Gordon Crovitz, co-founder of NewsGuard, a company that provides tools for assessing the credibility of information on the internet. “Now Meta has apparently decided to open the floodgates completely.”
Fact-checking isn’t perfect, to be sure; Crovitz says NewsGuard has already tracked several “false narratives” on Meta’s platforms. And the community notes model that Meta will adopt in place of its fact-checking partners could still prove quite effective. But research from Mahadevan and others has shown that crowdsourced solutions miss huge swaths of misinformation. And unless Meta is as transparent as possible about how its version is deployed and used, it will be impossible to tell whether the system is working at all.
It’s also unlikely that switching to community notes will solve the “bias” problem that so worries Meta’s executives, not least because there is little evidence such a problem exists at all.
“The motivation behind both this Meta policy change and Musk’s takeover of Twitter is the accusation that social media companies are biased against conservatives,” says David Rand, a behavioral scientist at MIT. “There’s just no good evidence for it.”
In a recently published paper in Nature, Rand and his co-authors found that while Twitter users who used pro-Trump hashtags in 2020 were more than four times as likely to eventually be suspended as users who used pro-Biden hashtags, they were also significantly more likely to share “low-quality” or misleading news.
“Just because there’s a difference in who gets sanctioned doesn’t mean there’s bias,” Rand says. “Crowd ratings can replicate fact-checkers’ ratings quite well… You’ll still see more conservatives than liberals getting sanctioned.”
And while X gets outsized attention, thanks in part to Musk, it’s worth remembering that it’s an order of magnitude smaller than Facebook, whose 3 billion monthly active users will pose challenges of their own once Meta rolls out its community notes-style system. “There’s a reason there’s only one Wikipedia in the world,” says Mantzarlis. “It’s very hard to crowdsource anything at scale.”
As for Meta’s loosening of its hateful conduct policy, that in itself is an inherently political choice. The platform still allows some things and forbids others; pushing those boundaries to accommodate bigotry doesn’t mean the boundaries no longer exist. It just means Meta is more comfortable with them than it was the day before.
Much depends on exactly how Meta’s system works in practice. But between the moderation changes and the revisions to its community guidelines, Facebook, Instagram, and Threads are heading toward a world where anyone can claim that gay and transgender people suffer from a “mental illness,” where AI slop spreads even more aggressively, where outrageous claims circulate unchecked, and where truth itself is malleable.
You know: just like X.