Big internet companies are finally taking misinformation “superspreaders” seriously. (All it took was a global health crisis and the great lie of a rigged election.)
I’ve written about influential people, including former President Donald J. Trump, who have been instrumental in spreading false information online about important topics like election integrity and vaccine safety. Some of those same people have repeatedly distorted the truth — and internet companies have largely given them a pass.
Let’s dig into why habitual misinformation peddlers matter and how internet companies have begun to focus on them — including the new rules put in place by Facebook this week.
Facebook, Twitter and YouTube deserve credit for beginning to target repeat misinformation offenders. But I also want people to be aware of the limits of the companies’ actions and to understand the challenge of applying these policies fairly and transparently.
How big of a problem are people who repeatedly post untrue things?
A lot of stuff that people say online isn’t necessarily true or untrue. We want room for the messy middle. The concern is when information is outright false, and we know that some of the same people are responsible for amplifying that misinformation again and again.
Last fall, a coalition of misinformation researchers found that about half of all retweets related to multiple and widely spread false claims of election interference could be traced back to just 35 Twitter accounts, including those of Mr. Trump and the conservative activist Charlie Kirk. A research group recently identified the accounts of about a dozen people, including Robert F. Kennedy Jr., who repeatedly — sometimes for years — pushed discredited information about vaccines or, more recently, false “cures” for Covid-19.
Until recently, it mostly didn’t matter whether someone posted junk health information or a false election conspiracy theory once or 100 times, or whether the person was Justin Bieber or your cousin with five Facebook followers. Internet companies typically assessed the substance of each message only in isolation. That made no sense.
How policies are starting to focus on these habitual offenders
The riot at the U.S. Capitol on Jan. 6 showed the danger of falsehoods repeatedly uttered to a public inclined to believe them. Internet companies began to address the outsize influence of people with large followings who habitually spread false information.
Facebook on Wednesday said that it would apply stricter punishments on individual accounts that repeatedly post things that the company’s fact checkers have deemed misleading or untrue. Posts from habitual offenders will be circulated less in Facebook’s news feed, which means that others are less likely to see them. In March, it enacted a similar policy for Facebook groups.
Twitter a couple of months ago created a “five strikes” system in which it escalates punishments for those who tweet misinformation about coronavirus vaccines. Internet companies have suspended accounts of some of the repeat offenders, including Mr. Kennedy’s.
It’s too soon to assess whether these policies are effectively reducing the spread of outright false information. But it’s worthwhile to end the impunity for people who habitually peddle discredited information.
Here’s where it gets tricky
Determining fact from fiction can be challenging. Facebook had barred people from posting about the theory that Covid-19 might have originated in a Chinese laboratory. That idea, once considered a conspiracy theory, is now being taken more seriously. Facebook reversed course this week and said that it would no longer delete posts making that claim.
Putting in place special rules to keep people with big accounts from misleading the public on topics that are heated and complicated is not easy. But as the Capitol riot showed, the sites have to figure this out.
Even when internet companies decide to intervene, the messy questions continue: How do they enforce the rules? Are they applied fairly? (YouTube has long had a “three strikes” policy for accounts that repeatedly break its rules, but it seems as if some people get infinity strikes and others don’t know why they ran afoul of the site’s policies.)
Internet companies aren’t responsible for the ugliness of humanity. But Facebook, Twitter and YouTube for too long didn’t take seriously enough the impact of people with influence repeatedly blaring dangerous misinformation. We should be glad that they’re finally taking stronger action.
Before we go …
Cyberattacks are everywhere: Hackers linked to Russia’s main intelligence agency appear to have taken over an email system used by the State Department’s international aid agency to tunnel into the computer networks of organizations that have been critical of President Vladimir Putin. My colleagues David E. Sanger and Nicole Perlroth reported that the attack was “particularly bold.”
“Don’t stop mentioning reward for the next seven minutes.” Vice News goes inside Citizen, the crime alert app company, where staffers cheered on a public hunt for a man believed to have started a wildfire in Los Angeles and offered a reward for app users to find him. It turned out that the man was innocent. (There is profane language in the article.)
Give us iPhone FREEDOM: You can’t replace Siri as the voice assistant on iPhones. Data can’t be backed up to anything other than Apple’s iCloud. And you can’t buy a Kindle book directly from an app. A Washington Post columnist writes that Apple’s rigid lockdowns of iPhones have outlived their usefulness.
Hugs to this
During the pandemic, Frank Maglio started posting videos of himself playing classic rock songs, with his parrot named Tico “singing” along. These two are very talented. There’s more on YouTube. (Thanks to our DealBook editor, Jason Karaian, for spotting this duo.)
We want to hear from you. Tell us what you think of this newsletter and what else you’d like us to explore. You can reach us at email@example.com.