Late Thursday night, as protests in Minneapolis over the police killing of George Floyd entered their third night and he weighed sending in the National Guard, President Trump tweeted, “Any difficulty and we will assume control but, when the looting starts, the shooting starts.”
That last phrase, used by Miami police chief Walter Headley in 1967 and later echoed by segregationists like George Wallace, prompted Twitter to put a warning label on the tweet. Twitter slapped the same warning on an identical tweet posted shortly thereafter from the official White House account.
“This Tweet violated the Twitter Rules about glorifying violence. However, Twitter has determined that it may be in the public’s interest for the Tweet to remain accessible,” the warning says.
The warning is just the most recent flash point between Twitter and Trump, and the diverging responses from social media companies illustrate competing currents within the tech world.
Earlier in the week, for the first time ever, Twitter appended fact-check links to two of Trump’s tweets, both of which falsely claimed that mail-in ballots would lead to widespread voter fraud. (Twitter declined to label separate tweets in which Trump peddled a baseless conspiracy theory that MSNBC host Joe Scarborough was a murderer.)
Jack Dorsey, Twitter’s CEO, explained the decision to fact-check the president in a post on Wednesday.
“We’ll continue to point out incorrect or disputed information about elections globally. And we will admit to and own any mistakes we make,” Dorsey said. “This does not make us an ‘arbiter of truth.’ Our intention is to connect the dots of conflicting statements and show the information in dispute so people can judge for themselves. More transparency from us is critical so folks can clearly see the why behind our actions.”
After the squabble, Trump retaliated with an executive order seeking to narrow Section 230 of the Communications Decency Act, the law that shields social media companies from liability for content their users post. Without it, the companies could be treated as publishers, legally responsible for everything posted on their websites.
Critics have pointed out that Trump benefits immensely from Section 230. Kate Ruane, senior legislative counsel for the ACLU, told the New York Times, “If platforms were not immune under the law, then they would not risk the legal liability that could come with hosting Donald Trump’s lies, defamation and threats.”
Zeynep Tufekci, an associate professor at the University of North Carolina, thinks Trump’s executive order is aimed at one person: Mark Zuckerberg. Trump benefits enormously from Facebook. He has run ads prolifically on the platform, where they draw millions of engagements from users.
Before COVID-19, much of the Trump campaign centered on collecting voters’ contact information through Facebook ads. Now, with the campaign operating almost entirely in the digital realm, Facebook is even more important.
Both Facebook and Instagram (which is owned by Facebook) left up Trump’s statement about shooting looters.
In a Fox News interview about the Trump-Twitter conflict, Zuckerberg reiterated his long-held position against fact-checking politicians. “We’ve been pretty clear on our policy that we think that it wouldn’t be right for us to do fact checks for politicians,” he told anchor Dana Perino.
However, Facebook’s Community Standards are not much different from Twitter’s rules when it comes to incitement to violence. During testimony before the House Financial Services Committee last October, Zuckerberg told Rep. Alexandria Ocasio-Cortez (D-NY) that if a politician incited violence, Facebook would take the content down:
“If anyone, including a politician, is saying things that can cause or is calling for violence, or could risk imminent physical harm…we will take that content down.”
"If anyone, including a politician, is saying things that can cause, that is calling for violence or could risk imminent physical harm…. we will take that content down."
Zuckerberg told @AOC a few months ago.https://t.co/jANwJdhgmz pic.twitter.com/sbq4gw86n7
— Donie O'Sullivan (@donie) May 29, 2020
In a post from Friday morning, Zuckerberg tried to explain the seeming contradiction.
“I know people are frustrated when we take a long time to make these decisions. These are difficult decisions and, just like today, the content we leave up I often find deeply offensive. We try to think through all the consequences, and we keep our policies under constant review because the context is always evolving,” he said. “People can agree or disagree on where we should draw the line, but I hope they understand our overall philosophy is that it is better to have this discussion out in the open, especially when the stakes are so high.”
The difference between Facebook’s and Twitter’s responses illustrates two competing attitudes in the tech world: a hands-off approach to regulating political speech and a hands-on one. Twitter has taken the latter, though not without internal dissent over Dorsey’s final decision.
Ethan Porter and Thomas J. Wood, two leading scholars of media effects, think Facebook’s approach to fact-checks rests on outdated scholarship. In an essay for Wired, the two argued that Facebook’s preoccupation with “backfire effects” has caused it to stumble.
A backfire effect occurs when fact-checking a false claim causes people to believe the falsehood even more strongly. The theory was widely accepted a few years ago but has since fallen out of favor as scholarship has progressed. Nevertheless, Facebook regularly invokes concerns about backfire effects when addressing disinformation.
“So why would a social media company like Facebook remain stubbornly attached to backfire?” Porter and Wood ask. “It’s possible that the company has evidence that we and our colleagues haven’t seen. Facebook rarely allows external researchers to administer experiments on their platform and publicize the results.”
They continue, “It could be that the best experimental designs for studying fact checks simply do not capture the actual experience of interacting on the platform from one day to the next. Maybe real-world Facebook users behave differently from the subjects in our studies. Perhaps they gloss over fact-checks or grow increasingly inured to them over time. If that’s the case, Facebook may have documented clear evidence of backfire on the platform, and never shared it.”