Tech giants have made significant changes ahead of the 2020 presidential election to avoid being accused of facilitating misinformation or fake news in a repeat of 2016.
To tamp down efforts, especially by Russia, to sway elections unfairly or troll voters, Facebook, Google, Twitter, and other major companies have set aggressive new content moderation rules covering misinformation, disinformation, and deepfakes. They have made it easier to report violations and have hired hundreds, sometimes thousands, of employees, alongside automated systems, to monitor problematic activity on their platforms. The social media behemoths have also adopted a new collaborative spirit with relevant government agencies and plan to communicate directly with many more campaigns and candidates in 2020.
“They are more vigilant now than they were in 2016,” said Daniel Kreiss, a political communications professor at the University of North Carolina, Chapel Hill. “The scope and scale of the problem is much more apparent now.”
Kreiss, who researches the effects of technology platforms on politics and the public, said all the social media platforms have spent the years since 2016 trying to think through misinformation and disinformation in all their forms.
“I’m a bit heartened. The platforms also have a lot more coordination and communications directly with campaigns than before. They’re not perfect — but much better now than four years ago,” said Kreiss.
The data speak for themselves. A 2018 study of false news stories on Facebook and Twitter showed that the number of “fake news” engagements on Facebook fell from 160 million at the end of 2016 to 60 million in July 2018.
“Facebook specifically has had several policy changes since 2016 designed to address the misinformation issues,” said Matthew Gentzkow, a co-author of the study and a professor at Stanford University.
“The timeline of the policy changes lines up well, as the engagement of those false stories peaked in 2016 and has been declining since. You don’t see a similar decline in engagement in other types of stories, mainstream stories,” said Gentzkow.
According to Gentzkow’s study, fake news engagement continued to rise on Twitter well into 2018. There doesn’t appear to be a clear explanation for the difference between engagement on Facebook and Twitter. Still, Gentzkow said that, anecdotally, there was less of an effort to combat misinformation on Twitter.
He said this is partly because, after the 2016 election, there was far more pressure and focus on Facebook to change itself than on Twitter.
“There were major efforts by both platforms. Twitter isn’t doing something terrible,” he said.
Twitter says it recently turned on a tool on its website for key moments of the 2020 U.S. election that enables people to report deliberately misleading information about how to participate in an election or other civic event.
“As caucuses and primaries for the 2020 presidential election get underway, we’re building on our efforts to protect the public conversation and enforce our policies against platform manipulation,” a Twitter spokesperson said.
Multiple experts who study social media platforms’ effects on politics told the Washington Examiner that despite the significant improvements in content moderation by platforms such as Facebook and Twitter since 2016, many challenges still lie ahead.
Gentzkow said that most extreme instances of fake news, involving “crazy hoax stories” or conspiracy theories, are not the biggest problem anymore. He said the real issue lies in sensationalist, misleading content on both sides of the aisle that is partially accurate and partially false — misleading readers in a more nuanced fashion.
Such content is much harder to regulate, said Gentzkow, because “it’s a delicate thing — which content people should see and which content people don’t.”
There’s an important distinction between paid speech and organic speech. Content that comes from users themselves, posted for free, is typically organic speech. Paid speech comes in the form of ads from organizations, corporations, and campaigns.
Both types of speech can be used by organizations, campaigns, and foreign governments to deliberately misinform citizens for the manipulating entity’s benefit.
Kreiss, the professor from UNC, said paid speech is much easier to regulate: because the platforms profit from advertisements, they can more readily control who can buy ads and what can and can’t be said. Twitter has banned all political ads, for example, while Facebook does not police the content of political ads at all, Kreiss explained. Google sits somewhere in the middle, evaluating ads on a case-by-case basis.
Organic speech is much more challenging to combat, Kreiss said, because it’s hard to distinguish users who post false content to intentionally manipulate or misinform others from users who post it because they have been convinced to believe the false information themselves.
Ultimately, Kreiss said that the social media platforms “take their responsibility seriously, and it’s trending in the right direction, but there’s still a lot of work to be done because these are really tough problems.”

