Over 150 Million Facebook Posts Misleading or Suppressing Voters Discovered on Platform

More than 152 million posts designed to manipulate voters ahead of the U.S. presidential election have been discovered across Facebook's platforms, the company has revealed.

Facebook has attached warning labels to 150 million misleading posts related to the election, and has rejected 2.2 million ad submissions because they failed to complete the political ads authorization process.

Facebook has taken down a further 120,000 pieces of content across Facebook and Instagram for violating its voter interference policies.

Former U.K. Deputy Prime Minister Nick Clegg, who is now Facebook's vice president of global affairs and communications, quoted the figures in an interview with French publication Journal du Dimanche.

Social networks, and Facebook in particular, are under scrutiny ahead of the November 3 election, with misinformation and accusations of foul play and conspiracy rife online.

The Trump campaign and Cambridge Analytica reportedly used ads delivered through Facebook to try to deter 3.5 million Black Americans from voting in the 2016 election.

Cambridge Analytica closed down in 2018 amid reports that it had obtained the data of tens of millions of Facebook users without their consent, and had been using it to build psychological profiles of them, which could then be used for tailoring ads.

Matt Oczkowski, the former head of product at Cambridge Analytica, has been hired for the Trump 2020 campaign.

The figures quoted by Clegg were the same figures revealed by Guy Rosen, Facebook's vice president of integrity, in a blog post dated October 7.

Rosen wrote that those figures applied to the period spanning March 2020 to September 2020. A spokesperson for Facebook told Newsweek that these are still its latest statistics.

Clegg said that although Facebook employs 35,000 people to maintain the security of its platforms, it also relies on artificial intelligence, which has "made it possible to delete billions of posts and fake accounts, even before they are reported by users."

More detail on this can be found in Rosen's blog post. "Since 2016, we've built an advanced system combining people and technology to review the billions of pieces of content that are posted to our platform every day," it reads. "State-of-the-art AI systems flag content that may violate our policies, users report content to us they believe is questionable and our own teams review content."

The company has also built a tool called the Crisis Assessment Dashboard, which promises to detect "spikes in hate speech or voter interference content happening in Pages or Groups in near real-time across all 50 states."

Correction 10/19, 6:40 a.m. ET: This article has been corrected to say 2.2 million ads were rejected because they did not complete the correct authorization process, rather than because they intended to obstruct voting.


Uncommon Knowledge

Newsweek is committed to challenging conventional wisdom and finding connections in the search for common ground.
