AI misinformation is rampant as election year looms. These startups think they can help fix the problem.

A presidential figure behind a podium with a question mark, stars, and other shapes in place of the head. Kent Nishimura/Getty, freestylephoto/Getty, Tyler Le/BI
  • AI-generated content is increasingly being used in disinformation campaigns.
  • To combat this, startups are developing content moderation and deepfake detection tools.
  • Scarce funding and a lack of regulation complicate the search for solutions.

In March, a photo of former US President Donald Trump embracing a group of smiling Black voters started circulating online.

While the hyperrealistic image initially fooled swathes of social media users, many were quick to point out its uncanny qualities: missing fingers, extra teeth, and glossy skin — hallmark signs of artificial intelligence-generated images.

A BBC investigation later confirmed that these images were AI-generated, with one created and circulated by conservative radio show host Mark Kaye.

Deepfakes like these are the product of AI used for illicit purposes. They're often hard to distinguish from real photos, videos, or audio, so they've become an easy tool for bad-faith actors. According to verification platform Sumsub, which examined more than two million fraud attempts, the number of deepfakes increased tenfold from 2022 to 2023.

Since the launch of generative AI platforms like ElevenLabs and OpenAI's Sora, users can easily create AI-generated images, video, and audio. The technology has fueled sinister disinformation campaigns targeting voters ahead of elections taking place around the globe in 2024.

But political propagandists aren't the only ones manipulating the technology — scammers are increasingly using AI deepfakes to swindle enterprises. In May, engineering group Arup confirmed that it lost $25 million when fraudsters created a deepfake video of its senior manager supposedly authorizing a transaction, The Financial Times reported.

A problem as complex and fast-evolving as deepfakes has no one-size-fits-all solution, and technology alone can't tackle it. Still, a crop of new startups is attempting to combat AI-based misinformation. Some have developed deepfake detection tools for video and audio; others are deploying AI for content moderation to clamp down on false claims.

Deepfake detection is a burgeoning market: Startups combating tech-enabled misinformation raised $313 million in 2023, up 67% from 2021, per PitchBook data. And this year's deal count remains consistent with previous years, with 11 companies securing investor funding so far, PitchBook data showed.

Content moderation and deepfake detection tools have taken off

Some startups are focused on curbing the spread of misinformation, like the Trump image, before it goes viral.

Guillaume Bouchard launched AI content moderation platform Checkstep in 2020 to identify instances of misinformation on large platforms, such as Twitter and Facebook.

"We deal with everything from impersonation to political misinformation," Bouchard told Business Insider.

Checkstep's AI-based tool flags harmful, violent, or bigoted content, and it has a team of moderators to double-check if these detections are accurate. The startup also partners with NewsGuard, a company that gives readers tools to recognize misinformation and bias, and Logically, a startup that fact-checks and tackles fake news, to verify claims made online.
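
For readers curious what this human-in-the-loop pattern looks like in practice, here is a minimal sketch in Python. The class names and confidence thresholds are illustrative assumptions, not Checkstep's actual implementation: the idea is that the model acts on its own only when it is confident, and routes borderline detections to human moderators.

```python
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    ALLOW = "allow"        # content stays up
    REMOVE = "remove"      # content is taken down automatically
    ESCALATE = "escalate"  # routed to a human moderator for review


@dataclass
class Detection:
    label: str         # e.g. "political_misinformation", "hate_speech"
    confidence: float  # model score in [0, 1]


def triage(detection: Detection,
           auto_remove_at: float = 0.95,
           auto_allow_below: float = 0.30) -> Verdict:
    """Act automatically only at the extremes of model confidence;
    everything in between goes to the human review queue."""
    if detection.confidence >= auto_remove_at:
        return Verdict.REMOVE
    if detection.confidence < auto_allow_below:
        return Verdict.ALLOW
    return Verdict.ESCALATE


# A borderline detection is escalated rather than auto-removed.
print(triage(Detection("political_misinformation", 0.62)))  # Verdict.ESCALATE
```

The width of the escalation band is the main design lever: widening it trades a heavier moderator workload for fewer automated mistakes.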

Bouchard, who secured $5 million in seed funding for Checkstep in 2022, acknowledged that "while it's a new domain to invest in, trust and safety is hard for VCs to understand."

A major pain point for his startup has been accommodating the varied content moderation needs of different companies.

"For example, a dating app will need to have a means to verify chats, unlike Instagram, so the diversity of content is a big challenge," Bouchard told BI.

Guillaume Bouchard, cofounder and CEO of Checkstep. Checkstep

Other startups are creating deepfake detection tools that identify everything from voice cloning to face-swapping. A few weeks after its 2023 launch, synthetic voice startup ElevenLabs was at the center of a misinformation controversy when 4chan users used its tech to create racist and transphobic audio in the voices of Emma Watson and Taylor Swift. This year, the startup announced a detection tool for its own output: users upload audio clips to the speech classifier on ElevenLabs' website, and the tool flags whether they were generated or modified using ElevenLabs' technology.
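
For illustration, a programmatic provenance check of this kind might look like the sketch below. The endpoint URL and response field are hypothetical placeholders, not ElevenLabs' documented API; the real classifier is exposed through the company's website, and any integration should follow the vendor's own documentation.

```python
import requests  # third-party HTTP client: pip install requests

# Hypothetical endpoint for illustration only; this is NOT a real
# ElevenLabs URL. Consult the vendor's docs for the actual interface.
CLASSIFIER_URL = "https://api.example.com/v1/speech-classifier"


def check_audio_provenance(audio_path: str) -> float:
    """Upload an audio clip and return the classifier's estimate of
    the probability that it was produced by the vendor's models."""
    with open(audio_path, "rb") as audio_file:
        response = requests.post(CLASSIFIER_URL, files={"audio": audio_file})
    response.raise_for_status()
    return response.json()["probability_generated"]  # assumed field name


if __name__ == "__main__":
    score = check_audio_provenance("clip.mp3")
    print(f"Likelihood the clip is AI-generated: {score:.0%}")
```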

Founders are also building misinformation tools for enterprise use. Reality Defender, a New York-based startup founded in 2021, aims to help enterprise clients identify deepfakes. It has developed an API and web app that let users analyze content and gauge whether it's been modified by AI.

The startup doesn't deliver an outright verdict on whether something is a deepfake. Instead, it gives users an "inference point" so they can understand the extent to which, and how, something could have been altered by AI, said founder Ben Colman.
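
In code, consuming a graded score of this kind, rather than a yes/no verdict, might look like the following sketch. The cutoffs and wording are illustrative assumptions, not Reality Defender's actual categories.

```python
def interpret_inference_point(score: float) -> str:
    """Translate a manipulation score in [0, 1] into a qualitative band.
    Bands and cutoffs are illustrative, not the vendor's."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be between 0 and 1")
    if score < 0.25:
        return "little evidence of AI manipulation"
    if score < 0.60:
        return "possible AI manipulation; manual review advised"
    return "strong evidence of AI manipulation"


for score in (0.10, 0.45, 0.88):
    print(f"{score:.2f}: {interpret_inference_point(score)}")
```

The middle band is the point: the tool informs a human judgment instead of replacing it, which is what Colman describes.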

"We found that education alone might not be enough for clients to buy this. So we typically make it very personal," Colman said. "We'll use operational tools to create examples of deepfakes at a conference. If you're a legislator and see a deepfake of yourself, you have that visceral reaction of 'Oh my goodness.'"

Reality Defender raised a $15 million Series A round in October, driven in part by increased VC interest in the commercial applications of its technology.

VCs are more prudent about backing these emerging technologies

Not all startups have had a smooth time fundraising.

Dhruv Ghulati launched Factmata, an AI-powered startup built to curb the spread of misinformation and fake news online, in 2017. He understood that his startup wouldn't have the same resources as a Big Tech company to stamp out misinformation, so he hoped to build a product that could be incorporated into larger platforms.

Factmata was acquired by Cision in 2022. Post-acquisition, it pivoted from building a misinformation detection engine to offering businesses insights into "evolving trends and narratives," Ghulati previously told TechCrunch. Tough fundraising had made the business model hard to scale, per the report, so the acquisition looked like the best deal.

Content moderation can be a tough sell for investors because there is little agreement on who is responsible for ensuring misinformation doesn't spread: the platform or the user.

Startups try to target sizable markets and customers willing to pay for this service, but individual consumers are far less likely to pay, one VC told BI.

"What is tricky about the strictly pro-social use cases is that governments and NGOs would potentially pay for it. But it's a smaller universe of deep-pocketed customers," the investor, who spoke on the condition of anonymity, said.

Regulation is part of the solution

Because so many tech companies are involved in the proliferation of AI misinformation, Bruna de Castro e Silva, an AI governance expert at AI safety startup Saidot, said there should be a "collective approach" to tackling the problem, which includes founders, policymakers, and tech conglomerates.

Developers of generative AI technology should be required to allow end users to identify and label any synthetic content, de Castro e Silva said.

Mark Zuckerberg's Meta says it's working on disinformation. Josh Edelson/AFP via Getty Images

Some big companies say they're working on it. Meta, for example, is in talks to set up a team to tackle disinformation and the abuse of generative AI in the run-up to the EU Parliament elections, Reuters reported in February.

Ultimately, the founders grappling with this issue understand that the human psyche is responsible for spreading disinformation just as much as technology is.

Bias is an overwhelming driver of the spread of AI-generated misinformation.

"People want to believe things — even if they have the solution," Checkstep's Bouchard said. "If you look at the academic literature on misinformation, both sides of the political spectrum also spread misinformation, and in the long term, this puts more fuel into the narrative of misinformation of one side."
