AI-generated voices in robocalls are now illegal

A new FCC rule comes as audio deepfakes have gone mainstream

Photo: No robocalls, please. (Jerome Miron-USA TODAY Sports via Reuters)

A federal regulator said Thursday that the law banning unwanted calls made with artificial or prerecorded voices without the prior consent of the called party also covers robocalls that use AI-generated voices.

The Federal Communications Commission cited the Telephone Consumer Protection Act, a 1991 law, in effectively banning unwanted AI-generated voices in robocalls. Callers must now obtain the called party’s permission before placing a call that uses an AI-generated voice.

The agency’s new rule comes as deepfakes using AI-generated voices have proliferated over the past year. Examples include a fake Tom Hanks hawking dental plans online and a fake President Joe Biden urging New Hampshire voters to stay home and “save your vote” by skipping the state’s primary election last month.

Regulators have been cracking down on unwanted calls. The number of robocall complaints reported to the Federal Trade Commission fell to 1.2 million in 2022, down from 1.8 million the year before.

Not all uses of voice-cloning tools come with bad intentions, but some wade into gray areas. New York Mayor Eric Adams, for example, has been sending robocalls in his voice to New Yorkers in languages he does not speak, including Spanish, Yiddish, Mandarin, Cantonese, and Haitian Creole. He has said it is important to reach all New Yorkers. But the practice raises ethical concerns, since it can lead people to believe he is fluent in those languages.