Security expert warns of AI tools’ potential threat to democracy

Artificial intelligence has the potential to dramatically alter how we gather information, communicate and work. Experts are also raising questions about how it will affect governance and what it will mean for the future of our democracy. Bruce Schneier, a fellow at Harvard University's Berkman Klein Center for Internet and Society, joins William Brangham to discuss.

Notice: Transcripts are machine and human generated and lightly edited for accuracy. They may contain errors.

  • John Yang:

    Artificial intelligence and the popular new AI tool ChatGPT have the potential to influence our lives dramatically, changing how we gather information, how we communicate, even how we work. There are also questions about how it will affect governance and what it means for the future of our democracy. William Brangham has that, and it's part of our periodic series, The AI Frontier.

  • William Brangham:

    Could AI be used to distort democracy not through voting, but through lobbying, using the technology's ability to mimic human communication and language? That's a question raised in a recent New York Times opinion piece by security expert Bruce Schneier. Schneier is a fellow at Harvard University's Berkman Klein Center for Internet and Society and the Belfer Center at the Kennedy School of Government. He's the author of a new book just out called A Hacker's Mind.

    Bruce Schneier, great to have you on the program. When you look at these AI technologies, what is it that most troubles you about their potential threat to democracy?

  • Bruce Schneier, Harvard University:

    Really, where it mimics humans. I mean, democracy is a fundamentally human way of organizing ourselves, and when an AI, whether it's ChatGPT writing human text or another AI figuring out human strategy, can do that at a speed and scale that humans can't, it could take over processes and really subvert the intent of this very human system.

  • William Brangham:

    Can you give me some examples? Like, how would this be used to corrupt the system as you describe it?

  • Bruce Schneier:

    So, one of the things we have in our system is the ability to submit comments. When federal rulemaking agencies publish draft rules, we are allowed to submit comments back, and we humans submit comments. If an AI can submit thousands, millions of comments, it could overwhelm the human comments.

  • William Brangham:

    I mean, this is the ultimate fake astroturf campaign. It's sort of what the Department of Justice accused the Russians of doing in the 2016 election.

  • Bruce Schneier:

    And the Russians had hundreds of people and a million-dollar-a-month budget to do it. What this does is bring that capability down to a lot of other actors. But yes, it's exactly that same thing.

  • William Brangham:

    Your assertion is that if you could suddenly flood the zone with all of these "fake comments or opinions," you could distort what the popular will really is on any given topic.

  • Bruce Schneier:

    That's right. That's how we figure out what people want: we ask them, and they tell us. And we don't ask them in person, we ask them remotely, and they tell us remotely. So having an artificial agent mimic people subverts that process. Other AIs, doing other types of analysis, could figure out which legislators are more susceptible to having their minds changed. I mean, again, these are very human actions. Lobbyists do this, but having an automated process supplant that just gives the capability more power.

  • William Brangham:

    What do you imagine happens if these AI tools are deployed and suddenly there's this overwhelming ocean of comments and notes bombarding our government officials?

  • Bruce Schneier:

    There are two ways that can go. The first is that government officials start ignoring everything. Unless it's face to face, we assume that it's a bot. The other way it could go is that we require people to interact in ways that let us know they're actual people.

  • William Brangham:

    I mean, has this happened? Are there any examples you could point to?

  • Bruce Schneier:

    We know that bots have generated fake tweets. Saudi Arabia did that to support its ruler. There was an instance where the Federal Communications Commission got millions of fake, pretty lousy fake comments on a rulemaking that were obviously generated automatically, not by a sophisticated bot. What ChatGPT does is make them all unique, make them all seem human in a way you just can't do otherwise without an army of people.

  • William Brangham:

    Do you think that government officials are prepared for this potential onslaught? I mean, are there any guardrails or protections that they can put up against this?

  • Bruce Schneier:

    At this point, I don't think anyone's prepared. We're used to humans being the only agents that can do human things. We were all surprised when ChatGPT was writing funny songs and smart commentary on things. I think we'll be surprised again and again by AIs like ChatGPT.

  • William Brangham:

    I mean, I know that there are school officials right now at my kids' high school and college who are trying to develop or deploy technologies that can spot the fake from the real. Do you think that as AI develops, our abilities to detect AI will also increase?

  • Bruce Schneier:

    They will, but it's an arms race. I think the detectors are going to lose. The capabilities of the technologies are going to outpace the detection.

  • William Brangham:

    A devil's advocate question: couldn't this also be used for good? I mean, let's just say that I really care about, I don't know, renewable technologies or the Second Amendment. Couldn't this technology be used to help me get my opinion to legislators and, as you describe, help me figure out the right people, the most important people, to get my opinions to?

  • Bruce Schneier:

    So, I think that's right. And I think we want that. An assistive tool that helps people write or translate or put their ideas down will be phenomenal. To the extent these tools help humans, it's good for society, it's good for democracy. Where it goes wrong is where it supplants humans, where it's a million fake people with fake opinions. A million real people using these tools to be more articulate, that would be a great thing for society.

  • William Brangham:

    All right. Bruce Schneier, the new book is called A Hacker's Mind. Thank you so much for being here.

  • Bruce Schneier:

    Thanks for having me.
