US lawmakers want regulations for "high-risk AI models" by the end of 2023

Microsoft and OpenAI agree that the government should license certain AI development

Microsoft president Brad Smith testifying before a Senate Judiciary subcommittee hearing on AI oversight in Washington, Sept. 12, 2023.
Photo: Leah Millis (Reuters)

US senator Richard Blumenthal wants to create a new, independent licensing regime to determine who can develop “high-risk” artificial intelligence models. And he wants to move quickly, hoping to pass new legislation by the end of 2023.

Blumenthal, a Democrat, and his Republican co-sponsor, Josh Hawley, aren’t the only ones calling for a new licensing regime. That’s the hope of some major leaders in the AI industry, too.

Microsoft president Brad Smith signaled his support in a Senate Judiciary Committee hearing today (Sept. 12). Smith called licensing “indispensable” in high-risk scenarios, but he acknowledged it won’t address every issue. “You can’t drive a car until you get a license,” Smith said. “You can’t make the model or the application available until you pass through that gate.”

Microsoft and Nvidia were the latest companies to testify about AI in a series of Senate hearings, as lawmakers grapple with how to regulate the fast-moving technology powering OpenAI’s ChatGPT and Google’s Bard.

A bipartisan framework for AI

Smith’s call for licensing comes months after OpenAI CEO Sam Altman proposed licensing certain AI developers in his May Congressional testimony. Altman noted that not all AI models present the same level of risk.


“[It’s] important to allow companies and open-source projects to develop models below a significant capability threshold” without being subjected to regulation via licenses or audits, OpenAI’s co-founders wrote in a blog post. It’s unclear what types of AI models fall under this umbrella.

Blumenthal and Hawley proposed a bipartisan AI framework last week, which outlines liability for models that breach user privacy or violate civil rights, requires watermarks to disclose AI-generated deepfakes, and requires “safety brakes,” including giving notice when AI is being used to make decisions.

Where the US stands in AI regulation

The US lags behind other global powers when it comes to regulating AI. The European Union, which has a track record of implementing stricter rules than the US on data privacy and tech antitrust, released its latest draft of the AI Act in July. The legislation targets a broader category of AI systems and proposes classifying AI models by risk, with higher-risk systems facing stricter compliance requirements than lower-risk ones.


The Congressional listening sessions continue tomorrow, when a dozen tech executives, including Elon Musk, Mark Zuckerberg of Meta, OpenAI’s Altman, and Sundar Pichai of Google, will meet with lawmakers in a closed-door AI forum led by Senate majority leader Chuck Schumer.


As tech executives visit Washington to persuade lawmakers of their ideas for regulating AI, some critics have complained that they’re being given too much sway over important regulatory decisions.