How Washington Missed the Boat on AI Regulation

The U.S. Congress missed an opportunity. Instead, it published a road map that fails to address the key challenges posed by new technologies.

By Bhaskar Chakravorti, the dean of global business at Tufts University’s Fletcher School of Law and Diplomacy.
OpenAI CEO Sam Altman (R) appears on a giant screen speaking remotely during a keynote with Nicholas Thompson, CEO of The Atlantic during the International Telecommunication Union (ITU) AI for Good Global Summit, in Geneva, on May 30. Fabrice Coffrini / AFP

“The longer we wait, the bigger the gap becomes.” With those wise words, Senate Majority Leader Chuck Schumer drew attention to an urgent need: closing the gulf between the pace of innovation and the pace of policy development regarding artificial intelligence in the United States.

But then he promptly released an AI policy road map guaranteed to widen that gulf. In May, after a nearly yearlong process of educational briefings, nine “Insight Forums,” and engagement with more than 150 experts, the Schumer-led Bipartisan Senate AI Working Group released a report with few specifics beyond urging the federal government to spend $32 billion a year on (nondefense) AI innovation.

The task of formulating actual regulations and policy designed to build public trust in AI was pushed off to unnamed “relevant committees.”

Even on matters as weighty as national security, the road map concluded with a grab bag of disjointed advice. Overall, it appears that Congress is worried about throwing sand in the gears of the AI industry’s innovation agenda. Its operating mantra appears to be: move slowly and make sure not to break anything.

This raises two concerns. First, it places the United States in stark contrast to the regulation-friendly European Union, which has already enacted AI laws, and to China, whose more specific emerging policy frameworks build on its first-mover status in AI regulation. Second, it reinforces a grossly disproportionate reliance on the private sector for a technology with far-reaching public consequences.

Last year, businesses spent $67.2 billion on AI, while AI-related federal contracts reached a measly $4.6 billion in the period leading up to August 2023, virtually all of it for defense uses. The proposed $32 billion a year in public investment would help reduce that asymmetry, but the contribution of policy ought to go beyond money.

Public support of AI innovation is essential not only to unlock societal benefits and maintain U.S. leadership in the emerging geography of top AI producers but also to build trust in the technology. There is a persistent AI trust gap, stemming from issues including disinformation and bias as well as environmental and labor market impacts, that will hinder productive use of this expensive and extensively hyped technology. Narrowing this trust gap should have been a key goal of the road map, but it missed the opportunity.


The road map encourages considering AI’s impact on the workforce to ensure that American workers are “not left behind.” But it offers little guidance on how this can be done. The lead recommendation, that all stakeholders “are consulted as AI is developed and then deployed by end users,” is impractical given the harsh realities of stakeholder pressures on U.S. firms. Some stakeholders are in a confrontational mode on the issue of AI versus humans in the workplace, as was evident during the extended Hollywood writers’ strike.

In other instances, companies equivocate about whether AI is pushing humans out of their jobs. UPS recently announced the largest layoffs in its 116-year history, due in part to AI replacing humans, according to its CEO on an earnings call with analysts; however, a UPS spokesperson, presumably fearing a public relations gaffe, later denied any connection with AI.

The road map makes no mention of the need to protect workers at disproportionate risk of replacement, such as Black and Hispanic workers, who are overrepresented in the 30 occupations with the highest exposure to automation, or women, 79 percent of whom work in occupations vulnerable to displacement by generative AI, compared with 58 percent of working men.

In the absence of federal AI legislation, states are seeking to fill the vacuum. But the experiences of Connecticut and Colorado show how such locally driven initiatives can get derailed. Bills introduced in Connecticut were intended to address AI-aided discrimination in health care, employment, and housing, but the governor threatened to veto the move, worried that industry might just skip the state and go elsewhere. Industry lobbying helped raise the specter of a similar veto in Colorado as well. These failures are likely to deter other states.

To ensure that AI-aided systems don’t violate existing laws, independent audits of AI models are needed, with enforceable penalties for violations. For example, New York City has an AI bias law that requires employers using AI in hiring to audit those tools for potential race and gender bias, publish the results, and notify employees and job candidates that such tools are being used. A recent Cornell University study found the law toothless: the absence of standards left it riddled with loopholes.

The road map also avoids specific recommendations essential for building trust in AI’s use in key applications. It emphasizes AI’s critical benefits in health care, for instance, but sidesteps proposing clear principles for balancing patient privacy against the release of essential health data from various silos to train algorithms for innovations ranging from drug discovery to clinical practice.

In the criminal justice field, the road map ignores the harsh reality that law enforcement mechanisms have already fallen behind in key areas. For example, the road map raises concerns about AI-aided online child sexual abuse material (CSAM), such as deepfake pornography involving real children’s likenesses, and the need to block it. But the global volume of CSAM reports has already increased by 87 percent since 2019, according to the National Center for Missing & Exploited Children, and enforcement systems haven’t caught up. AI’s use will only make a bad situation worse.


As the 2024 U.S. elections loom, the road map encourages actions to mitigate AI-aided misinformation while still protecting First Amendment rights. But when it comes to actual federal regulations, again, there is a vacuum. Bills remain stalled in Congress; for example, the Senate Rules Committee last month passed three AI-related bills for safeguarding elections, but they haven’t made it to the House or the full Senate, and the road map does little to move them along.

Yet again, individual states have had to fill the void, resulting in a patchwork of rules, or a lack thereof. Some states have passed laws to regulate preelection deepfakes, for example by requiring disclosure of AI use in political ads; others have bills in process; still others have failed. With the elections only months away, of the seven commonly cited battleground states, only two, Michigan and Wisconsin, have enacted such laws.

The road map supports comprehensive federal data privacy laws but, yet again, leaves it to the states to figure out the specifics. Currently, 18 states have enacted comprehensive data privacy laws, while at least 16 states have had no bills addressing the issue. Some companies might simply avoid certain jurisdictions; rather than being enshrined as a citizens’ right, privacy ironically becomes an issue on which the market votes with its feet by walking away.

The road map offers no rules on what data can be used to train algorithms. The question of copyright over such data is being left to the courts, while the AI industry has created its own definition of transparency by declaring certain AI models to be “open source.”

Moreover, the minimum level of transparency needed to encourage AI’s adoption by professionals varies by domain, a practicality that the road map does not address. In some sectors, such as health care, transportation, defense, or financial services, the bar is especially high. For example, radiologists hesitate to embrace AI when they cannot understand how an algorithm makes decisions on medical image segmentation, survival analysis, or prognosis.

The road map encourages companies to “perform detailed testing and evaluation to understand the landscape of potential harms” prior to releasing AI systems but makes no reference to tests and principles proposed elsewhere, such as the EU’s AI Act, the White House executive order on AI safety, the U.K.-led Bletchley Declaration, or the Japan-led Hiroshima AI Process. Proposals elsewhere call for “red-team” attacks, for example, to identify vulnerabilities through simulations. The report could have established standards for exhaustive red-teaming methods and criteria but missed the opportunity.

The report remains silent on fundamental vulnerabilities of AI models that are beyond the reach of either regulators or industry actors, such as the inescapable hallucinations that cause large language models to produce bizarre answers, or the inevitability of errors that developers cannot understand, which gives rise to AI’s “unknown unknowns” problem. The report offers no recommendation on the position that public policy ought to take in response.

Finally, the road map addresses the all-important hazards of the technology’s malicious use by U.S. adversaries. Undoubtedly, the risks are attention-grabbing: In the largest-ever survey of AI and machine learning experts, even 48 percent of net optimists put the probability of human extinction due to AI at 5 percent, and the chances of AI being made to follow illegal commands were expected to be high even in 2043, with the majority of respondents rating it “likely” or “very likely.”

In the meantime, the two largest AI powers, the United States and China, are at loggerheads with each other. FBI Director Christopher Wray has warned about Chinese hacking of essential infrastructure, and other U.S. officials have accused China of state sponsorship of hacking groups, while such hackers have already broken into numerous email systems.

With growing concerns about AI-fueled hacking and security threats, the Biden administration has ratcheted up its adversarial stance, further tightening rules in March to block China’s access to high-end chips and chipmaking tools and thereby undermining its AI capacity. However, the United States and China actually need each other to advance the state of global AI development: for research collaborations and talent; for agreement on standards, data access, and keeping humans in the decision-making loop in nuclear weapons deployment; and for negotiating the necessary trade-offs between the enormous environmental impact of AI computation and the technology’s benefits.

Rather than providing guidance on these nuanced matters, the road map’s geopolitical proposals meander, from defining “artificial general intelligence” to combating the flow of illicit drugs to managing space debris, without articulating clear national security strategies and trade-offs.


Already, Americans have indicated they are increasingly pessimistic about AI’s impact. By not directly addressing the underlying causes and simply recommending that billions of dollars be invested in AI innovation, the Senate missed an opportunity to ensure that those billions would result in wider adoption.

Regulation is not necessarily the enemy of innovation; done right, legislation can complement the innovation process by building trust and translating technology into trusted applications and improved productivity. By moving slowly, Congress might end up breaking something big: the U.S. lead in AI.

The country’s top-ranked status cannot be taken for granted if Americans fall behind in AI’s adoption and productive use. Remember that in an earlier era of tech innovation, the lack of a nationwide cellular standard caused the United States to trail Europe, which quickly adopted the broadly used GSM standard.

It took the genius of Steve Jobs and the Apple iPhone to help the United States catch up. The country may not be as lucky this time around.

Bhaskar Chakravorti is the dean of global business at Tufts University’s Fletcher School of Law and Diplomacy. He is the founding executive director of Fletcher’s Institute for Business in the Global Context, where he established and chairs the Digital Planet research program.
