Artificial intelligence has the potential to revolutionize how drugs are discovered and change how hospitals deliver care to patients. But AI also carries the risk of causing irreparable harm and perpetuating historic inequities.

Would-be health care AI regulators have been spinning in circles trying to figure out how to ensure AI is used safely. Industry bodies, investors, Congress, and federal agencies have been unable to agree on which voluntary AI validation frameworks will help keep patients safe. These questions have pitted lawmakers against the FDA, and venture capitalists against the Coalition for Health AI (CHAI) and its Big Tech partners.

The National Academies on Tuesday zoomed out, discussing how to manage AI risk across all industries. At the event, one in a series of workshops building on the National Institute of Standards and Technology's (NIST) AI Risk Management Framework, speakers largely rejected the notion that AI is a beast so different from other technologies that it needs totally new approaches.
