By Invitation | Artificial intelligence

A bioethicist and a professor of medicine on regulating AI in health care

Effy Vayena and Andrew Morris offer three approaches

THE ARTIFICIAL INTELLIGENCE (AI) sensation ChatGPT and rivals such as BLOOM are large language models for consumers (Stable Diffusion, another generative model that has drawn similar attention, produces images rather than text). ChatGPT has caused particular delight since it first appeared in November. But more specialised AI is already used widely in medical settings, including in radiology, cardiology and ophthalmology. Major developments are in the pipeline. Med-PaLM, developed by researchers at Alphabet, Google's parent company, is another large language model. Its 540bn parameters have been trained on data sets spanning professional medical exams, medical research and consumer health-care queries. Such technology means our societies now need to consider the best ways for doctors and AI to work together, and how medical roles will change as a consequence.

The benefits of health AI could be vast. Examples include more precise diagnosis using imaging technology, the automated early detection of diseases through analysis of health and non-health data (such as a person’s online-search history or phone-handling data) and the immediate generation of clinical plans for a patient. AI could also make care cheaper by enabling new ways to assess diabetes or heart-disease risk, such as scanning retinas rather than administering numerous blood tests. And it has the potential to alleviate some of the problems left by covid-19, including drooping productivity in health services and backlogs in testing and care, which plague health systems around the world.

For all the promise of AI in medicine, a clear regime is badly needed to regulate it and the liabilities it presents. Patients must be protected from the risks of incorrect diagnoses, the unacceptable use of personal data and biased algorithms. They should also prepare themselves for the possible depersonalisation of health care if machines are unable to offer the sort of empathy and compassion found at the core of good medical practice. At the same time, regulators everywhere face thorny issues. Legislation will have to keep pace with ongoing technological developments—which is not happening at present. It will also need to take account of the dynamic nature of algorithms, which learn and change over time. To help, regulators should keep three principles in mind: co-ordination, adaptation and accountability.
