Zoe Kleinman’s Post

View profile for Zoe Kleinman, graphic

Technology Editor, BBC News

I've just ploughed through a 132 page report on AI safety so you don't have to (you're welcome!) and it offers a fascinating snapshot of how quickly the conversation has moved on. When I attended the world's first AI Safety Summit in Bletchley Park, in November, I asked whether short-term threats like job losses and bias were in scope for the inaugural discussions between world leaders and tech giants. I was told no, this would be an event focusing on the absolute worst case scenarios: the big, blockbusting, doomsday-style existential threats. Fast forward to May 2024 and the word "safety" has been removed entirely from the title of the gathering, which is now called the AI Seoul Summit (it is being held in S Korea). The latest report, which has just been published by the AI Safety Institute that was set up following Bletchley, says from the get-go that there is an "uncertain" likelihood of any doomsday scenarios. There is "no evidence yet" that gen AI could automate a sophisticated cyber attack or develop a lethal bio weapon, it goes on, and the risk of the complete loss-of-control of an AI system by humans has a "highly contentious" plausibility. I'm slowly climbing back out of my imaginary bunker. That said, the threats the Institute does focus on do rather resonate more with what I was saying six months ago. Issues with bias (the report asks "can AI ever be completely fair?"), copyright and the resulting lack of quality data to train systems on, job losses, and sustainability (AI is a power-hungry beast. The US "may struggle" with the AI sector's demand for electricity, it says). The report, written by AI "godfather" Yoshia Bengio who himself has warned in the past of existential threats, concedes that nobody, including AI developers, really understands why AI generates the output it does. There is currently no good enough way of doing it. It also states that the current go-to process of safety-testing AI tools by "red-teaming" them - that means assembling people to try to force them to do bad things, has no official best practice guidance. Red Team evaluators can display their own bias, it admits. Overall, bias and representation is described as "an unsolved problem" - unfortunately I think we all know that by now. The conclusion of this report isn't exactly a headline writer like mine's dream. The future is uncertain and nothing is inevitable, it basically says. Colour me not surprised! It will be very interesting to see how the conversation is shaped six months from now. Maybe I'll be back in the bunker by then....

Isabella Barbara Tisenhusen

Outlier Ventures | C-level Legal & Operations Executive | New Technologies | Entrepreneur | Speaker & Aspiring Author

2mo

Why did you not use AI to provide you a summary of key points of a long form report? :-)

David Crawford

C-Level Tech Leader and NED

2mo

I never went to the bunker Zoe Kleinman, but then I lived through dot com; open source; 3G; social media; and a bunch of other things that people thought might ‘ruin us’ as a society (although Social Media is giving it a blooming good go!) 😂

Tanya Goodin

Entrepreneur • Tech & AI Ethicist • Bestselling Author • Keynote Speaker.

2mo

Literally every AI conference, panel and report right now can basically be summarised as “this could be great or this could be very bad…we need more time to decide ..”

Dr. Rumman Chowdhury

US Science Envoy, Artificial Intelligence | CEO, Humane Intelligence | Investor | Board Member | Startup founder |TIME 100 AI | ex- Twitter, ex- Accenture

2mo

Would love to chat, Zoe. I was there in November and am in Seoul currently. My concern is that the men who pushed the x risk narrative are now assuming they have the background, skill, and expertise to talk about AI bias and discrimination, impact on jobs - you know, all the things the women and minorities did that we were dismissed for as everyone else wasted the last year preparing for a Terminator that never came. I hope that journalists like yourself figure out that computer scientists do not, in fact, know much about society, economics, geopolitics, civil rights, or global human rights. Instead of featuring yet another “godfather” who discovered yesterday that actually, societal impact is real, and suddenly has an unbaked opinion, I hope you instead seek the women and POC who have spent their careers in this field focusing on the real problems, even when it had tangible consequences.

David Rose

Senior Data Consultant

2mo

The shift in vocabulary around AI risk management is certainly interesting, with "explainable", "responsible", "safe", "ethical" all highlighting slightly different parts. I think we're starting to see a bit of a shift from just talking about "AI needs to be safe/responsible" to governance models, of which there are now a couple (NIST in US, Model Framework from Singapore) along with the EU's nascent AI Act. One of the problems I see is that the space moves very quickly with short feedback loops and no way of putting the genie back in the bottle. If AI gives us net-good or net-bad is one question: the other is how quickly things can change. As you point out with the need for (and problems with) Red Teaming, I think there should be more work on establishing dynamic governance wrappers for these technologies. Despite "safety" chat, it's still a free-for-all and that didn't work so well with social media..

What you've described is the issue I had with the Summit too - fine, get the existential prognosis out on the table but we are living in the here and now. From a cyber perspective alone, while we're talking 5 years into the future, adversaries are using it to tool up, get better, faster, more targeted and ultimately do more harm. Quit the dinner party conversation about worst case scenarios (just me? OK!), governments need to collaborate on today's reality

Mateusz Matusiak

Keep the Place Afloat Officer no more...

2mo

As a non AI expert what I am hearing is: This AI thing could make us trillions of $$$ in profit so lets focus more on that. Our assessment went from "this could be rather detrimental to society" to "we are not sure and need to do more research to see if there is anything there there". While at the same time we the normal people see an explosion of AI tools and articles about incorporating AI into everything. We even have now vacuum cleaners with AI. At this rate my spoon and fork will be AI powered and nobody will bat an eye.

Jeff Jarvis

Author of six books, founder of a magazine (Entertainment Weekly), creator of three journalism degrees (at CUNY). Now (air quotes) "retired" and soon to announce what's next. Available for speaking, boards, consulting

2mo

It is important to put the doomerism in the context of the faux philosophies of #TESCREAL, for this considerably muddies the important discussion of actual risk and cost. The entire discussion about the "safety" team at OpenAI, for example, has turned into a hall of mirrors, where the "safety" people are, it would seem, the more dogmatic about x-risk but the entire company is filled with believers in what I believe is the BS of AGI. Thus the term "safety" is coopted (thus my quotes) such that is loses meaning.

David Oliver

Geopolitical Specialist, Executive Coaching, Development & Leadership.

2mo

I recommend listening to this week's 'The President's Inbox' from the Council on Foreign Relations which covers some of the Pentagon's developing thinking on the use of AI in war. Its a good jargon-buster for understanding how the Pentagon is thinking about when human's are 'in the loop', 'out of the loop', and 'on the loop' in weapons systems and when it is a human or a machine that has final decision making ability in the use of a 'kinetic' (i.e. lethal) response. As referenced from Bletchley, and in an age where all prior nuclear limitations are being consigned to the bin, the only place there seems to be any useful multi-lateral dialogue is in the role of AI (in the use of nuclear weapons). Its one of the few places where the US and China are actually talking to each other at the moment. https://www.cfr.org/podcasts/impact-ai-warfare-andrew-reddie

Jamil El-Imad

My Interests - Metaverse Media for Live Events, Brain Computer Interfaces, Digital Health, Medical Remote Diagnostics, Focus Training, Mindfulness and AI

2mo

The notion of AI safety, in my view, is exaggerated. It’s like advocating for the regulation of scientific calculators or computer programs. They equally can operate autonomously like AI systems and use data that may be bias to make decisions. If an AI system malfunctions or misbehave switch the machine off. As for Yoshia Bengio conceding that nobody, including AI developers, really understands why AI generates the output it does. That is true as AI does not have reasoning hence it should not be given the final say. If anything needs regulating it should be VR since excessive exposure to VR is mind altering especially to children and teenagers.

See more comments

To view or add a comment, sign in

Explore topics