Azaz Rasool’s Post


An apt representation, and that's true for any technology innovation. However, the risks are much higher and the impact much broader in the case of AI. #aiforgood #artificialintelligence #ai

Andrew Ng

Founder of DeepLearning.AI; Managing General Partner of AI Fund; Founder and CEO of Landing AI

The effort to protect innovation and open source continues. I believe we're all better off if anyone can carry out basic AI research and share their innovations. Right now, I'm deeply concerned about California's proposed law SB-1047. There are many things wrong with this bill, but I'd like to focus here on just one: It defines an unreasonable "hazardous capability" designation that may make builders of large AI models liable if someone uses their models to do something that exceeds the bill's definition of harm (such as causing $500 million in damage). That is practically impossible for any AI builder to ensure. If the bill is passed, it will stifle AI model builders, especially open source developers.

Some AI applications, for example in healthcare, are risky. But as I wrote previously, regulators should regulate applications rather than technology.
- Technology refers to tools that can be applied in many ways to solve various problems.
- Applications are specific implementations of technologies designed to meet particular customer needs.

For example, an electric motor is a technology. When we put it in a blender, an electric vehicle, dialysis machine, or guided bomb, it becomes an application. Imagine if we passed laws saying, if anyone uses a motor in a harmful way, the motor manufacturer is liable. Motor makers would either shut down or make motors so tiny as to be useless for most applications. If we pass such a law, sure, we might stop people from building guided bombs, but we'd also lose blenders, electric vehicles, and dialysis machines. In contrast, if we look at specific applications, like blenders, we can more rationally assess risks and figure out how to make sure they're safe, and even ban classes of applications, like certain types of munitions.

Safety is a property of the application, not a property of the technology (or model), as Arvind Narayanan and Sayash Kapoor have pointed out. Whether a blender is a safe one can't be determined by examining the electric motor. A similar argument holds for AI.

SB-1047 doesn't account for this distinction. It ignores the reality that the number of beneficial uses of AI models is, like electric motors, vastly greater than the number of harmful ones. But, just as no one knows how to build a motor that can't be adapted to harmful applications, no one knows how to build an AI model that can't be. For open models, there's no known defense against fine-tuning to remove RLHF alignment. And jailbreaking work has shown that even closed-source, proprietary models can be prompted into giving harmful responses. Indeed, the sharp-witted Pliny the Prompter regularly tweets about jailbreaks for closed models. Kudos also to Anthropic's Cem Anil and collaborators for publishing their work on many-shot jailbreaking, an attack that seems hard to defend against.

I hope you will speak out against SB-1047 if you get a chance to do so. [Original text (with links): https://lnkd.in/gtn4H7YK ]

The AI PC Arrives, OpenAI Used For Disinformation, and more

deeplearning.ai
