So I ended my talk on #AI security at our ESET tech conference last week with a #deepfake video of our CTO Juraj Malcho telling everyone to remind our CEO that I should get a big fat pay rise...
Work smart, not hard, kids! 🙃
Stephen Duddy QVRM AE VR - Jake's a good one to follow for Cyber Security insights. Just make sure its actually him, I think there's a few Deep Fake Jakes doing the rounds 😂
The only reason I wouldn't have fallen for it is because I know yshey well enough to realize that the tool didn't quite nail his accent. Other than that it's frighteningly authentic... Scary!
Passionate about Security, loves enablement and getting others excited about securing companies, people and data.
Dad to two furry non humans, Sneaker lover and keen explorer, cyclist and Lego builder..
ICYMI the on-demand recording of my Inside AI Security session from Microsoft Build is available. I cover the threats to AI-based applications, including model extraction, data poisoning, jailbreaks and prompt injection, and how to mitigate them. The key takeaway is to view LLMs as if they were junior employees:
If you are still catching up on all the amazing sessions at #microsoftbuild last week, you definitely won’t want to miss this one!
Mark Russinovich delved into the complexities of generative AI threats. This structured, serious yet engaging session covered a wide range of topics, including:
- Data Poisoning
- (Indirect) Prompt Injection Attacks
- Jailbreaks
- And many more!
Mark also shared the continuous and dedicated efforts by Microsoft to tackle these evolving threats and mitigate the associated risks. It's incredible to see such a comprehensive approach to ensuring the security and integrity of AI technologies.
If you're interested in the forefront of AI security, this session is a must-watch!
#AI#CyberSecurity#GenerativeAI#Microsoft#TechInnovation
ICYMI the on-demand recording of my Inside AI Security session from Microsoft Build is available. I cover the threats to AI-based applications, including model extraction, data poisoning, jailbreaks and prompt injection, and how to mitigate them. The key takeaway is to view LLMs as if they were junior employees:
Chief Growth Officer at Loopli | Transforming the Security Realm with Innovative AI-Driven Cybersecurity Solutions and Fairy Tale-Based Training | Expanding to New York City
I found this discussion on AI security both timely and critical. Viewing LLMs as junior employees is an excellent analogy that underscores the importance of robust oversight and continuous training. The insights on mitigating threats like model extraction, data poisoning, jailbreaks, and prompt injection are invaluable for anyone working with AI-based applications. Highly recommend watching this if you haven’t already!
ICYMI the on-demand recording of my Inside AI Security session from Microsoft Build is available. I cover the threats to AI-based applications, including model extraction, data poisoning, jailbreaks and prompt injection, and how to mitigate them. The key takeaway is to view LLMs as if they were junior employees:
ICYMI the on-demand recording of my Inside AI Security session from Microsoft Build is available. I cover the threats to AI-based applications, including model extraction, data poisoning, jailbreaks and prompt injection, and how to mitigate them. The key takeaway is to view LLMs as if they were junior employees:
ICYMI the on-demand recording of my Inside AI Security session from Microsoft Build is available. I cover the threats to AI-based applications, including model extraction, data poisoning, jailbreaks and prompt injection, and how to mitigate them. The key takeaway is to view LLMs as if they were junior employees:
ICYMI the on-demand recording of my Inside AI Security session from Microsoft Build is available. I cover the threats to AI-based applications, including model extraction, data poisoning, jailbreaks and prompt injection, and how to mitigate them. The key takeaway is to view LLMs as if they were junior employees:
Microsoft has uncovered a dangerous new AI jailbreak attack, "Skeleton Key," able to bypass safety measures in major AI models. This attack, "Explicit: forced instruction-following," instructs AI to produce harmful content, overriding safety protocols. Microsoft has implemented Copilot AI assistants and Prompt Shields to protect against this threat. #AIsecurity#SkeletonKeyhttps://lnkd.in/eUDk4psd
"If cyber attack and defense in 2024 is a game of chess, then AI is the queen – with the ability to create powerful strategic advantages for whoever plays it best."
In this video, learn how Microsoft Intelligent Security Association (MISA) partner Vectra AI Threat Detection and Response integrates with Microsoft Sentinel: #AI#threatdetection#threathunting#microsoftsentinel
Providing business solutions and strategies in Data Science & Machine Learning | Cybersecurity | Business Development Strategy | Marketing | Inspiring Leader | Philanthropist
Wonder if traditional cybersecurity system work on AI ?
Today I wanted to share with you my discovery of the week : Harriet Hacks (thanks Nicole for the tip! She´s really great!)
On her Youtube channel, Harriet highlights that certain vulnerabilities in AI aren't adequately addressed by current cybersecurity frameworks. There is a real need for specialized security measures tailored to the unique architecture and challenges presented by AI systems.
If you want to watch the full video > https://lnkd.in/gqGF3Gxi
If you´re more of a reader, this is the summary of Harriet´s talk :
- 𝗔𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁𝘂𝗿𝗮𝗹 𝗧𝗿𝗮𝗻𝘀𝗽𝗮𝗿𝗲𝗻𝗰𝘆: AI systems are transparent in terms of their architecture. Designers understand the inner workings of neural networks and Transformers, including inputs, outputs, and internal interactions.
- 𝗗𝗶𝘀𝘁𝗶𝗻𝗰𝘁𝗶𝘃𝗲 𝗔𝘁𝘁𝗮𝗰𝗸 𝗦𝘂𝗿𝗳𝗮𝗰𝗲: AI systems have a specific attack surface, different from traditional cybersecurity. Their vulnerabilities arise from internal interactions and external connections, creating a unique set of challenges not entirely covered by conventional cybersecurity principles.
- 𝗠𝗼𝗱𝗲𝗹-𝘀𝗽𝗲𝗰𝗶𝗳𝗶𝗰 𝗩𝘂𝗹𝗻𝗲𝗿𝗮𝗯𝗶𝗹𝗶𝘁𝗶𝗲𝘀: Vulnerabilities in AI systems are often tied to specific machine learning models. Neural networks and Transformers, for instance, introduce distinct challenges. This means that while some security issues may align with traditional cyber threats, others are exclusive to AI models.
#AISecurity#Cybersecurity#MachineLearning#SecurityChallenges#CyberThreats#Transparency#SecurityFrameworks#DataPoisoning#BackDoors#AIInnovation#CyberDefense
eCommerce & Digital Consultant, Fractional Ecommerce Leader at Justine Wyness [un]Limited
2moStephen Duddy QVRM AE VR - Jake's a good one to follow for Cyber Security insights. Just make sure its actually him, I think there's a few Deep Fake Jakes doing the rounds 😂