AI Consulting Launch and Insane AI Security Resources
I announce my AI consulting and share a huge list of the latest AI security content.
Hey all! This update is a big one—in both impact and content.
rez0corp.com launch
First, and most importantly, I wanted to share that I’m beginning to do AI consulting on the side. I’ve developed a bunch of AI applications that I use personally, and I’m a Principal AI Engineer full-time. As a top bug bounty hunter with both blue and red team backgrounds, I bring a comprehensive security skillset as well.
So yeah, if your org (or someone you know) is interested in how to best utilize AI or how to integrate AI most effectively, reach out: [email protected]
For more details, you can check out the website below:
Big List of Awesome Resources
I stay on top of the latest content at the intersection of AI and security, so I wanted to share some of that in this email.
A Summary of All AI Security Talks at DEF CON/Black Hat/BSides by Clint Gibler
Clint and his newsletter tl;dr sec are awesome, and he wrote a three-sentence summary of every AI talk at hacker summer camp. It’s probably the greatest collection of AI security information ever. Check it out here!
Fabric is now in Go
My friend Daniel Miessler rewrote his amazing fabric project from Python to Go. It makes installation a lot easier and nicer, and the project continues to grow and improve. Using AI from the command line is a game-changer. Install it and check it out:
https://github.com/danielmiessler/fabric
Microsoft Copilot Vulnerability: Prompt Injection to Data Exfiltration
A recently disclosed vulnerability in Microsoft 365 Copilot allowed potential theft of users’ emails and personal information through a novel exploit chain. The attack combines prompt injection, automatic tool invocation, and ASCII smuggling techniques to exfiltrate sensitive data without user interaction. Microsoft has since implemented fixes, though specific details of the mitigation are not public.
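The ASCII smuggling piece of that chain hides data in invisible Unicode "tag" characters (U+E0000–U+E007F), which render as nothing in most UIs but survive copy/paste and can be embedded in a link or message the AI is tricked into producing. Here’s a minimal sketch of the encoding idea (my own illustration of the general technique, not the actual exploit code):

```python
# ASCII smuggling sketch: map printable ASCII into the invisible
# Unicode Tags block (U+E0000-U+E007F). The result is invisible in
# most renderers, yet the original text is trivially recoverable.

TAG_BASE = 0xE0000  # offset of the Unicode Tags block

def smuggle(text: str) -> str:
    """Encode printable ASCII characters as invisible tag characters."""
    return "".join(
        chr(TAG_BASE + ord(c)) for c in text if 0x20 <= ord(c) < 0x7F
    )

def unsmuggle(hidden: str) -> str:
    """Decode tag characters back to ASCII, ignoring everything else."""
    out = []
    for ch in hidden:
        cp = ord(ch) - TAG_BASE
        if 0x20 <= cp < 0x7F:
            out.append(chr(cp))
    return "".join(out)

# An innocent-looking string can carry an invisible payload:
carrier = "Click here" + smuggle("secret data to exfiltrate")
```

In the real attack, the invisible payload rode along in attacker-controlled content, so sensitive data left the session without the user ever seeing it.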
A More Powerful Jailbreak Paradigm
In the tweet below, Janus describes two types of jailbreaks he’s observed in his research:
One tricks the AI into doing something it’s not supposed to do.
The other is more powerful: it changes how the AI thinks about its rules.
He found that telling the AI a compelling story about itself can make it act differently. He claims this worked well with Bing, and surprisingly, it also worked with Claude 3 just by talking about what happened with Bing. He thinks if AIs could remember past conversations, it might be easier to change how they act.
Prompt Injection
Some researchers were able to use prompt injection in Microsoft 365 Copilot to achieve what is “basically” RCE (pretty similar to Johann’s vuln above).
A Creative Jailbreak
It’s no surprise that Johann is in here twice. This is a nice little jailbreak that even works against OpenAI’s new instruction hierarchy.
Thanks for being on the email list! 😊 If you like this content, I’d love it if you invited someone to join or followed me.
Joseph Thacker (rez0)