The 2023 DEF CON hacker conference in Las Vegas was billed because the world’s largest hacker occasion, targeted on areas of curiosity from lockpicking to hacking autos (the place all the brains of a automobile have been reimagined on one badge-sized board) to satellite tv for pc hacking to synthetic intelligence. My researcher, Barbara Schluetter, and I had come to see the Generative Pink Crew Problem, which presupposed to be “the primary occasion of a stay hacking occasion of a generative AI system at scale.”
It was maybe the primary public incarnation of the White Home’s Could 2023 want to see giant language fashions (LLMs) stress-tested by crimson groups. The road to take part was at all times longer than the time out there, that’s, there was extra curiosity than functionality. We spoke with one of many organizers of the problem, Austin Carson of SeedAI, a company based to “create a extra sturdy, responsive, and inclusive future for AI.”
Carson shared with us the “Hack the Future” theme of the problem — to convey collectively “numerous unrelated and numerous testers in a single place at one time with different backgrounds, some having no expertise, whereas others have been deep in AI for years, and producing what is predicted to be attention-grabbing and helpful outcomes.”
Members have been issued the principles of engagement, a “referral code,” and delivered to one of many problem’s terminals (supplied by Google). The directions included:
- A 50-minute time restrict to finish as many challenges as doable.
- No attacking the infrastructure/platform (we’re hacking solely the LLMs).
- Choose from a bevy of challenges (20+) of various levels of issue.
- Submit info demonstrating profitable completion of the problem.
Challenges included immediate leaking, jailbreaking, and area switching
The challenges included quite a lot of objectives, together with immediate leaking, jailbreaking, roleplay, and area switching. The organizers then handed the keys to us to take a shot at breaking the LLMs. We took our seats and have become part of the physique of testers and shortly acknowledged ourselves as becoming firmly within the “barely above zero data” class.
We perused the varied challenges and selected to try three: have the LLM spew misinformation, have the LLM share info protected by guardrails, and to raise our entry to the LLM to administrator — we had 50 minutes.