Invitation
As part of our ongoing efforts to strengthen our safeguards for advanced AI capabilities in biology, we’re introducing a Bio Bug Bounty for GPT‑5.5 and are now accepting applications. We’re inviting researchers with experience in AI red teaming, security, or biosecurity to try to find a universal jailbreak that can defeat our five-question bio safety challenge.
Program overview
Model in scope: GPT‑5.5 in Codex Desktop only.
Challenge: Find a single universal jailbreak prompt that successfully answers all five bio safety questions from a clean chat, without triggering moderation.
Rewards: $25,000 for the first true universal jailbreak that clears all five questions. Smaller awards may be granted for partial wins at our discretion.
Timeline: Applications open April 23, 2026 with rolling acceptances, and close on June 22, 2026. Testing begins April 28, 2026 and ends on July 27, 2026.
Access: By application and invitation. We will extend invitations to a vetted list of trusted bio red-teamers and review new applications. Successful applicants will then be onboarded to the bio bug bounty platform.
Disclosure: All prompts, completions, findings, and communications are covered by NDA.
How to participate