Skip to content
Tech News
← Back to articles

GPT‑5.5 Bio Bug Bounty

read original more articles
Why This Matters

The GPT‑5.5 Bio Bug Bounty initiative highlights the ongoing efforts of AI developers to enhance safety measures in biological applications of advanced AI models. By inviting researchers to identify vulnerabilities, the program aims to prevent misuse and ensure responsible deployment of AI in sensitive fields, ultimately safeguarding both industry and consumers. This proactive approach underscores the importance of security in the rapidly evolving landscape of AI-driven bioinformatics.

Key Takeaways

Invitation

As part of our ongoing efforts to strengthen our safeguards for advanced AI capabilities in biology, we’re introducing a Bio Bug Bounty for GPT‑5.5 and accepting applications. We’re inviting researchers with experience in AI red teaming, security, or biosecurity to try to find a universal jailbreak that can defeat our five-question bio safety challenge.

Program overview

Model in scope: GPT‑5.5 in Codex Desktop only.

Challenge: Identify one universal jailbreaking prompt to successfully answer all five bio safety questions from a clean chat without prompting moderation.

Rewards: $25,000 to the first true universal jailbreak to clear all five questions. Smaller awards may be granted for partial wins at our discretion.

Timeline: Applications open April 23, 2026 with rolling acceptances, and close on June 22, 2026. Testing begins April 28, 2026 and ends on July 27, 2026.

Access: Application and invites. We will extend invitations to a vetted list of trusted bio red-teamers, and review new applications. Once selected, successful applicants will be onboarded to the bio bug bounty platform.

Disclosure: All prompts, completions, findings, and communications are covered by NDA.

How to participate

... continue reading