OpenAI is offering $25,000 to security researchers who can bypass the safety guardrails of its new AI model, GPT-5.5, through a "bio bug bounty" programme. The initiative invites vetted experts to find universal "jailbreak" prompts that defeat the model's biosecurity safeguards, a notable step in subjecting AI safety measures to external adversarial testing.