OpenAI has launched GPT-5.4-Cyber, a specialized AI model designed to enhance cybersecurity defenses, following mounting scrutiny over AI safety protocols. The company claims the new model’s safeguards ‘sufficiently reduce cyber risk’ for enterprise applications, though independent analysts remain cautious.
The release comes weeks after Anthropic’s Mythos model sparked industry-wide debate over AI alignment risks. OpenAI’s move signals a strategic pivot toward security-focused AI tools, with sources confirming the model underwent red-team testing by third-party cybersecurity firms.
‘This isn’t just about patching vulnerabilities—it’s about rebuilding trust in generative AI systems,’ said a tech analyst familiar with the development. Internal documents seen by WIRED suggest the model incorporates novel threat detection algorithms trained on classified malware datasets.
However, some experts warn that no AI system can guarantee absolute security. The White House AI Council is reportedly drafting new guidelines that could mandate cybersecurity audits for advanced models.