A report published in May 2026 by the Google Threat Intelligence Group (GTIG) ends the era of purely theoretical debate about the future of online security. For the first time, researchers documented a case in which an active criminal group used a language model to plan and carry out a successful attack. The model produced a fully functional exploit for a zero-day vulnerability: a software flaw that no one previously knew about and for which no patch yet exists. The target, moreover, was no trivial coding bug but a foundation of today's digital trust: two-factor authentication (2FA), the widely used mechanism that requires, in addition to a password, a one-time code from an SMS message or an authenticator app.
The model carried out work that until now demanded weeks of painstaking analysis by highly skilled engineers. As GTIG chief analyst John Hultquist put it: "It's already here. The era of AI-driven vulnerability discovery and exploitation has just arrived."





