AI, Vulnerabilities, and the Illusion of Control
Why Identity and Resilience Matter More Than Ever
Recent headlines around Anthropic’s latest AI model have sparked predictable reactions — concern, alarm, and in some quarters, outright fear.
The narrative is compelling: AI systems capable of discovering vulnerabilities at unprecedented scale, identifying weaknesses across operating systems, applications, and infrastructure — in some cases exposing issues that have existed, unnoticed, for decades.
This is being framed as a breakthrough. In reality, it is something else entirely.
It is an inevitability.
The Real Issue Isn’t AI — It’s Exposure
The vulnerabilities being discovered are not new. They already exist — embedded in complex systems, legacy code, and interconnected platforms that underpin modern organisations.
What AI changes is not the existence of risk, but the speed, scale, and accessibility of discovering and exploiting it.
Anthropic appear to be handling this responsibly — working with governments and industry, controlling access, and coordinating remediation.
But we should be clear:
the next organisation — or actor — may not.
This Is an Asymmetry Problem
Cybersecurity has always been asymmetric. Attackers need to find one weakness; defenders must secure everything.
AI accelerates that imbalance:
- Vulnerability discovery becomes automated
- Exploit development becomes faster
- The barrier to entry lowers
This is not a future risk. It is happening now.
And it renders one assumption obsolete:
That we can find and fix everything before it is exploited.
Why Identity Becomes the Control Plane
In this environment, traditional security approaches — perimeter defence, vulnerability management, even patching — are necessary but no longer sufficient.
Because ultimately, every successful attack converges on one objective:
Access.
- Escalate privilege
- Move laterally
- Impersonate users or services
- Gain control of identities
This is why Identity and Access Management is no longer a supporting function.
It is the primary control plane for resilience.
Strong identity controls do not eliminate vulnerabilities — but they limit the blast radius when they are exploited.
From Prevention to Resilience
We need to shift the mindset:
From:
- Prevent everything
- Patch everything
- Trust that known risks are the only risks
To:
- Assume vulnerabilities exist
- Assume some will be exploited
- Design systems that contain, withstand, and recover
This is what resilience looks like in practice:
- Least privilege by default
- Strong authentication everywhere
- Segmentation enforced through identity
- Control of non-human identities (increasingly critical)
- Continuous verification, not implicit trust
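The practices above can be made concrete in a few lines. What follows is a minimal, hypothetical sketch — the identity model, field names, and checks are illustrative assumptions, not any particular product's API — showing how least privilege, strong authentication, identity-based segmentation, and per-request verification combine into a single access decision that also applies to non-human identities:

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Identity:
    # Hypothetical identity record; fields are illustrative assumptions.
    name: str
    is_human: bool
    mfa_verified: bool                 # strong authentication
    segment: str                       # identity-based segmentation
    grants: frozenset = field(default_factory=frozenset)  # explicit grants only


def authorise(identity: Identity, resource_segment: str, action: str) -> bool:
    """Re-evaluate every request: no implicit trust carried over from prior access."""
    if not identity.mfa_verified:           # strong authentication everywhere
        return False
    if identity.segment != resource_segment:  # segmentation enforced through identity
        return False
    return action in identity.grants        # deny by default; least privilege


# Non-human identities go through exactly the same checks as human ones.
svc = Identity("ci-runner", is_human=False, mfa_verified=True,
               segment="build", grants=frozenset({"read:artifacts"}))

print(authorise(svc, "build", "read:artifacts"))  # permitted: explicit grant
print(authorise(svc, "build", "write:prod"))      # denied: least privilege
print(authorise(svc, "prod", "read:artifacts"))   # denied: wrong segment
```

The design choice worth noting is the default: nothing passes unless every check does, which is what limits the blast radius when a vulnerability elsewhere is exploited.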
Fighting Fire with Fire
There is a temptation to respond to developments like this with caution or even resistance to AI.
That would be a mistake.
If attackers can use AI to discover and exploit vulnerabilities at scale,
then defenders must use AI to:
- Identify exposure faster
- Prioritise risk more intelligently
- Detect anomalous behaviour earlier
- Automate response and containment
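To make "detect anomalous behaviour earlier" tangible, here is a deliberately simple statistical sketch — a standard-score check against baseline activity. The data, threshold, and scenario are assumptions for illustration; real defensive tooling layers far richer models on the same idea:

```python
import statistics


def anomaly_score(history: list[float], current: float) -> float:
    """How many standard deviations the latest observation sits from baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return (current - mean) / stdev


# Hypothetical telemetry: failed-login counts per hour for one service account.
baseline = [2, 3, 1, 2, 4, 3, 2, 3]

THRESHOLD = 3.0  # assumed alerting threshold

score = anomaly_score(baseline, 45)  # a sudden spike in failures
if score > THRESHOLD:
    # Containment, not just alerting: e.g. suspend the identity's sessions.
    print("anomalous authentication activity: contain and investigate")
```

The point is not the arithmetic but the loop it enables: scoring, alerting, and containment can all run at machine speed, which is the capability-for-capability match the section argues for.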
This is not about fearing AI.
It is about matching capability with capability.
A Moment of Clarity, Not Crisis
Anthropic’s announcement should not be seen as a breaking point for cybersecurity.
It is a moment of clarity.
It exposes what has always been true:
- Systems are more fragile than we assume
- Vulnerabilities are more widespread than we see
- Control is, to some extent, an illusion
The organisations that will succeed are not those that try to eliminate all risk.
They are the ones that:
- Understand it
- Design for it
- Build the capability to operate through it
The Bottom Line
AI has accelerated the problem — but it has also made the solution clearer.
Resilience, anchored in strong identity control, is now the foundation of effective cybersecurity.
And the organisations that embrace this — using AI as part of the solution, not fearing it — will be the ones best placed to navigate what comes next.




