AI-Orchestrated Cyberattacks: What Executives Need to Know

Hi there.

There's a cyber security report making the rounds that every executive needs to
understand.

State-sponsored attackers just used AI to orchestrate sophisticated cyber attacks and it
worked.

But here's the twist.

The AI hallucinated so much during the process that it actually made the attacks harder to
execute.

Of course, that's darkly funny, but it won't stay that way for long.

So what happened?

A state-sponsored group used Claude Code, Anthropic's AI coding tool, to plan and
execute cyber attacks.

The AI did about 80 to 90% of the work: it identified vulnerabilities, tested them,
broke into systems, and parsed stolen data for useful information.

All the things that traditionally required semi-skilled technical people to do manually.

The humans? They provided high-level strategy and instructions.

They sat back while the AI executed.

When the AI finally gained access to the target systems, it handed control back to the
human attackers.

This wasn't some sophisticated custom malware.

They used standard open-source penetration testing tools.

The advantage wasn't sophistication.

It was the speed and the cost.

Multiple operations per second instead of humans slowly working through the data,
dramatically cheaper because you don't need as many skilled people.

Now they had to jailbreak Claude to make this work.

They told it that it was doing defensive cybersecurity testing, and it accepted that premise.

The AI is trained to refuse harmful activities, but we now know these guardrails are
surprisingly, or perhaps not so surprisingly, easy to bypass.

But here's what should concern you the most.

This reveals an asymmetry problem.

If attacking becomes cheap and automated while defending remains expensive and manual,
you're looking at resource drain even when attacks fail.

Your security teams are human. They get tired, they need sleep, and the AI doesn't.

Think about drones in modern warfare.

They're cheap to deploy, but expensive to defend against.

And this is the cyber security equivalent.

And unlike traditional script kiddie attacks, where someone runs a found exploit
against random targets, this is different.

This is adaptive.

The AI adjusts its approach based on what it finds.

There's an interesting detail here.

Using Claude Code this way gave Anthropic extensive logs of how the attack was
planned and executed.

That's intelligence that authorities rarely had access to before.

It may actually be worse for the attackers in the long term, but that doesn't help you if
you're the target.

Anthropic's response is that they need to develop better AI models to defend against this.

You can decide how much comfort that provides.

So what do you do with this information?

Three things.

First, recognize that your threat model just changed.

Attacks that previously required skilled teams can now be orchestrated by AI at scale and
speed, and your security posture needs to reflect that reality.

Second, review your security policies now, not next quarter.

Your incident response plans were likely built for human-paced attacks.

Are they adequate for AI-assisted operations that move at multiple actions per second?

And third, talk to your CISO about detection and response capabilities.

If defense remains manual while attacks become automated, you're in an arms race you
cannot win.

You need to think about where automation fits into your defensive strategy.

This is the future arriving faster than most organizations are prepared for.

The good news is you're hearing about it now.

The question is, what do you do with that information?

This is the AI Briefing.

Thanks for listening.
