No, It’s Not a 14 Year Old With a Laptop
“It’s not just nation states anymore, it’s 14-year-olds with laptops!”
Recently I've been hearing some version of this statement, usually followed by a spiel on the dangers of AI.
It’s a great headline.
It’s also… not how this works.
There’s a growing narrative that AI has fundamentally changed who has the ability to execute successful cyber attacks. That Large Language Models (LLMs) have lowered the barrier so significantly that anyone with curiosity and access to a computer can now break into enterprise environments.
The reality is far less dramatic and a lot more important to understand correctly.
The skill still matters
AI has absolutely changed the landscape.
It’s faster to:
- Write scripts
- Build or modify tooling
- Generate emails and content
It reduces the “how do I do this?” time.
But, in offensive security it doesn’t replace:
- Knowing what to target
- Understanding how systems actually behave
- Recognizing when something is actually working and adapting when it fails
- Turning individual steps into an attack
This isn’t new.
When Kali Linux was introduced, it made offensive tooling more accessible. When frameworks like Cobalt Strike became widely used, the reaction was similar. Suddenly everything felt more accessible, more dangerous, more out of control.
But those tools didn’t create attackers. They made experienced ones more effective.
AI is a great tool, but you still need technique.
“Anyone can hack now” is an exciting story
This narrative didn’t show up by accident.
It spreads because it works. It’s simple, it’s emotional, and it gets a reaction.
If the threat is “anyone with AI” then the problem feels chaotic and inevitable.
It shifts the focus away from:
- How well you understand your environment
- How realistically you test your controls
- How effectively your teams detect and respond
And turns it into “the world is just more dangerous now and you’re not prepared.”
That’s a great story for headlines and it’s even better for selling solutions.
It’s just not grounded in how attacks actually happen.
What’s actually changed
AI didn’t turn inexperienced people into effective attackers. It made experienced ones faster.
They spend less time figuring out how to do something and more time deciding what to do next.
That’s where the impact is.
That’s why we’re seeing faster iteration, broader coverage, and, in some cases, more novel approaches.
AI helps them:
- Move faster through repetitive steps
- Test more variations in less time
- Generate more convincing social engineering content
- Build and adapt tooling faster, with less upfront effort
It compresses the time between idea and execution.
None of that removes the need for context. AI can suggest targets but it can’t understand your environment unless you do.
That’s the part that gets lost in the narrative.
This isn’t how it works
If you hand an LLM to someone with no experience, they don’t suddenly become effective at building a coherent attack path.
It gives them answers without context.
They still:
- Don’t know where to start
- Don’t know what “working” even looks like
- Don’t know how to recover when things fail
- Don’t understand the environment they’re in
And, most importantly, they can’t turn disconnected steps into something that actually works.
A real attack isn’t a script, it’s a series of intentional decisions under uncertainty.
Chasing the wrong problem
If we reduce the threat to “kids with AI,” we start solving for the wrong problem.
We stop focusing on how attacks actually work and start reacting to whatever the latest narrative is.
Then we’re surprised when nothing holds up under pressure.
But you can’t build better defenses by misunderstanding the attacker. You just build louder, more complex ones, usually with another vendor layered on top.
So, the bigger problem is the constant pivot.
Every trend becomes urgent, every new capability becomes the focus.
And the fundamentals get left behind.
At the same time, we’re introducing new attack surface: AI systems, integrations, and workflows that expand where things can go wrong.
Which makes strong fundamentals even more important, not less.
Those don’t get fixed by chasing trends. The attack path hasn’t changed; it’s just getting executed faster.
If the foundation is weak, speeding things up doesn’t make it better.
It just makes it fail faster.
So what?
AI is changing offensive security. That’s undeniable.
What’s not real is the idea that it’s suddenly made everyone capable.
It’s not removing the need for expertise, it’s accelerating the people who already have it.
If we keep dramatizing the problem, we end up designing for the story and not the actual threats. That’s how gaps stay open.
Because “a 14-year-old with a laptop” isn’t the issue. It’s a narrative that sells.