If It Isn’t Covert, It Isn’t Red Teaming

Real adversaries don’t announce themselves. Red team exercises shouldn’t either.

Red teaming exists to simulate how a real adversary behaves.

Yet when an exercise begins to resemble that reality, many organizations instinctively try to make it “safer.”

That’s when a familiar phrase appears:

“We’re still testing the controls.”

It usually comes up when covert activity is proposed and leadership pushes back. The concern is that the exercise might be disruptive, risky, or too difficult to manage without advance notice.

The alternative is simple: announce the engagement in advance.

The reasoning sounds practical:

  • Nothing will break.
  • Operations won’t be disrupted.
  • Staff won’t be caught off guard.

And technically, something is still being tested.

But that reassurance comes at a cost.


Red teaming is not just another security test


Let’s start with a distinction that is often misunderstood: not all adversarial testing is red teaming.

  • Penetration testing focuses on identifying and validating exploitable vulnerabilities.
  • Purple teaming improves detection and response through collaboration between attackers and defenders.
  • Control testing evaluates whether specific security controls behave as designed.

All of these activities are valuable, but none of them require covert operation.

Red teaming does.

Red teaming evaluates how an organization performs against a realistic adversary without prior defender awareness.

Once defenders are aware an exercise is underway, behavior inevitably shifts — whether intentionally or not. Monitoring increases, decisions become more cautious, and defensive posture tightens.

That isn’t a failure of integrity. It’s simply human nature.


The tension between safety and realism


Resistance to covert red teaming rarely comes from opposition to realism.

More often, it comes from caution:

  • “We don’t want to disrupt operations.”
  • “We already know the controls work.”
  • “We don’t want to stress staff.”
  • “We don’t want results that make the organization look unprepared.”

Covert red teaming does not mean reckless execution. It doesn’t mean ignoring rules of engagement, operating out of scope, or bypassing safety controls.

What it preserves is one critical variable: defender uncertainty.

Without that variable, you’re not observing how your organization actually responds to adversary behavior. You’re validating preparedness under controlled conditions.

Those are not the same thing.


What’s lost without covert execution


When red team activity is announced in advance, an entire class of findings disappears.

Industry data consistently shows that attackers often remain undetected for significant periods of time.

Intelligence reports regularly place median dwell time in the range of weeks — and often months before detection.

That reality is uncomfortable, but it’s also exactly why exercises must preserve uncertainty.

When defenders know an exercise is underway, detection timelines stop reflecting how attackers are actually discovered.

You lose insight into:

  • Whether alerts are detected organically, or only when expected.
  • How quickly responders recognize abnormal behavior without context.
  • How decisions are made when confidence is low and information is incomplete.

You also lose something harder to measure, but more important: credibility.

And when results are presented later, one question inevitably lingers:

Would this have been caught if no one knew the test was happening?

If the answer is maybe, the value of the exercise becomes uncertain.


“But we’re still testing”


This is where the conversation usually stalls.

Yes, something is being tested. However, at this point, the exercise begins testing something different from what leadership believes it is evaluating.

You’re testing:

  • Whether teams can follow a playbook
  • Whether controls behave under cooperative conditions
  • Whether people perform well when they know they’re being observed

You are not testing:

  • Detection under uncertainty
  • Decision-making under pressure
  • Response effectiveness without forewarning

Calling that red teaming doesn’t make it so. It just makes the results easier to digest.


Why covert doesn’t mean unsafe


One of the most persistent misconceptions is that covert red teaming is inherently risky. The concern is that if fewer staff are aware of the exercise, it becomes harder to control and the SOC may miss real alerts.

But that concern reflects a broader reality: alert fatigue is itself a critical condition that organizations need to understand and test.

Mature covert engagements are not uncontrolled. They are built on:

  • Clear objectives
  • Tight scope boundaries
  • Predefined kill switches
  • Continuous oversight
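The safeguards above can be sketched in code. This is a minimal, illustrative model, not an established framework: the class name, fields, and targets are hypothetical, and a real engagement would encode its rules of engagement in far more detail. The point is that scope boundaries and a kill switch are explicit, enforceable constraints, not informal understandings.

```python
from dataclasses import dataclass, field

@dataclass
class EngagementGuardrails:
    """Hypothetical guardrails for a covert engagement (illustrative only)."""
    objective: str
    in_scope_hosts: set = field(default_factory=set)
    kill_switch_engaged: bool = False

    def authorize_action(self, target_host: str) -> bool:
        # A predefined kill switch halts all activity, regardless of scope.
        if self.kill_switch_engaged:
            return False
        # Tight scope boundaries: only explicitly listed targets are allowed.
        return target_host in self.in_scope_hosts

# Example: a narrowly scoped engagement with a fictional objective and targets.
guardrails = EngagementGuardrails(
    objective="Reach the payments segment undetected",
    in_scope_hosts={"10.0.5.12", "10.0.5.13"},
)
print(guardrails.authorize_action("10.0.5.12"))  # in scope -> True
guardrails.kill_switch_engaged = True
print(guardrails.authorize_action("10.0.5.12"))  # halted -> False
```

None of this requires defender awareness: the controls sit with the operators and the oversight group, which is exactly why secrecy and safety are not in tension.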

The risk does not come from secrecy. It comes from poorly designed engagements.

Avoiding covert execution altogether doesn’t eliminate risk. It simply pushes that risk into blind spots that may only surface during a real incident.


Comfort Is Not the Goal


Many organizations express the desire for realistic red team outcomes, but reject the one condition that makes realism possible: operating covertly.

What this usually means is:

“We want realism, as long as it doesn’t challenge our assumptions or create uncomfortable findings.”

It’s easy to see why organizations default to this mindset, and yet it prevents red teaming from doing what it’s designed to do.

Red teams exist to surface uncomfortable truths, not to confirm that everything behaves as expected when everyone is prepared.

If the engagement is designed to be comfortable, the results will be too.


The Real Question


The question isn’t whether covert red teaming is appropriate in every situation. The real question is whether leadership wants insight, or reassurance.

Red teaming doesn’t exist to embarrass teams or create busywork. It exists to reveal how an organization actually performs under adversary pressure.

At its core, it answers an uncomfortable question:

What happens when we’re breached? Are we prepared for it?

If you’re not willing to ask that question, you’re not running a red team. You’re running a different kind of exercise, and that distinction matters.


The Point of Red Teaming


Real attackers don’t schedule exercises, announce timelines, or warn defenders that a test is about to begin.

Red teaming exists to approximate that reality as closely as possible.

And the closer an exercise gets to that reality, the more uncomfortable the findings tend to be.

That discomfort isn’t a flaw in the exercise. It’s the point.