Deception as a System
How to Red Team Adversarial Intent (Not Just TTPs)
Most cyber “deception” talk is tool-shaped: honeypots, honeytokens, fake credentials, decoys. Don’t get me wrong: I absolutely love deception engineering, from both an offensive and a defensive perspective, and it’s useful. But it’s incomplete.
Military deception doctrine treats deception as something more uncomfortable and more powerful: a disciplined method for shaping an adversary’s decision cycle. Not lying for its own sake. Not novelty for defenders. Decision engineering: define the decision you want the opponent to make, build a coherent story, control what evidence they can observe, and measure whether their behavior changes. As I’ve said in the past: it’s not just being aware of the atmospherics, it’s controlling them.
If you want your red team to feel like an adversary (and to produce learning that actually changes the org), this structure will get you there far faster than ignoring it will.
TTP catalogs are not a theory of adversarial intent. If you only emulate techniques, you test controls. If you emulate intent, you test organizations.
Deception isn’t “a trick.” It’s a target, a story, and a feedback loop.
In doctrine, the target is not a network segment. It’s not a user. It’s not a sensor.
The target is a decision maker (and the system of people and processes around them): the SOC lead, the incident commander, the threat intel analyst, the CISO, the fraud team, the on-call SRE, the legal/comms chain. Whoever decides what’s real, what’s urgent, and what to do next.
A deception plan typically includes (see the sketch after this list):
Objective: what decision/outcome you want.
Target: who must decide.
Story: what you want them to believe.
Evidence plan: what observable signatures will reinforce that belief.
Execution + control: how you manage risk, prevent spillover, and stop.
Assessment: what indicators show the target is adopting the story.
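To make that shape concrete, here’s a minimal sketch of the plan as a data structure. The field names are mine, not doctrine:

```python
from dataclasses import dataclass, field

@dataclass
class DeceptionPlan:
    """Skeleton of a doctrine-style deception plan (illustrative field names)."""
    objective: str   # the decision/outcome you want
    target: str      # who must decide (a role, not a network segment)
    story: str       # what you want them to believe
    evidence_plan: list[str] = field(default_factory=list)  # observable signatures that reinforce the belief
    controls: list[str] = field(default_factory=list)       # risk management, spillover prevention, stop conditions
    assessment: list[str] = field(default_factory=list)     # indicators that the target is adopting the story
```

The value is in the ordering: you have to write down the objective, the target, and the story before any tooling enters the picture.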
So if your red team plan doesn’t explicitly state the belief you’re trying to create or test, you’re not doing deception; you’re doing a technical exercise with vibes. (Let’s make #vibeteaming a thing?)
The West’s doctrine bias: categories, rigor, and “measure the effect.”
Western doctrine (US/NATO flavor) is often described as:
compartmentalized (MILDEC vs OPSEC vs influence, etc.)
process-heavy
measurable
That can read as bureaucratic. But it contains a strength that cybersecurity routinely lacks: a requirement to define effects and collect evidence that the effect happened.
In red teaming, we love “we got domain admin” because it’s crisp. But it’s often a poor proxy for what matters:
Did the defender recognize the true objective?
Did they misallocate time and attention?
Did they prematurely contain the wrong thing?
Did they delay the right escalation?
If you can’t answer those, you didn’t test the organization’s decision quality. You tested whether a path existed. As a side note, anyone who knows me knows that whenever I hear a red team was successful because they “got domain admin,” I die just a little bit on the inside.
The Russian framing: maskirovka, reflexive control, and continuous shaping
Western writing on Russian concepts often emphasizes that deception isn’t a discrete “operation.” It’s a continuum.
Key terms you’ll see:
Maskirovka: broad concealment/misdirection (not just camouflage; a system).
Reflexive control: shaping an opponent’s decisions by feeding cues, frames, and “inevitable” conclusions.
Active measures: influence, disinformation, subversion (especially in the political/social domain).
Translate this to cyber: mature adversaries don’t separate intrusion from narrative.
They may:
make you think it’s “commodity ransomware” to mask a longer strategic objective
create “obvious” IOCs that point to the wrong actor (false flags)
generate alert overload to force shallow triage
manipulate incident tempo so you contain too early (destroying evidence) or too late (letting them complete their objectives)
A red team can emulate this ethically in a controlled way by treating “what defenders believe is happening” as a primary deliverable.
The Chinese framing: stratagems, indirectness, and advantage accumulation
China’s strategic tradition is often described (in Western analysis) as emphasizing:
indirect approaches
long-horizon “condition setting”
shaping the situation so outcomes become “natural” (often discussed as shi—the configuration of advantage)
Even if you never use the vocabulary, the operational lesson for defenders and red teams is clear:
Campaigns matter. Dwell matters. Pre-positioning matters. The most important “move” may happen months before the incident anyone notices. Threat emulation shouldn’t just be “Day 1: phishing, Day 2: lateral movement.”
It should include (see the timeline sketch after this list):
credential seeding and patient reuse
vendor/channel access and trust exploitation
dormant footholds
selective signaling (only enough noise to steer a response)
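As a sketch of what campaign-shaped emulation might look like, here’s a hypothetical timeline; every date and move below is invented for illustration:

```python
from datetime import date

# A hypothetical long-horizon emulation plan. The point is the time axis:
# the decisive moves happen months before the "incident" anyone notices.
campaign = [
    (date(2025, 1, 10), "seed credentials via low-noise phish; do not use them yet"),
    (date(2025, 3, 2),  "establish a dormant foothold through a trusted vendor integration"),
    (date(2025, 5, 18), "quiet credential reuse; validate access, touch nothing sensitive"),
    (date(2025, 8, 4),  "selective signaling: just enough noise to steer the response"),
    (date(2025, 9, 1),  "visible activity begins; this is 'Day 1' to the defender"),
]

for when, move in campaign:
    print(f"{when.isoformat()}  {move}")
```

If your emulation plan fits on one calendar week, you’re testing controls, not campaigns.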
A practical model: the Deception Stack (what to red team)
Here’s a useful way to think about what we’re building.
1. Strategic intent (the “WHY”)
2. Operational objective (WHAT decision/outcome)
3. Story (WHAT they should believe)
4. Evidence plan (WHAT they will observe)
5. Channels/signals (WHERE the evidence appears)
6. TTPs (HOW you generate evidence)
7. Assessment / I&W (HOW you know it’s working)
MITRE ATT&CK is a library of layer 6. Great. But real-world adversaries win at layers 1–5. That’s why orgs can be “ATT&CK-mature” and still get strategically surprised.
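One way to keep the layering honest in a planning doc is to write it down as a structure; here’s a minimal sketch (layer names are straight from the list above):

```python
from enum import IntEnum

class DeceptionStack(IntEnum):
    """The seven layers, top-down. ATT&CK lives at layer 6 only."""
    STRATEGIC_INTENT      = 1  # the WHY
    OPERATIONAL_OBJECTIVE = 2  # WHAT decision/outcome
    STORY                 = 3  # WHAT they should believe
    EVIDENCE_PLAN         = 4  # WHAT they will observe
    CHANNELS_SIGNALS      = 5  # WHERE the evidence appears
    TTPS                  = 6  # HOW you generate evidence (ATT&CK's home)
    ASSESSMENT_IW         = 7  # HOW you know it's working
```

Tagging every planned action with its layer makes it obvious, fast, when a plan is all layer 6.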
Indications & Warning is the missing measurement discipline in most cyber exercises
If you want to know whether deception works, don’t look at your implants.
Look at defender behavior.
Indications & Warning (I&W) is essentially structured thinking about:
what “normal” looks like
what deviations mean
what thresholds trigger a different decision
In a red team exercise, define I&W as: observable defender actions that indicate what they believe.
Examples (see the sketch after this list):
Ticket routing changes (who owns it?)
Severity level changes (how bad do they think it is?)
Containment choice (do they isolate a host vs rotate credentials vs call legal?)
Communication patterns (who do they notify and when?)
Hypotheses in writing (what story are they telling themselves?)
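Here’s a minimal sketch of what tracking this could look like. The observables and belief labels are invented; in a real exercise you’d pull them from your ticketing and comms systems:

```python
# Map observable defender actions to the belief they most plausibly indicate.
IW_INDICATORS = [
    ("ticket routed to ransomware playbook owner", "they believe this is ransomware containment"),
    ("severity raised to SEV-1 within the hour",   "they believe impact is broad and urgent"),
    ("hosts isolated before credentials rotated",  "they believe the host, not identity, is the problem"),
    ("legal and comms notified early",             "they believe disclosure obligations may apply"),
]

def adopted_beliefs(observed: set[str]) -> list[str]:
    """Return the beliefs suggested by the defender actions actually observed."""
    return [belief for action, belief in IW_INDICATORS if action in observed]

# Example: two indicators fired during the exercise.
observed = {
    "ticket routed to ransomware playbook owner",
    "hosts isolated before credentials rotated",
}
for belief in adopted_beliefs(observed):
    print(belief)
```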
This is why deception is such a leverage point: it tests the “thinking layer” of security.
A red team recipe: build an adversary-intent test plan
Here’s a repeatable template you can literally paste into a planning doc.
Step 1: Name the target decision
“During the first 6 hours, will the org treat this as ransomware containment or as identity compromise?”
Step 2: Write the defender priors (their default story)
“We mostly see commodity phishing.”
“This vendor is trusted.”
“If EDR didn’t alert, it’s not serious.”
Step 3: Choose the adversary model
Not “APT vs crime.” Something more precise:
patient access, long dwell, minimal noise
misdirection and false attribution
focus on business process leverage (identity, integrations, finance)
Step 4: Draft the story
“What we want them to believe is happening.”
Step 5: Build the evidence plan
“What will they see that supports the story?”
log artifacts
file names
timing/tempo
fake IOCs
benign-but-suspicious admin actions
Step 6: Control and safety
no real customer impact
kill switch
abort criteria
post-exercise transparency
Step 7: Define I&W and assessment
“What defender behaviors demonstrate belief adoption?”
Step 8: Debrief like an intelligence postmortem
What was the intended belief?
What belief actually formed?
Which signals drove it?
Which processes corrected it?
This is how you produce a report that changes the org.
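To make the template concrete, here’s a hedged sketch of a filled-in plan, using the identity-vs-ransomware decision from Step 1. Every value below is invented for illustration:

```python
plan = {
    "target_decision": "In the first 6 hours, is this treated as ransomware containment or identity compromise?",
    "defender_priors": [
        "We mostly see commodity phishing.",
        "If EDR didn't alert, it's not serious.",
    ],
    "adversary_model": "patient access, long dwell, misdirection toward ransomware",
    "story": "A commodity ransomware affiliate is staging for encryption.",
    "evidence_plan": [
        "ransomware-flavored file names on one staging host",
        "a well-known commodity IOC planted where triage will find it",
        "tempo: a burst of 'pre-encryption' noise, then silence",
    ],
    "controls": ["no real customer impact", "kill switch", "abort criteria", "post-exercise transparency"],
    "iw_assessment": [
        "Does the ticket land with the ransomware playbook owner?",
        "Are hosts isolated while the compromised identity keeps working?",
    ],
    "debrief": [
        "What was the intended belief, and what belief actually formed?",
        "Which signals drove it, and which processes corrected it?",
    ],
}
```

Even as a sketch, writing it out this way exposes the gaps fast: if “story” or “iw_assessment” is empty, you’re back to a technical exercise with vibes.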
A stance (and a warning)
Stance: Red teaming should be judged primarily on whether it reveals decision failure modes, not whether it demonstrates access.
Warning: deception inside an organization can poison trust if it’s not handled with explicit consent and tight boundaries. Use it as a training and measurement tool, not a culture war.
The point isn’t to “trick the SOC.” The point is to make the organization more resilient against adversaries who will absolutely try to shape what you believe.
