What Is OPSEC? The Five Steps Explained
OPSEC (operational security) is a five-step process for protecting critical information from adversaries. Learn its military roots and civilian applications.
Operational security is the discipline of keeping critical information away from adversaries who could weaponize it. The term has military roots, but the five-step process applies to journalists, activists, researchers — anyone with something worth protecting.
OPSEC isn't a product. You can't buy it and install it. It's a habit, a way of evaluating every action and every piece of information you expose before you expose it.
OPSEC Defined: Military Origin, Civilian Adaptation
The term "OPSEC" was coined during the Vietnam War in Operation Purple Dragon (1966), when US military analysts discovered that the enemy wasn't breaking sophisticated codes — they were piecing together unclassified scraps of information (troop movements, supply logs, scheduling signals) to anticipate operations. The fix wasn't encryption. It was discipline about what got communicated at all.
The US government formalized OPSEC in NSDD-298 (National Security Decision Directive 298, 1988), which established it as a five-step process. The process has since been adopted well beyond defense: law enforcement, corporations, journalists, and privacy-aware individuals all use variations of it.
The civilian version strips out the military framing and asks the same core question: what do I know, who wants to know it, and what happens if they find out?
The Five OPSEC Steps
These five steps are sequential. Each one feeds the next.
Step 1: Identify critical information. What information, if compromised, would harm you or someone you're responsible for protecting? Write it down. Be specific. "The identity of my confidential source" is critical information. "My Netflix password" probably isn't.
Step 2: Analyze threats. Who wants your critical information? This is your adversary identification step. It requires the same specificity — "a criminal organization that has previously targeted journalists in this country" is a threat. "Bad actors" is not.
Step 3: Analyze vulnerabilities. How could an adversary get to your critical information? Consider every exposure vector: your devices, your communications, your associates, your routines. A well-constructed threat model maps vulnerabilities systematically rather than guessing.
Step 4: Assess risk. Not all vulnerabilities are equally likely to be exploited, and not all critical information has equal consequences if compromised. Risk = likelihood × consequence (a worked sketch follows this list). This step helps you prioritize.
Step 5: Apply countermeasures. Now you match defenses to risks, in order of priority. Countermeasures should be proportionate — the right tool for the actual threat, not the theoretical worst-case threat.
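To make step 4 concrete, here's a minimal risk-register sketch. The entries and the 1-to-5 scales are invented for illustration; use whatever scale fits your situation.

```python
# Hypothetical risk register for step 4; all entries and scales are invented.
from dataclasses import dataclass

@dataclass
class Risk:
    critical_info: str   # step 1: what needs protecting
    vulnerability: str   # step 3: how it could be exposed
    likelihood: int      # 1 (rare) to 5 (near certain)
    consequence: int     # 1 (minor) to 5 (catastrophic)

    @property
    def score(self) -> int:
        # Step 4's formula: risk = likelihood x consequence
        return self.likelihood * self.consequence

risks = [
    Risk("source identity", "phone metadata links calls to source", 4, 5),
    Risk("source identity", "EXIF data in photographed documents", 3, 5),
    Risk("home address", "real-time location posts on social media", 5, 3),
    Risk("Netflix password", "credential reuse across sites", 4, 1),
]

# Step 5 works down this list: the highest score gets a countermeasure first.
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.score:>2}  {r.critical_info}: {r.vulnerability}")
```

Sorting by score is the bridge into step 5: the highest-scoring risks get defenses first, and anything scoring low may not be worth a countermeasure at all.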
This process isn't a one-time checklist. Run it regularly. Situations change.
OPSEC Is a Habit, Not a Tool
The most common OPSEC failure isn't a technical one. It's the person who uses encrypted messaging for sensitive work but posts their location on Instagram in real time. The tool was right; the habit was wrong.
Real OPSEC is reflexive. Before you share anything — a file, a location, a fact about your work, a name — you ask: who can see this, and could seeing it harm me or someone I'm protecting? That reflex has to apply even when you're tired, distracted, or feel safe.
This is why the "assume you'll get something wrong, plan for it" posture matters. Perfect OPSEC is a fantasy. Your countermeasures should include damage limitation — so that one mistake doesn't cascade into full exposure.
Compartmentalization is the primary tool for containment. Keeping identities, devices, and accounts strictly separated means a breach in one compartment doesn't compromise the others.
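What "strictly separated" means in practice: no shared resources. The sketch below audits a hypothetical compartment map and flags any email, device, or browser profile that appears in more than one identity; every name in it is invented.

```python
# Hypothetical compartment audit: every identity and resource here is invented.
from collections import defaultdict

compartments = {
    "real_identity":  {"email": "jane.doe@gmail.com",
                       "device": "personal-laptop",
                       "browser": "chrome-default"},
    "activist_pseud": {"email": "nightowl@protonmail.com",
                       "device": "travel-laptop",
                       "browser": "tor-browser-profile-a"},
    "research_pseud": {"email": "archivist@tutanota.com",
                       "device": "travel-laptop",          # shared: a leak
                       "browser": "tor-browser-profile-b"},
}

# Index each (kind, value) resource by the identities that use it.
owners = defaultdict(set)
for identity, resources in compartments.items():
    for kind, value in resources.items():
        owners[(kind, value)].add(identity)

# Any resource claimed by more than one identity links those compartments.
for (kind, value), identities in sorted(owners.items()):
    if len(identities) > 1:
        print(f"LEAK: {kind} '{value}' shared by {sorted(identities)}")
```

Here the audit flags the laptop shared by the two pseudonyms: exactly the kind of cross-compartment link that turns one breach into two.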
Common Civilian OPSEC Failures
These patterns appear repeatedly across breach reports, doxxing incidents, and investigative journalism on source exposure. Names and specifics are generalized.
Cross-account email reuse. An activist uses their personal Gmail to sign up for a political forum that gets subpoenaed. The link between the pseudonym and the real identity now exists in Google's records. Countermeasure: a separate email address per identity, on a separate provider, none registered under your real name.
Photo EXIF data. A whistleblower photographs a document and sends the image file. The photo's EXIF metadata contains GPS coordinates of the room, the device model, and a timestamp. Forensic analysis of the document plus the EXIF data narrows the leak to a single building. Countermeasure: strip EXIF data before sending any image (ExifTool does this in one command); use a camera or scanner with GPS disabled.
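For a scripted pipeline, here is a minimal sketch using Python's Pillow library (an assumed dependency; filenames are placeholders) that achieves the same result by rebuilding the image from pixel data alone, so no metadata block survives:

```python
# Rebuild the image from pixel data only; EXIF does not copy over.
# Equivalent ExifTool one-liner: exiftool -all= document_photo.jpg
from PIL import Image

original = Image.open("document_photo.jpg")   # placeholder filename
clean = Image.new(original.mode, original.size)
clean.putdata(list(original.getdata()))       # pixels transfer; metadata does not
clean.save("document_photo_clean.jpg")        # note: re-encoding a JPEG is lossy
```

Verify the output before sending: inspecting the cleaned file with ExifTool should show no GPS, device, or timestamp tags.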
Social-graph inference. An anonymous account on a protest forum interacts only with a small cluster of five other accounts. Even without identifying any individual account, law enforcement can map the structure of the six-account cluster. When one member is later identified through an unrelated channel, that map shrinks the real-world suspect pool for every account in the cluster to the identified person's immediate circle. Countermeasure: keep interactions between your anonymous accounts and your real-identity accounts at zero (not "almost zero"), and remember that interaction patterns among pseudonyms are themselves identifying.
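A toy illustration of how little it takes, sketched with the networkx graph library (an assumed dependency; every account name is invented):

```python
# Hypothetical social-graph inference sketch; all account names are invented.
import networkx as nx

G = nx.Graph()
# Anonymous forum accounts and who interacts with whom.
G.add_edges_from([
    ("anon_target", "acct_1"), ("anon_target", "acct_2"),
    ("anon_target", "acct_3"), ("anon_target", "acct_4"),
    ("anon_target", "acct_5"),
    ("acct_1", "acct_2"), ("acct_3", "acct_4"),
])

# Suppose acct_3 is later tied to a real person through an unrelated channel.
identified = "acct_3"

# The cluster around the identified account becomes the suspect pool
# for every other pseudonym in it.
suspect_pool = nx.node_connected_component(G, identified)
print(sorted(suspect_pool))  # all six accounts fall out of one identification
```

The edges alone do the work; no message content is needed. That's why the countermeasure is zero interaction, not encrypted interaction.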
Stylometric analysis. A pseudonymous researcher's writing style (sentence rhythm, specific vocabulary choices, punctuation habits) is analyzed against a corpus of their public-identity writing and linked with high confidence. The technique is in active use, not hypothetical. Countermeasure: awareness at minimum; deliberate, consistent style modification for sensitive writing.
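To see why this works, here is a deliberately crude sketch comparing function-word frequencies with cosine similarity. Real stylometric systems use far richer features (character n-grams, punctuation, sentence length), and both sample texts below are invented:

```python
# Toy stylometry: cosine similarity over function-word frequencies.
# Real systems use much richer feature sets; this only shows the principle.
import math
from collections import Counter

FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is", "it", "i"]

def profile(text: str) -> list[float]:
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    return [counts[w] / max(len(words), 1) for w in FUNCTION_WORDS]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norms if norms else 0.0

# Invented samples standing in for a public corpus and a pseudonymous post.
public_writing = "it is of course the case that the evidence points in that direction"
pseudonymous_post = "it is i think the case that the data points in that direction"

similarity = cosine(profile(public_writing), profile(pseudonymous_post))
print(f"style similarity: {similarity:.2f}")  # high scores suggest one author
```

Function words are used precisely because they leak style rather than topic, which is why changing subject matter alone doesn't defeat the technique.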
Threat Model — OPSEC as Practice
OPSEC doesn't define a single threat model — it's the framework for building one. The five-step process forces you to articulate your threat model explicitly rather than assume it.
A journalist's OPSEC practice looks different from a researcher's, which looks different from a privacy-aware civilian's. What they share is the discipline of asking the same questions: what's worth protecting, who's trying to get it, what can they do, and what do I do about it?
The EFF's Surveillance Self-Defense provides an accessible, well-maintained framework for civilians building their first OPSEC practice. We recommend starting there before adding tool complexity.
The tools — Tor, Tails, PGP, Signal — are countermeasures in step five. They're meaningless without steps one through four. Most people start with the tools and skip the model. That's backward.
Frequently Asked Questions
Is OPSEC only for journalists and activists?
No. The five-step process applies to anyone who has information they'd prefer to keep private. A small business owner protecting trade secrets, a domestic abuse survivor concealing their location, a researcher accessing politically sensitive material — all have OPSEC concerns. The adversary and countermeasures differ; the framework is the same.
What's the most common OPSEC mistake?
Inconsistency. Most breaches don't come from sophisticated attacks on well-hardened systems; they come from the one time someone got lazy, tired, or overconfident. Using a personal account for one task that was supposed to stay pseudonymous. Sending an unencrypted file because it "probably doesn't matter." The discipline has to be consistent or it doesn't protect you.
How does OPSEC relate to privacy and anonymity?
They overlap but aren't identical. Privacy is the right to control what others know about you. Anonymity is the ability to act without your identity being known. OPSEC is the practice of protecting both by systematically identifying and mitigating exposure. Compartmentalization is where all three meet operationally.
Does good OPSEC require technical skills?
Basic OPSEC — consistent behavior, account separation, metadata awareness — requires almost no technical skill. Advanced OPSEC — running Tails OS, managing PGP keys, configuring Whonix on Qubes — does require technical knowledge. Start with behavioral discipline, then layer in technical tools as your threat model demands.