Thursday, June 12, 2025

Burn Your Risk Register

I mean it. Print it out, set it on fire, and watch the illusion of control go up in smoke. Because that’s mostly all it is—an illusion for those with responsibility without authority.

You know the one I’m talking about: a spreadsheet, 247 rows deep, each line item scored on some flavor of likelihood and impact, color-coded for banal digestion. Orange, yellow, green. Maybe even red, for the brave or foolish. Each “risk” carefully documented so that someone—anyone—can point to it later and say, “We knew.”

But here’s the problem: no one’s acting on it. No one’s funding decisions from it. And no one with actual authority is reading past line three.

Risk registers don’t drive strategy. They satisfy audits. They cover your rear end—but not really. They sit in GRC platforms and rot while the business moves on without them.

And let’s not pretend those scores mean anything. What’s “high likelihood”? What’s “moderate impact”? You think the business actually cares how you weighted “supply chain compromise” versus “legacy DNS exposure”? It’s all manufactured. A math-y placebo. And worst of all—it’s passive.
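To make the placebo concrete, here is a hypothetical sketch of the arithmetic behind a classic 5×5 heat map. The risk names and scores are invented for illustration; the point is that multiplying two ordinal scales collapses very different exposures into the same number.

```python
# Hypothetical illustration: the standard register math is two ordinal
# 1-5 scales multiplied together. Nothing here is calibrated to anything.
def heat_map_score(likelihood: int, impact: int) -> int:
    """Classic 5x5 heat-map cell value: likelihood times impact."""
    return likelihood * impact

# Two unrelated exposures land on the same cell of the grid:
supply_chain = heat_map_score(likelihood=2, impact=5)  # rare but severe
legacy_dns   = heat_map_score(likelihood=5, impact=2)  # constant but minor

print(supply_chain, legacy_dns)  # both score 10
```

A rare catastrophe and a chronic nuisance tie at 10, and the register offers no way to say which one is blocking the business.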

Here’s what I’ve seen over and over again: smart people build these registers thinking they’re helping prioritize, when all they’re really doing is deflecting. They offload judgment onto a grid and a color gradient no one has interrogated. They assume rigor is the same as relevance.

But if everything is a risk and every risk is scored, you’re not prioritizing. You’re documenting. There’s no hierarchy of urgency against opportunity—the real driver of business. No framing of what’s at stake. No narrative thread that tells a decision-maker what matters now, what can wait, and what is blocking strategic momentum.

That’s the job. Not listing risks. Surfacing tradeoffs. Connecting exposures to opportunity cost. Translating security concerns into outcomes that matter outside your team.

But the register makes us feel responsible. It makes us feel like we’ve “captured” the landscape. Like we’ve “done the work.” And if something goes sideways, we can always say: “See? It was right there. Row 56.”

That’s not leadership. That’s liability management.

So burn it. Or at least stop pretending it’s strategy. You don’t need a better list—you need a better lens.


Tuesday, June 3, 2025

People Don’t Compare Speeding to Skipping Toothbrushing

Nobody compares the risk of driving fast to the risk of not brushing their teeth before bed. That would be idiotic. People don’t think that way in real life.

You speed because the thrill outweighs the danger—for you, in that moment. You skip brushing because you’re already in bed and the hassle feels bigger than the consequence. You’re making tradeoffs. Risk versus opportunity. That’s how normal humans operate.

But somehow, in cybersecurity, we built entire frameworks—FAIR being the biggest offender—on a completely unnatural idea: that we compare risks to other risks in a vacuum, as if ranking bad things by modeled loss value somehow leads to good decisions. 

It doesn’t.

FAIR tells you that data breach #1 has a 90th percentile loss of $4.2M, while data breach #2 is $1.8M. You rank them. You prioritize. You feed it into a dashboard. But nowhere in that process does the model encourage—or require—you to ask the most important question: what does this risk sit in opposition to? And because it doesn’t, most practitioners treat the FAIR model alone as sufficient; job done.

Not brushing your teeth isn’t about oral hygiene. It’s about sleep. Speeding isn’t about safety—it’s about the adrenaline, the shortcut, the impulse. In business, it’s the same. We don’t ask “which risk is bigger?” We ask, “what are we trying to achieve, and what are we willing to risk to get there?”

FAIR alone doesn’t answer that. It has no mechanism to encode what we’re trying to achieve. It assumes that if we just quantify risk precisely enough—thus giving the impression of relevance and mathematical authority—the answer will reveal itself to anyone who sees the output. But even when FAIR introduces ranges or Monte Carlo simulations, it’s still building on shaky assumptions and sidestepping the actual decision logic executives use: What objective is at risk? Is this worth it? Not, is this the worst thing on the list?
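For readers who haven’t seen one, here is a minimal, hypothetical sketch of the style of Monte Carlo loss simulation a FAIR-type analysis runs: draw an annual event count, draw a loss magnitude per event, repeat many times, then summarize as a percentile. Every parameter below is invented; this is the mechanics, not a claim about any real model.

```python
# A hypothetical FAIR-style Monte Carlo sketch: Poisson event frequency,
# lognormal loss magnitude, summarized as a 90th-percentile annual loss.
# All distribution parameters are made up for illustration.
import math
import random

random.seed(0)  # reproducible draws for the sketch

def poisson_draw(lam: float) -> int:
    """Knuth's algorithm for a single Poisson-distributed event count."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1

def simulate_annual_loss(freq_mean: float, loss_mu: float, loss_sigma: float,
                         trials: int = 10_000) -> list[float]:
    """One trial = one simulated year: event count times lognormal losses."""
    totals = []
    for _ in range(trials):
        events = poisson_draw(freq_mean)
        totals.append(sum(random.lognormvariate(loss_mu, loss_sigma)
                          for _ in range(events)))
    return totals

def percentile(samples: list[float], q: float) -> float:
    """Nearest-rank percentile of the simulated annual losses."""
    ordered = sorted(samples)
    return ordered[min(len(ordered) - 1, int(q * len(ordered)))]

# Two modeled "risks" ranked by 90th-percentile loss -- exactly the
# risk-versus-risk comparison the model produces and nothing more.
breach_1 = simulate_annual_loss(freq_mean=0.4, loss_mu=13.0, loss_sigma=1.0)
breach_2 = simulate_annual_loss(freq_mean=1.5, loss_mu=11.5, loss_sigma=0.8)
print(f"breach 1 P90: ${percentile(breach_1, 0.90):,.0f}")
print(f"breach 2 P90: ${percentile(breach_2, 0.90):,.0f}")
```

Notice what the output is: two dollar figures, ranked. Nowhere in the loop is there a variable for the objective the risk sits in opposition to, which is the point of the argument above.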

The deeper problem is this: CISOs who can’t articulate business tradeoffs often fall back on models. FAIR becomes a crutch for leaders who haven’t built executive fluency. It replaces relevance with charts. Influence with simulations.

That’s not FAIR’s fault. That’s misuse—or overreliance. I advocate for judgment and communication layered on top of FAIR, not to displace it. But if judgment and communication fail to surface tradeoffs, the time spent on FAIR would have been better spent elsewhere.

Boards don’t want simulated precision. They want to know the tradeoffs, and they want judgment in that context. They want to know what they stand to gain—not just what they stand to lose.

If your risk model isn’t helping you tell that story—risk versus opportunity, not risk versus risk—then your model isn’t helping. It’s noise. It’s spreadsheet theater. And it’s keeping cybersecurity in the basement when it should be at the strategy table.

If we want to lead, we need to talk like people who make decisions—not like spreadsheet jockeys.