Thursday, June 12, 2025

Burn Your Risk Register

I mean it. Print it out, set it on fire, and watch the illusion of control go up in smoke. Because that’s mostly all it is—an illusion for those with responsibility without authority.

You know the one I’m talking about: a spreadsheet, 247 rows deep, each line item scored on some flavor of likelihood and impact, color-coded for banal digestion. Orange, yellow, green. Maybe even red, for the brave or foolish. Each “risk” carefully documented so that someone—anyone—can point to it later and say, “We knew.”

But here’s the problem: no one’s acting on it. No one’s funding decisions from it. And no one with actual authority is reading past line three.

Risk registers don’t drive strategy. They satisfy audits. They cover your rear end—but not really. They sit in GRC platforms and rot while the business moves on without them.

And let’s not pretend those scores mean anything. What’s “high likelihood”? What’s “moderate impact”? You think the business actually cares how you weighted “supply chain compromise” versus “legacy DNS exposure”? It’s all manufactured. A math-y placebo. And worst of all—it’s passive.

Here’s what I’ve seen over and over again: smart people build these registers thinking they’re helping prioritize, when all they’re really doing is deflecting. They offload judgment onto a grid and a color gradient no one has interrogated. They assume rigor is the same as relevance.

But if everything is a risk and every risk is scored, you’re not prioritizing. You’re documenting. There’s no hierarchy of urgency weighed against opportunity—the real driver of business. No framing of what’s at stake. No narrative thread that tells a decision-maker what matters now, what can wait, and what is blocking strategic momentum.

That’s the job. Not listing risks. Surfacing tradeoffs. Connecting exposures to opportunity cost. Translating security concerns into outcomes that matter outside your team.

But the register makes us feel responsible. It makes us feel like we’ve “captured” the landscape. Like we’ve “done the work.” And if something goes sideways, we can always say: “See? It was right there. Row 56.”

That’s not leadership. That’s liability management.

So burn it. Or at least stop pretending it’s strategy. You don’t need a better list—you need a better lens.


Tuesday, June 3, 2025

People Don’t Compare Speeding to Skipping Toothbrushing

Nobody compares the risk of driving fast to the risk of not brushing their teeth before bed. That would be idiotic. People don’t think that way in real life.

You speed because the thrill outweighs the danger—for you, in that moment. You skip brushing because you’re already in bed and the hassle feels bigger than the consequence. You’re making tradeoffs. Risk versus opportunity. That’s how normal humans operate.

But somehow, in cybersecurity, we built entire frameworks—FAIR being the biggest offender—on a completely unnatural idea: that we compare risks to other risks in a vacuum, as if ranking bad things by modeled loss value somehow leads to good decisions. 

It doesn’t.

FAIR tells you that data breach #1 has a 90th percentile loss of $4.2M, while data breach #2 is $1.8M. You rank them. You prioritize. You feed it into a dashboard. But nowhere in that process does the model encourage—or require—you to ask the most important question: what does this risk sit in opposition to? And because it doesn’t, most practitioners treat the FAIR model alone as sufficient; job done.
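
To make the critique concrete, here is a toy sketch of what this style of quantification produces (every number, name, and distribution below is invented for illustration; this is not FAIR’s actual taxonomy): simulate annual loss, read off the 90th percentile, rank. Notice that the output has no field anywhere for the question above.

```python
import random

def p90_loss(event_prob, low, mode, high, trials=50_000, seed=1):
    """Toy loss model: chance of one event per year, triangular (three-point) magnitude."""
    rng = random.Random(seed)
    sims = sorted(
        rng.triangular(low, high, mode) if rng.random() < event_prob else 0.0
        for _ in range(trials)
    )
    return sims[int(0.9 * trials)]  # 90th percentile annual loss

# Two hypothetical breach scenarios, ranked by modeled loss alone.
scenarios = {
    "breach_1": p90_loss(0.40, 1_000_000, 3_000_000, 12_000_000),
    "breach_2": p90_loss(0.25, 200_000, 1_000_000, 4_000_000),
}
ranking = sorted(scenarios, key=scenarios.get, reverse=True)
# The model emits a ranking; nothing in it encodes what each risk sits in opposition to.
```

The ranking is the entire deliverable. The objective at stake never enters the computation, so it never leaves it either.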

Not brushing your teeth isn’t about oral hygiene. It’s about sleep. Speeding isn’t about safety—it’s about the adrenaline, the shortcut, the impulse. In business, it’s the same. We don’t ask “which risk is bigger?” We ask, “what are we trying to achieve, and what are we willing to risk to get there?”

FAIR alone doesn’t answer that. It has no mechanism to encode what we’re trying to achieve. It assumes that if we just quantify risk precisely enough—thus giving the impression of relevance and mathematical authority—the answer will reveal itself to anyone who sees the output. But even when FAIR introduces ranges or Monte Carlo simulations, it’s still building on shaky assumptions and sidestepping the actual decision logic executives use: What objective is at risk? Is this worth it? Not: is this the worst thing on the list?

The deeper problem is this: CISOs who can’t articulate business tradeoffs often fall back on models. FAIR becomes a crutch for leaders who haven’t built executive fluency. It replaces relevance with charts. Influence with simulations.

That’s not FAIR’s fault. That’s misuse—or overreliance. I advocate for judgment and communication layered on top of FAIR, not displaced by it. But if judgment and communication fail to surface tradeoffs, the time spent on FAIR would have been better spent elsewhere.

Boards don’t want simulated precision. They want to know the trade-offs, and they want judgment in that context. They want to know what they stand to gain—not just what they stand to lose.

If your risk model isn’t helping you tell that story—risk versus opportunity, not risk versus risk—then your model isn’t helping. It’s noise. It’s spreadsheet theater. And it’s keeping cybersecurity in the basement when it should be at the strategy table.

If we want to lead, we need to talk like people who make decisions—not like spreadsheet jockeys.

Thursday, May 29, 2025

Why Risk Quantification Isn’t Strategy

Models that pretend to replace judgment are just avoiding responsibility

It doesn’t matter how sophisticated your model is—if you’re using it to replace actual decision-making, it’s holding you back.

Let’s talk about the math—the world of risk quantification. FAIR, Monte Carlo simulations, PERT distributions, all of it. These tools promise clarity. They suggest that if we run the numbers enough times, insight will naturally emerge. But the issue isn’t the math itself. It’s how we choose to apply it.

Too often, cybersecurity teams reach for risk quantification at the very moment they’ve lost the plot. It’s what happens when we can’t articulate what matters, to whom, or why. We substitute analysis for clarity. We generate charts when we should be crafting a narrative.

Let’s be honest with ourselves: the math won’t save you. It often obscures a leadership gap by cloaking it in probability. We plug in subjective inputs, wrap them in simulation logic, and hope the opacity will pass for credibility. But seasoned executives see through that. Your CEO, your CFO, your board—they know when something doesn’t feel right.

A Monte Carlo simulation based on shaky assumptions just gives you a statistically ornate version of your own uncertainty.
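
A toy illustration of that point, with every input invented: the analyst’s “event probability” is a gut call, and sliding it across its own plausible range swings the headline percentile from zero to millions.

```python
import random

def p90_annual_loss(event_prob, loss_low, loss_high, trials=50_000, seed=7):
    """Toy Monte Carlo: at most one loss event per year, uniform magnitude."""
    rng = random.Random(seed)
    losses = sorted(
        rng.uniform(loss_low, loss_high) if rng.random() < event_prob else 0.0
        for _ in range(trials)
    )
    return losses[int(0.9 * trials)]  # 90th percentile annual loss

# Same model, same simulation machinery; only the subjective input moves.
low_guess = p90_annual_loss(0.05, 1_000_000, 10_000_000)   # analyst feels "unlikely"
high_guess = p90_annual_loss(0.30, 1_000_000, 10_000_000)  # analyst feels "plausible"
# The 90th percentile jumps from zero to millions on a gut call alone.
```

Fifty thousand simulated years did not add information. They restated one person’s hunch to three significant figures.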

The impulse toward risk quantification in cybersecurity is understandable, but often misplaced. It’s a way of retreating—of avoiding the difficult work of judgment. Rather than stand up and say, “Here’s the tradeoff. Here’s what’s at stake. Here’s what I recommend,” we present charts and let the model speak for us.

That isn’t leadership. That’s evasion.

If you can’t explain your reasoning without a model, it’s a sign you haven’t internalized it. If you can’t connect with executives unless you’re wielding loss exceedance curves, you’re not speaking in their language of relevance and shared goals.

And here’s the irony: we lean on risk quantification in the name of credibility. But the moment someone looks closely, the veneer cracks. Our loss estimates are educated guesses. Our frequency assumptions are gut instinct. Our asset valuations might have been agreed to in a meeting three months ago by someone who couldn’t define “residual risk” if their bonus depended on it.

Sure, it looks impressive on a dashboard—until the question comes: “What are we getting for this spend?” Then what?

Math has its place. But it’s not a stand-in for strategy. Models can guide, but they don’t decide. And confidence intervals—useful as they are—aren’t a substitute for confidence itself.

If your models can’t stand on their own unless hidden behind the aura of certainty, they’re not supporting you. They’re concealing you.


Friday, April 4, 2025

Why Risk Heat Maps Fail Leaders

(Also known as a risk matrix, heat map, or risk heat matrix.)

1. They Strip Risk of Context

Risk is never standalone. It’s entangled in timing, competitive dynamics, stakeholder posture, opportunity cost, and internal politics. Heat maps flatten this complexity—abstracting risks into sterile, color-coded boxes. They decouple decisions from the real-world pressures shaping them.

Executives aren’t making moves off color blocks. They need to see why this risk matters now, in this moment, given what’s at stake.

2. They Rely on Fabricated Scores

“Likelihood” and “impact” scores are often little more than structured guessing—rarely grounded in evidence, scenario modeling, or operational input. Most aren’t validated with those who’d carry the impact when the risk plays out.

These aren’t business consequences—they’re estimates dressed up as data.

Worse: shifting from a 4 to a 5 in likelihood changes nothing in reality, but redraws the map like it’s a turning point.
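
The arbitrariness is easy to demonstrate. Here is a typical 5×5 bucketing scheme, with thresholds invented here exactly the way they usually are in practice:

```python
def heat_map_color(likelihood, impact):
    """Typical 5x5 matrix bucketing -- the thresholds are convention, not evidence."""
    score = likelihood * impact
    if score >= 20:
        return "red"
    if score >= 10:
        return "orange"
    if score >= 5:
        return "yellow"
    return "green"

# Nothing about the underlying exposure changed; only an analyst's gut score did.
before = heat_map_color(4, 4)  # score 16: "orange"
after = heat_map_color(5, 4)   # score 20: "red"
```

One point of subjective likelihood crosses an arbitrary threshold, and the map announces a crisis.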

3. They Imply Action Without Earning It

The red-yellow-green spectrum suggests urgency—but offers no rationale. It’s a visual trigger with no logic behind it. There’s no clarity on thresholds, tradeoffs, or what shifts a risk’s status. The implication: the color should speak for itself.

But color doesn’t move decisions. Understanding does. Tradeoffs do. Timing does.

4. They Frame Risk as the Endpoint

This is the most strategic misstep: presenting risk as something to avoid, rather than to navigate in pursuit of value. The heat map frames risk as the problem—stripped of its connection to growth, innovation, or strategic positioning.

Smart leaders ask: “What are we trying to achieve—and what risks are worth taking to get there?”

Missing entirely: the cost of inaction, or the upside being risked.

Toward Decision-Relevant Risk Framing

Executives don’t need decoration from CISOs. They need decision tools. Tools that:

  • Anchor in business consequences, not assumptions
  • Reveal opportunity cost and reward potential
  • Model uncertainty, velocity, or fragility
  • Provide narratives, not dashboards
  • Create dialogue, not just reporting

A better model might look like a risk-reward portfolio, a strategic options map, or something akin to a Benefit-Harm Analysis—not a compliance heat map.
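
As a sketch of what one row of such a portfolio might capture (the fields and wording below are my own illustration, not a standard):

```python
from dataclasses import dataclass

@dataclass
class RiskRewardEntry:
    """One line of a decision-relevant portfolio: a risk tied to an objective, not floating free."""
    objective: str         # what the business is trying to achieve
    exposure: str          # the security risk taken in pursuit of it
    upside_at_stake: str   # what is gained if we proceed
    cost_of_inaction: str  # what is lost if we don't
    recommendation: str    # the judgment call, stated plainly

# Hypothetical example entry.
entry = RiskRewardEntry(
    objective="Launch partner API by Q3",
    exposure="Third-party OAuth integration widens the attack surface",
    upside_at_stake="Projected new channel revenue",
    cost_of_inaction="Partner signs with a competitor",
    recommendation="Proceed; gate launch on a token-scoping review",
)
```

Unlike a heat-map cell, every field here forces a statement about the business, and the last field forces a judgment.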