Nobody compares the risk of driving fast to the risk of not brushing their teeth before bed. That would be idiotic. People don’t think that way in real life.
You speed because the thrill outweighs the danger—for you, in that moment. You skip brushing because you’re already in bed and the hassle feels bigger than the consequence. You’re making tradeoffs. Risk versus opportunity. That’s how normal humans operate.
But somehow, in cybersecurity, we built entire frameworks—FAIR being the biggest offender—on a completely unnatural idea: that we compare risks to other risks in a vacuum, as if ranking bad things by modeled loss value somehow leads to good decisions.
It doesn’t.
FAIR tells you that data breach #1 has a 90th percentile loss of $4.2M, while data breach #2 is $1.8M. You rank them. You prioritize. You feed it into a dashboard. But nowhere in that process does the model encourage—or require—you to ask the most important question: what does this risk sit in opposition to? And because it doesn’t, most practitioners treat the FAIR model alone as sufficient; job done.
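To make that concrete, here is a minimal sketch of the kind of number a FAIR-style analysis produces. The parameters are invented, and this is a generic frequency-times-magnitude Monte Carlo rather than the full FAIR taxonomy: annual event counts drawn from a Poisson, per-event losses from a lognormal, and a 90th percentile read off the result.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # simulated years

def simulate_annual_loss(event_rate, loss_median, loss_sigma):
    """FAIR-style frequency x magnitude simulation:
    annual event count ~ Poisson, per-event loss ~ lognormal."""
    events = rng.poisson(event_rate, size=N)
    max_events = events.max()
    # Draw a rectangular block of per-event losses, then mask out
    # the draws beyond each year's actual event count.
    per_event = rng.lognormal(np.log(loss_median), loss_sigma, size=(N, max_events))
    mask = np.arange(max_events) < events[:, None]
    return (per_event * mask).sum(axis=1)

# Invented parameters for two breach scenarios.
breach_1 = simulate_annual_loss(event_rate=0.3, loss_median=2_500_000, loss_sigma=1.0)
breach_2 = simulate_annual_loss(event_rate=0.5, loss_median=700_000, loss_sigma=0.8)

for name, losses in (("breach #1", breach_1), ("breach #2", breach_2)):
    print(f"{name}: P90 annual loss = ${np.percentile(losses, 90):,.0f}")
```

Run it and you get two clean-looking dollar figures you can rank. Nothing in the output tells you what either scenario sits in opposition to.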
Not brushing your teeth isn’t about oral hygiene. It’s about sleep. Speeding isn’t about safety—it’s about the adrenaline, the shortcut, the impulse. In business, it’s the same. We don’t ask “which risk is bigger?” We ask, “what are we trying to achieve, and what are we willing to risk to get there?”
FAIR alone doesn’t answer that. It has no mechanism to encode what we’re trying to achieve. It assumes that if we just quantify risk precisely enough, lending the output an air of relevance and mathematical authority, the answer will reveal itself to anyone who reads it. But even when FAIR introduces ranges or Monte Carlo simulations, it’s still building on shaky assumptions and sidestepping the decision logic executives actually use: What objective is at risk? Is this worth it? Not: is this the worst thing on the list?
The deeper problem is this: CISOs who can’t articulate business tradeoffs often fall back on models. FAIR becomes a crutch for leaders who haven’t built executive fluency. It replaces relevance with charts. Influence with simulations.
That’s not FAIR’s fault. That’s misuse, or overreliance. I advocate for judgment and communication layered on top of FAIR, not for displacing it. But if judgment and communication fail to surface tradeoffs, the time spent on FAIR would have been better spent elsewhere.
Boards don’t want simulated precision. They want the tradeoffs, and they want judgment applied to them. They want to know what they stand to gain, not just what they stand to lose.
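Here is the shape of the calculation a board actually runs, sketched with deliberately crude, hypothetical figures. The point isn’t the arithmetic; it’s that none of these numbers mean anything except relative to the objective they’re attached to.

```python
# All figures are hypothetical, chosen only to show the shape of the
# conversation: risk weighed against the objective it threatens.

expected_gain   = 12_000_000  # value of shipping the new product line
expected_loss   =  1_100_000  # modeled annualized cyber loss if we ship as-is
mitigation_cost =  1_500_000  # cost to close the known gaps first
delay_cost      =  3_000_000  # revenue forgone by delaying the launch

ship_now       = expected_gain - expected_loss
mitigate_first = expected_gain - mitigation_cost - delay_cost

print(f"Ship now, accept the risk:   net ${ship_now:,.0f}")
print(f"Mitigate first, ship later:  net ${mitigate_first:,.0f}")
# Neither line is meaningful without the $12M objective both options
# serve. That context is exactly what a loss-ranking model leaves out.
```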
If your risk model isn’t helping you tell that story—risk versus opportunity, not risk versus risk—then your model isn’t helping. It’s noise. It’s spreadsheet theater. And it’s keeping cybersecurity in the basement when it should be at the strategy table.
If we want to lead, we need to talk like people who make decisions—not like spreadsheet jockeys.