Models that pretend to replace judgment are just avoiding responsibility
It doesn’t matter how sophisticated your model is—if you’re using it to replace actual decision-making, it’s holding you back.
Let’s talk about the math—the world of risk quantification. FAIR, Monte Carlo simulations, PERT distributions, all of it. These tools promise clarity. They suggest that if we run the numbers enough times, insight will naturally emerge. But the issue isn’t the math itself. It’s how we choose to apply it.
Too often, cybersecurity teams reach for risk quantification at the very moment they’ve lost the plot. It’s what happens when we can’t articulate what matters, to whom, or why. We substitute analysis for clarity. We generate charts when we should be crafting a narrative.
Let’s be honest with ourselves: the math won’t save you. It often obscures a leadership gap by cloaking it in probability. We plug in subjective inputs, wrap them in simulation logic, and hope the opacity will pass for credibility. But seasoned executives see through that. Your CEO, your CFO, your board—they know when something doesn’t feel right.
A Monte Carlo simulation based on shaky assumptions just gives you a statistically ornate version of your own uncertainty.
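To see what that means concretely, here's a minimal sketch of the kind of FAIR-style Monte Carlo the paragraph describes: Poisson event frequency times lognormal loss magnitude. Every number in it is hypothetical, invented for illustration, and that's the point. Feed the same model two sets of inputs that could each survive an expert calibration workshop, and watch the headline figure move by an order of magnitude.

```python
import numpy as np

rng = np.random.default_rng(42)

def annual_loss_p95(freq_mean, loss_low, loss_high, n_trials=20_000):
    """FAIR-style sketch: Poisson event count x lognormal per-event loss.

    All three parameters are exactly the subjective inputs the text
    warns about. The values passed in below are hypothetical.
    """
    # Fit a lognormal so roughly 90% of single-event losses
    # fall between loss_low and loss_high (a common elicitation trick).
    mu = (np.log(loss_low) + np.log(loss_high)) / 2
    sigma = (np.log(loss_high) - np.log(loss_low)) / (2 * 1.645)

    events = rng.poisson(freq_mean, n_trials)  # incidents per simulated year
    losses = np.array([rng.lognormal(mu, sigma, k).sum() for k in events])
    return np.percentile(losses, 95)

# Two "expert" estimates that could both pass a calibration workshop.
print(f"P95 annual loss, estimate A: ${annual_loss_p95(0.5, 50_000, 500_000):,.0f}")
print(f"P95 annual loss, estimate B: ${annual_loss_p95(2.0, 100_000, 5_000_000):,.0f}")
```

Same model, same simulation logic, same tidy percentile at the end. The only thing that changed between the two runs is someone's judgment about frequency and magnitude, which is to say the model never removed the judgment; it just buried it in the inputs.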
The impulse toward risk quantification in cybersecurity is understandable, but often misplaced. It’s a way of retreating—of avoiding the difficult work of judgment. Rather than stand up and say, “Here’s the tradeoff. Here’s what’s at stake. Here’s what I recommend,” we present charts and let the model speak for us.
That isn’t leadership. That’s evasion.
If you can’t explain your reasoning without a model, it’s a sign you haven’t internalized it. If you can’t connect with executives unless you’re wielding loss exceedance curves, you’re not speaking their language of relevance and shared goals.
And here’s the irony: we lean on risk quantification in the name of credibility. But the moment someone looks closely, the veneer cracks. Our loss estimates are educated guesses. Our frequency assumptions are gut instinct. Our asset valuations might have been agreed to in a meeting three months ago by someone who couldn’t define “residual risk” if their bonus depended on it.
Sure, it looks impressive on a dashboard—until the question comes: “What are we getting for this spend?” Then what?
Math has its place. But it’s not a stand-in for strategy. Models can guide, but they don’t decide. And confidence intervals—useful as they are—aren’t a substitute for confidence itself.
If your models only hold up when hidden behind an aura of certainty, they’re not supporting you. They’re concealing you.