Showing posts with label risk assessment. Show all posts

Sunday, October 20, 2024

Driving Change in Risk Management with Stakeholder-Enhanced Risk Assessments (SERA)

Driving impactful change in risk management starts with engaging the right people. Stakeholder-Enhanced Risk Assessments (SERA) reshape how organizations understand and address risk by involving business stakeholders and cybersecurity specialists in the conversation. This collaboration transforms dry, technical risk data into relatable and relevant business insights. The result? Early, pragmatic solutions that cut costs, reduce complexity, and secure buy-in from decision-makers.

SERA involves managers and directors from both risk-generating and risk-impacted departments. This integrated approach uncovers how cybersecurity or technical risks affect business objectives, operations, and processes, with the functions that create those risks in the room when the risks are discovered.

Core elements of SERA for an effective risk dynamic:

  • Engagement and Insight Gathering: By incorporating stakeholder perspectives, SERA reveals how risks intersect with broader business interests—even when they appear contradictory.
  • Tailored Risk Discussions: Facilitators connect cybersecurity risks with business outcomes, embedding risk awareness into the organization's mindset.
  • Collaborative Planning: Techniques like 'Pre-Mortem Assessments' help stakeholders identify risks early by examining potential failure points. These insights are then integrated into a comprehensive team-wide risk assessment process.

The benefits of SERA extend beyond traditional risk management approaches, providing several key advantages:

  • Tailored Risk Communication: SERA reframes risks in ways that resonate with each department and decision-makers, presenting them in the context of their impact on key business priorities. This approach makes risk discussions more persuasive, relevant, and actionable.
  • Shared Risk Discovery: Collaborative discussions surface risks that, once visible, become impossible to ignore, offering deeper and more far-reaching insight than a traditional risk register.
  • Stakeholder-Driven Risk Acceptance: Early engagement empowers stakeholders with responsibility and knowledge, leading to more well-defined and reliable risk acceptance while reducing the need for continuous oversight.
  • Cybersecurity Steps Out of the Middle: SERA removes cybersecurity from the role of approving or rejecting actions, shifting that responsibility to the business stakeholders who are directly impacted. This allows cybersecurity to focus on advising rather than gatekeeping.
  • Early Action on Risks: Early identification of risks leads to faster response times, often allowing remediation to begin before the final report is delivered. This accelerates the process and helps secure timely approval from senior leadership.

Stakeholder-Enhanced Risk Assessments (SERA) shift risk management from technical details to business relevance, fostering collaboration and uncovering practical, cost-effective solutions. By engaging stakeholders early, SERA strengthens support from decision-makers and simplifies the path to mitigation.

How will deeper stakeholder involvement transform your approach to core cybersecurity challenges and elevate your risk management strategy?

Sunday, September 11, 2016

Mitigating Catastrophic Risks of Smart Meters

I posed a question on quora.com about whether it was feasible to design smart meters so that the power couldn’t be entirely shut off by remote control. In particular, I wanted to know if some small portion of the power could be on a separate mechanism that would require a physical visit by the utility company to shut it off.

The question received an articulate, passionate, and well-reasoned response that explained why such a design should not be pursued. The author made good points about motor burnouts, damaged equipment, and so on - it was apparent that he was familiar with the topic of electricity. He also implied that my design suggestion was downright dangerous, and perhaps indicated a lack of thoughtfulness on my part for even considering such a design.

I explained that I was concerned that a cyber attacker could remotely shut off power to tens or hundreds of thousands of electrical utility customers and then brick (render inoperable) the remote control unit. This of course would require the utility company to physically visit every customer to restore power. This would especially affect those who need power for medical devices.

The passionate respondent replied:
"That’s a good point, I never thought of that."
This is the nature of risk in complex systems: risks tend to emerge from unexpected places when evaluated solely by experts in specific domains. Experts bring specific training, specialized ethics, implied goals, and certain types of experience. This narrow focus serves daily work well, but it can undermine risk assessments. For the passionate respondent, the electrical system and equipment were clearly front and center, perhaps summarized by the mantra "don't damage equipment!" In the broader context, however, this important principle was too narrow to address the full range of needs and stakeholder expectations. Ultimately, it came down to a framing issue.

How should risk assessments be designed so that they avoid the issues of being too narrowly focused? Here are a few ideas:

  1. Define the target of the risk assessment in terms that have relevancy to stakeholders.
  2. Invite stakeholders who can, as directly as possible, represent the interests and perspectives of the diverse set of groups who derive benefits (1) from the target.
  3. Solicit these stakeholders to identify their key benefits or needs relative to the target.
  4. Focus the risk assessment on risks to benefits rather than on technical risks. 
  5. Assume that intellectually "ideal" technical solutions are probably flawed from various non-technical perspectives, and elicit those flaws from all participants (especially the non-technical and non-experts).
  6. Design the process and conduct the assessment so that unpopular or unorthodox ideas are not unduly suppressed or swamped.
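
Step 4 above, focusing on risks to benefits rather than technical risks, can be sketched in code. The following is a minimal, hypothetical illustration (the benefits, risks, and structure are invented for this example, not a prescribed tool) of a risk register organized around stakeholder benefits instead of technical components:

```python
# Hypothetical sketch: a register keyed by stakeholder benefits (steps 2-3),
# with risks recorded against those benefits rather than against equipment.
from collections import defaultdict

# Stakeholder-identified benefits of the target (e.g., smart meter deployment)
benefits = ["reliable power for medical devices", "accurate billing"]

# Risks are stated as threats to a benefit, not to a component or protocol
risk_register = defaultdict(list)
risk_register["reliable power for medical devices"].append(
    "attacker remotely disconnects service and bricks the control unit"
)
risk_register["accurate billing"].append(
    "tampered meter firmware under-reports usage"
)

for benefit in benefits:
    for risk in risk_register[benefit]:
        print(f"Risk to '{benefit}': {risk}")
```

Framing every entry as "risk to a benefit" keeps non-technical stakeholders able to challenge and extend the list, which is the point of steps 5 and 6.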

To risk assessors everywhere: happy hunting!

(1) Benefits in this context is meant to be all-inclusive, but could be substituted with value, dependencies, goal achievement, meeting objectives, accomplishing mission, and so on.  

Tuesday, November 18, 2014

Three Marks of Risk Assessment

Risk assessment has been addressed extensively in available security and risk assessment literature and in public and semi-public standards1. Despite this coverage, and perhaps because of the scope and complexity of it, the essence of risk assessment is often lost. In a recent article, I spoke of widespread and significant misunderstanding about risk assessment, a misunderstanding which leads many to think that they are performing risk assessment when they are doing something more akin to a compliance assessment or a controls assessment.

I will provide three litmus tests to differentiate a risk assessment from other types of assessments, to help you determine if your organization is on the right track. Keep in mind that this is not a how-to and it is not comprehensive. Simply stated, if you are not doing all of the following, then you are not doing risk assessment. It's likely that you need to re-evaluate your entire approach, for which I provide further suggestions.

The three litmus tests:


1. Your organization has answered the question, “What business assets or capabilities are we trying to protect?” The discussion about assets and capabilities implies the closely related question, “Why do we exist?” It should be obvious, but the reason for existence must be something other than “to be compliant.” While non-compliance can pose financial risks, or in extreme cases an existential risk, it is not a reason to exist, even if it bears on the ability to continue existing. In other words, non-compliance is but one of many possible risks to the organization. (Here’s a hint for those in healthcare attempting a HIPAA Security Risk Assessment: you’ve been told your asset, and it’s electronic Protected Health Information.)

2. Your organization has identified threats to its assets and possibly threats to the organization’s mission at a broader level. Each item on your threat list has implicit or explicit threat actors such as employees, hacktivists, mother nature, competitors, and so on. (If the only or primary threat actors are regulators, that’s a compliance assessment.) These threats are documented, and they are used throughout the risk assessment process. If the question you are asking is of the form, “Are we fulfilling this particular regulatory requirement for X?”, then you are not actually doing a risk assessment, you’re doing a compliance assessment. The questions should be of the form, “How could these threats act on our assets to cause harm?”.

3. Your organization’s discussion is focused mostly on possible future negative events, and current facts contribute information to determine aspects of risk such as probability and impact. Risks are stated in terms of impact to organizational mission, objectives, operations, or value. (If your "risks" are each equivalent to, "We are not compliant”, then that is a compliance assessment.2) Risks may manifest as lost revenue, diminished reputation, direct customer impact, financial impact, possible regulatory action, inability to conduct business, and so on.
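
A risk that passes the third test can be captured in a form like the following minimal, hypothetical register entry. The fields, scales, and the likelihood-times-impact score are illustrative assumptions, not a prescribed method:

```python
# Hypothetical risk-register entry: a possible future negative event stated
# in business terms, with current facts informing likelihood and impact.
from dataclasses import dataclass

@dataclass
class Risk:
    event: str          # possible future negative event
    threat_actor: str   # who or what could cause it (not a regulator alone)
    asset: str          # business asset or capability at stake
    likelihood: float   # 0.0-1.0, informed by current facts
    impact: int         # 1-5, stated against mission/operations/value

    def score(self) -> float:
        # One common convention: exposure = likelihood x impact
        return self.likelihood * self.impact

r = Risk(
    event="Ransomware encrypts scheduling systems, halting patient intake",
    threat_actor="criminal group",
    asset="ability to deliver patient care",
    likelihood=0.3,
    impact=5,
)
print(round(r.score(), 2))  # prints 1.5
```

Note what makes this a risk rather than a compliance finding: the event is future and uncertain, the actor is not a regulator, and the impact is expressed against the mission.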

To reiterate, the above three items are not meant to be comprehensive and are not all that is required for a risk assessment. If you are doing the above, your risk assessment process may still not be as complete or as mature as your organization needs. However, if you are not doing the above three things, then you are not on the right track and need to re-evaluate your approach.

Next Steps


At this point you may be asking, "What do I do if I missed one or more of these?” Here are recommended next steps:

1. Do research on available and industry-appropriate risk assessment methodologies and approaches. If you have access to ISO 31010, this standard contains a comprehensive list and comparative analysis of various risk analysis methods. Having access to this list is generally not necessary to get started and is more important for maturing risk assessment processes, but is still a useful reference. Also, look to industry-specific standards, high-performing peers, and qualified and experienced consultants to provide guidance and assistance.

2. Share your concerns and a proposed risk assessment approach with senior management. Provide plausible business rationale for your concerns and business-based justification for your proposed approach. This is another area where an experienced information security risk consultant can help, particularly one familiar with your industry. Such an advisor can bring to light specific business requirements, risks and benefits related to conducting a proper risk assessment.

3. Select your people, methods, and tools - in that order. Risk assessment benefits from multiple business and technical perspectives. Include various IT specialties, lines of business, and specialists, depending on the particular assessment.

4. Conduct your risk assessment(s).

5. Track, manage, and report risks on an ongoing basis. Risks should be documented and explained in non-technical, relevant terms in such a way that organizational leaders can understand them. This step is technically risk management. I include it because risk assessment has little purpose and negligible impact without some level of risk management.


1 Examples include:

  • NIST Special Publication series: SP 800-30, Guide for Conducting Risk Assessments;
  • ISO/IEC 31010:2009, Risk management -- Risk assessment techniques;
  • ISO/IEC 27005:2011, Information technology -- Security techniques -- Information security risk management;
  • CERT OCTAVE (Operationally Critical Threat, Asset, and Vulnerability Evaluation) Allegro;
  • PCI Data Security Standard (PCI DSS) Information Supplement: PCI DSS Risk Assessment Guidelines

2 Compliance issues are not excluded because non-compliance impacts may include loss of license, fines, additional oversight; all of which have operational or financial implications. These, in turn, have consequences on the mission, business objectives, operations, and value of an organization.

Tuesday, October 7, 2014

Risk assessment vs risk analysis, and why it doesn't matter

Over the last decade I have witnessed heated debates about the terms risk assessment and risk analysis. In most cases, the outcome of these debates is not a richer understanding of the risk domain but rather a fruitless exercise in politics and getting (or not getting) along. This got me to thinking about the circumstances under which these and other risk definitions are important and those under which they are not.

On Audience

We could speak the truest sentence ever spoken, using exactly the correct words, and it would still be futile if our listener were a non-native speaker visiting a foreign land. This may sound absurd when we picture foreign tourists, but I have seen security and risk people do the equivalent with non-practitioners often enough to cringe. Shouting 'risk analysis' over and over is no more effective than shouting 'go 1.2 miles west' at a lost tourist over and over. Hanging your communication on your audience's mastery of your specialized vocabulary is a sure-fire way for that audience to get lost.

I propose that when dealing with audiences who are not risk practitioners, you should do as you would with a non-native speaker: don't expect them to know the nuances of a particular word or phrase and hang everything you're saying on that understanding. Instead, use a greater variety of words, give examples, draw pictures, and gesture. Keep doing that until it's apparent that everyone in the room gets it and wants you to move on to the discussion and decisions at hand.

Of course, when communicating with risk peers in your sub-specialty, it is acceptable and necessary to use the terms and concepts appropriate to that sub-specialty.

On Authority

After I drafted this article, I happened to pick up the July 2014 issue of the Risk Analysis Journal. It contains the special series, “Foundational Issues In Risk Analysis”. The first paragraph of the first article, “Foundational Issues in Risk Assessment and Risk Management”, states, in part, “Lack of consensus on even basic terminology and principles, lack of proper scientific support, and justification of many definitions and perspectives lead to an unacceptable situation for operatively managing risk with confidence and success.” This statement comes from authors who are researchers in the field, one of whom is an area editor for the Risk Analysis Journal - in short, knowledgeable people. With this being the state of a field that had its beginnings in the 1980s, how likely is it that your organization will develop the perfect definition for these terms, and how important is that? Not very, on either count.

What I have seen work reasonably well is to settle on working terms collectively, under the leadership of the highest-level risk management function in your organization. Yes, that means the terms and principles they propose, and that are ultimately adopted, will not account for the nuances of your specialized risk area; but the alternative is that parts of the organization won't communicate effectively with one another. That is worse, overall, than being stymied in your effort to translate the details of your specialty into business concerns.

Summary

Pick basic and simple definitions and move forward. In a few years, your organization just might iterate enough to arrive at rigorous and thorough definitions and, more importantly, to achieve an organization-wide understanding. Who knows? The field could settle on formal definitions for basic terms that work across organizations and sub-specialties at about the same time.

Wednesday, August 6, 2014

Top 5 meta-findings from 12 years of security risk assessments in healthcare


My background: I have performed over 150 security risk assessments over the last 12 years, for organizations large and small, and for scopes as broad as an entire enterprise to as narrow as a single application, system, or vendor. Some of these assessments occurred within a day, some took months.

I’m writing this post in the hopes that:
* it can serve as a useful starting point for dialog within your organization about these issues
* enough people will read this that the prevalence of these findings will decrease over time
* my work performing risk assessment becomes more interesting and challenging over time
* I can remove all these meta-findings from my list 15 years from now

Risk assessments can contain all manner of findings, from the high-level policy issues to detailed technical issues. Corrections of the meta-findings that follow would significantly improve the effective management of all information security risks:

1. The “risk assessments” performed to date are actually compliance or control assessments. The organization (1) hasn’t complied with the HIPAA Security Rule and Meaningful Use requirements to perform risk assessment, and (2) has skipped the step that forms the fundamental basis for planning, thereby missing opportunities to efficiently use the organization's resources to appropriately and effectively protect patient and/or member data.

2. About 1/3 of the activities that are either universally important to effective security programs or needed to address the organization’s unique environment were overlooked because the consideration started and ended with an interpretation of the HIPAA Security Rule, limited to the more directly worded CFR § 164.308 through 164.312. Specifically, the HIPAA Security Rule was misconstrued and misinterpreted because the entire preamble and CFR § 164.306 (a)(1) through (3) were skipped in the rush to quickly “be compliant.” 1

3. IT, Information Security, Facilities, Materials/Procurement, HR, Audit, and Compliance have distinct perspectives about information security, and these perspectives have not been harmonized, formalized, and agreed to. The organization as a whole lacks a uniform and coordinated approach and is missing a well-considered set of roles and responsibilities.

4. A large portion of the technical issues the organization is experiencing results from processes or architectures that do not exist, are poorly designed or implemented, or are supported by understaffed functions. Technical tools intended to support security are under-utilized or improperly utilized. Much time is spent chasing the specific resulting technical issues. The focus should instead be on identifying and correcting the organizational systems, business processes, personal incentives, and misaligned accountabilities that create and perpetuate the technical issues.

5. Employed physicians, nurses and staff are not supporting security activities and policies because no one has explained in the language of their professions how their personal and individual missions can be put in jeopardy. Leaders, physicians with privileges, and sponsoring organizations have decision-making influence on business goals and risks. In the process, the information security risks are under-factored because they are explained in technical terms rather than in business terms.

In future posts, I will tackle some of these issues and provide recommendations for addressing them in your organization.

1 For those not familiar, CFR § 164.306 establishes "risks to ePHI" (not compliance) as the basis for all decision making related to security under the HIPAA Security Rule.

Wednesday, April 23, 2014

The Difficulties of Inherent Risk

The concept of inherent risk is occasionally mentioned by information security and information risk practitioners. Inherent risk is difficult to conceptualize, and an even more difficult idea to apply in practice.

The typical equation is: inherent risk + controls = residual risk.
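
In practice, this equation is often reduced to simple ordinal arithmetic. A minimal, hypothetical sketch follows (the scales and the subtraction model are assumptions for illustration, not an endorsement; the difficulties below apply to it in full):

```python
# Hypothetical ordinal scoring often used with the inherent-risk equation.
# All scales and values here are illustrative assumptions, not a standard.

def residual_risk(inherent_risk: int, control_effectiveness: int) -> int:
    """Naive model: controls subtract from inherent risk (floored at zero)."""
    return max(inherent_risk - control_effectiveness, 0)

# Example: an "inherent" score of 9 on a 1-10 scale, with controls rated 6
print(residual_risk(9, 6))  # prints 3
```

The arithmetic is trivial; the trouble, as the example below shows, is that the input `inherent_risk` cannot be meaningfully determined in the first place.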

It is easy to mask poor models when they are applied in theoretical fields such as information security and information risk assessment. The problem with these approaches can be illustrated by attempting to apply them to examples that have a physical reality. Here is one:

A city-dweller is considering going to a grocery store 10 blocks away, and whether to get there by walking, bicycling, driving, or public transportation. As he considers his options, he decides to determine the inherent risk of staying home and the inherent risk of each of the transportation options. He considers each choice as if conducted with eyes closed and ears plugged, and with an ignorance of the neighborhood, vehicular traffic laws, and physics. He will pretend to have no knowledge of the local culture around pedestrians or cyclists and pretend not to feel curbs as he stumbles over them. He will imagine that no one will adjust their behavior upon encountering him and that no one will act to protect him. He will set aside the facts that most vehicles in cities have low profiles, travel at low speeds, and rarely cause catastrophic harm when impacting a person; that vehicles are likely present only on streets and not sidewalks; that building facades won't come loose and fracture his skull; that he won't get struck by lightning simply by virtue of being outside; and so on. These considerations might seem ridiculous, but all of them, and a nearly infinite number more, must be eliminated to arrive at inherent risk. If even one is left in, it's no longer inherent risk.

On top of that conundrum, the process requires that “controls” are added back into the equation. So, once “inherent risk” is determined, the next step is to add back traffic laws, citizen good will, building codes, a possible use of seat belts or helmets, pedestrian crossings, a general sense that thunder implies rain and a likely seeking of shelter, general awareness and competence, and so on.

How does one even begin to calculate "inherent risk"? Is this how people think about risk? Clearly not. Is this type of calculation even feasible? Not really. (We haven't even considered benefits, which are addressed in this blog in the post on risk matrices.) The concept of inherent risk has been conspicuously absent from security and risk standards and methods, and most experienced practitioners long ago dropped it from their approaches. The attempt to address inherent risk confuses and complicates the fields of risk assessment and risk management while adding little value. It is reasonable to expect that inherent risk should no longer be promoted or used. Yet, within the last year, I have become aware of initiatives in risk assessment and modeling that include, and depend upon, the definition and determination of inherent risk. The stories of these initiatives were painful to hear. It was even more painful to learn that the idea was being promoted by a group believed to be expert in the field of information security management programs.

To be clear: unless inherent risk has been rigorously determined to be the best approach for a given situation, it should not be used by information security and information risk practitioners. Practitioners who insist on using the construct of inherent risk will have a Sisyphean task ahead of them.