Thursday, November 16, 2017

The Trap of Risk Assessment Tools

Humans understand and respond to narrative innately. Impactful events are explained as stories, successful calls to action are delivered as descriptions of a desired or undesired future, and people make decisions every day through explanations.

We do this because narratives are a natural form for communicating information and insight. Narratives are therefore also a natural form for risk communication. With regard to risk, narratives help listeners visualize and develop internalized models of risk, which in turn represent the truth in a way that numbers alone won't (and can't) for most individuals and most situations.

Much of the current cybersecurity and enterprise risk management world is premised on the idea (or ideal) that risks can be meaningfully summarized in ordinal form, such as “very high likelihood” or “moderate impact.” These English ordinals are then often converted to numbers. Sometimes math is performed, and a magic risk number is derived. Some models even use dollars or dollar ranges in their outputs. However, focusing on getting to the “right number” - and reporting that is focused on numbers - denies the reader the benefit of context. It makes it hard for decision makers to object to the assumptions embodied in the inputs and the calculations. Numbers or dollars alone convey objectivity and authority, even when neither is warranted. The work of "getting to the numbers" behind closed doors only exacerbates this issue: in the best case it systematizes the subtle biases of the assessors, and in the worst case it distorts the model to fit preconceived notions about organizational priorities, or to conform to personal risk worldviews. Those using numbers alone to represent risk should proceed with caution.
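To make the critique concrete, here is a minimal, hypothetical sketch of the ordinal-to-number pattern described above. The labels, ranks, and multiplication rule are illustrative, not taken from any particular tool or standard:

```python
# A hypothetical ordinal "risk matrix" calculation of the kind criticized above.
LIKELIHOOD = {"low": 1, "moderate": 2, "high": 3, "very high": 4}
IMPACT = {"low": 1, "moderate": 2, "high": 3, "very high": 4}

def risk_score(likelihood: str, impact: str) -> int:
    """Multiply ordinal ranks to produce a single 'magic risk number'."""
    return LIKELIHOOD[likelihood] * IMPACT[impact]

# Two very different risks collapse to the same number, and every assumption
# behind the labels disappears from the report.
ransomware = risk_score("moderate", "very high")    # 2 * 4 = 8
laptop_theft = risk_score("very high", "moderate")  # 4 * 2 = 8
print(ransomware, laptop_theft)  # 8 8
```

The two scores are identical, yet the situations, assumptions, and appropriate responses are not - which is exactly the context a narrative preserves and a bare number discards.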

In contrast, risk assessment communication that focuses on narrative and which embraces dialog allows for discoveries and insights by the decision makers, provides an opportunity to question assumptions, and enables sharing and alignment of perspectives. Risk decisions, everyday and tough risk decisions alike, are best borne of discussion, and will be for the foreseeable future.

If you are responsible for risk assessment at your organization, don’t fall prey to a tool’s promise, or to the unquestioned illusion of objectivity and certainty that comes from numbers. If you are performing risk assessment where the focus is populating fields in a spreadsheet or application, no matter how advanced it is, you run the risk that the one thing everyone needs to consider gets lost through reductionism. Do the numbers if required, but wrap them in an informative discussion, and design and facilitate that discussion to convey risks in business terms that support business decision making.

Friday, April 14, 2017

You Can Keep Your Compliance, I Have a Mission

Doctors practice very real, very tangible risk management every day when caring for patients. The decisions they make affect the wellbeing and the very lives of their patients. The trade-offs between various treatment options, judgements about future patient behavior based on historical behavior, the upsides and downsides of surgical and pharmacological treatments vs. the likelihood of behavior changes, and a long list of other considerations are based on risk assessment and are themselves part of risk assessment.

Patient care is a complicated and nuanced field, and risk management is core to managing the complexity. As such, doctors have a seasoned understanding of risk management that gives them a unique perspective on information security and compliance. What they do is telling. They reframe compliance and security with a question derived from their mission: What is the impact to patient care?

While it should be fairly clear to most that patient care is more important than compliance or information security, what is less often clear to practitioners is that the only framing in which to consider either compliance or information security is that of impact to patient care (so long as patient care is defined broadly enough). This highlights the need for something to bridge from compliance and information security to patient care; and that bridge is risk management.

Most doctors understand risk management innately, at least as well as, and perhaps even more intimately than, those in the compliance, technology, and security fields. They haven’t been steeped in the myth of the “choice between security and functionality,” and they are willing to have substantive conversations, often leading to “let’s do both” solutions. For all organizations, healthcare or otherwise, it comes down to mission and risks to the mission. If you are having a conversation where your ultimate goal is compliance or “great” security, you’re doing yourself and your organization a disservice.

Instead, ask “what is the impact to our mission?”

Sunday, September 11, 2016

Mitigating Catastrophic Risks of Smart Meters

I posed a question on quora.com about whether it was feasible to design smart meters so that the power couldn’t be entirely shut off by remote control. In particular, I wanted to know if some small portion of the power could be on a separate mechanism that would require a physical visit by the utility company to shut it off.

The question received an articulate, passionate, and well-reasoned response that explained why such a design should not be pursued. The author made good points about motor burnouts, damaged equipment, and so on - it was apparent that he was familiar with the topic of electricity. He also implied that my design suggestion was downright dangerous, and perhaps indicated a lack of thoughtfulness on my part for even considering such a design.

I explained that I was concerned that a cyber attacker could remotely shut off power to tens or hundreds of thousands of electrical utility customers and then brick (render inoperable) the remote control unit. This of course would require the utility company to physically visit every customer to restore power. This would especially affect those who need power for medical devices.

The passionate respondent replied:
"That’s a good point, I never thought of that."
This is the nature of risk in complex systems: risks have a tendency to emerge from unexpected places when evaluated solely by experts in specific domains. Experts tend to bring specific training, specialized ethics, implied goals, and certain types of experiences. This narrow focus is good for daily work, but can negatively affect risk assessments. In the case of the passionate respondent, it was clear that the electrical system and equipment were front and center, perhaps summarized by a mantra of "don't damage equipment!" However, in the broader context, this important principle/ethic was too narrow to address broad needs and stakeholder expectations. In the end, it came down to a framing issue.

How should risk assessments be designed so that they avoid the issues of being too narrowly focused? Here are a few ideas:

  1. Define the target of the risk assessment in terms that have relevancy to stakeholders.
  2. Invite stakeholders who can, as directly as possible, represent the interests and perspectives of the diverse set of groups who derive benefits(1) from the target.
  3. Solicit these stakeholders to identify their key benefits or needs relative to the target.
  4. Focus the risk assessment on risks to benefits rather than on technical risks. 
  5. Assume that intellectually "ideal" technical solutions are probably flawed from various non-technical perspectives, and elicit those flaws from all participants (especially the non-technical and non-experts).
  6. Design the process and conduct the assessment so that unpopular or unorthodox ideas are not unduly suppressed or swamped.
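The six ideas above can be sketched as a minimal, benefit-focused risk record. This is a hypothetical illustration, not a prescribed schema; the field names and the smart meter example values are my own, drawn from the scenario earlier in the post:

```python
from dataclasses import dataclass

@dataclass
class RiskItem:
    target: str           # defined in stakeholder-relevant terms (idea 1)
    stakeholder: str      # who represents the affected group (idea 2)
    benefit: str          # the key benefit or need at stake (idea 3)
    risk_to_benefit: str  # the risk framed against the benefit (idea 4)

# The smart meter scenario, framed as a risk to a benefit rather than
# as a purely technical risk to equipment.
meter_risk = RiskItem(
    target="residential smart meters",
    stakeholder="customers dependent on powered medical devices",
    benefit="continuous electrical service",
    risk_to_benefit="mass remote shutoff followed by bricked control units",
)
print(meter_risk.risk_to_benefit)
```

Notice that "don't damage equipment" never appears as the headline; the equipment concern becomes one input among many, weighed against the benefit the stakeholder actually depends on.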

To risk assessors everywhere: happy hunting!

(1) Benefits in this context is meant to be all-inclusive, but could be substituted with value, dependencies, goal achievement, meeting objectives, accomplishing mission, and so on.  

Thursday, May 12, 2016

What is a "Leading" Compliance Function?



A friend asked me not too long ago: “How do you measure the maturity of a compliance function?”

As I think of an organization "leading on compliance," ironically, I think of the compliance function as following not leading, and being concerned more with risk than compliance.

Let me explain. Compliance covers some things, and organizational need covers some things. In the most mature organizations, I think that 90-95% of compliance needs are covered if the organizational needs are being met with efficiency and effectiveness. In contrast, if compliance needs are covered, that can bear little or no relation to whether business needs are met; the degree to which this occurs is highly dependent upon industry, specific regulation, and company operating behaviors. This is not meant to diminish the importance of compliance, or security, or HR, or any other function, but these functions are all small rocks. Business is the big rocks. For everything to fit into the jar, the big rocks have to go in first. (http://www.appleseeds.org/Big-Rocks_Covey.htm) So, frustratingly, to achieve the highest maturity of compliance, you need the organization to be operating at the highest levels of maturity as well. Otherwise, putting compliance before business could cause the business to decrease the size of a business rock, or simply not attempt to get one more business rock into the jar.

So the question I like to ask is: how does the compliance function operate in a way that encourages the organization to recognize and act on what it needs to do from a larger organizational perspective, *while* allowing compliance to focus on the specific things it needs to do to fill in with compliance between the bigger business rocks? Note: If you can do this, not only will you be meeting all your obligations, but people will also like you a lot more.

Culture and processes and collaboration are core, for both compliance and risk management. GRC for me is about efficiency, and some enablement. You can’t get to the highest levels of maturity without GRC, but if you lead with GRC, it’s unlikely you’ll get even to the mid-levels of maturity without going back, doing the organizational steps, and then re-working GRC - likely taking more time, incurring greater cost, and making people more unhappy.

With regard to risk management… there are definitely differences from compliance. Here’s the crux: compliance serves those outside of the organization, usually with a pretty narrow view of what they want from your organization. Risk management serves your organization and (ideally) does so with as complete a view as is reasonable. Specifically, risk management facilitates the allocation of resources toward low-risk, high-profit activities and away from high-risk, low-profit activities (there are definite exceptions, like bold business moves that are long bets but could be highly profitable). Compliance can be viewed as avoiding fines, business interruption due to court orders, withdrawal of licensure or certification, and a few other very specific things; compliance is one type of risk among many.
 
Here’s how I think about it: Formal risk management is decision support, and should be a core element of decision making organization wide. In reality, risk management already is at the core of every decision, but it’s not usually formal. Compliance is a double check. 

With regard to compliance, here are some things that I would consider in determining maturity:
  • Is compliance viewed as a trigger for the business to think about risk (of which compliance is a subset)?
  • Does compliance have a view into business processes and understand the business context?
  • To what degree does compliance talk about risks of non-compliance (vs. taking a there-is-no-choice approach)?
  • Does compliance collaborate on solutions driven by operational needs, with compliance merged in? Or does compliance dictate solutions where compliance trumps all, regardless of operational perspectives, priorities, feasibility, or cost? Is compliance as flexible as it reasonably can be?
  • Does compliance seek out the larger problem? For example, does compliance require that the one instance they find of an issue be fixed (this is usually an audit perspective as well), or do they seek root causes and process improvement? Let’s face it, no one wants to spend 100 hours fixing what they know is 60% of the problem when they could spend 130 hours fixing 98% of the problem (you know: actually solve problems).
  • How well does compliance surface issues to the people who are accountable for certain types of compliance (e.g. HIPAA security officer, HIPAA compliance officer, chief legal counsel)? (This is also where GRC helps.)
  • How strategically aligned is Compliance to the other risk functions such as Internal Audit, Risk Management, Information Security, IT Audit/Compliance, and possibly others? How tactically aligned are they? Is annual planning done together (e.g. divide and conquer, no overlap)? Are major findings shared?
  • Does compliance adopt business language and align where possible to business measures, or operate as an island unto itself (again, a little of that there-is-no-choice, compliance-is-its-own-thing attitude)?


Although Norman Marks is fairly widely respected, I don’t always agree with him. However, this post on GRC is worth a look: https://iaonline.theiia.org/blogs/marks/2015/trends-in-grc - especially the links, specifically the items from OCEG like this one: http://www.oceg.org/lesson/principled-performance-and-grc/ (I watched it for the first time tonight, after writing all of the above, and it echoes many of the things I say here and adds to them). The blog he links to, yogrc, also looks like a pretty good take specifically on GRC: http://yogrc.typepad.com/.

I look forward to your comments and feedback.






Thursday, December 11, 2014

Data Stewards as Risk Managers and Champions of Information Security

In early 2007, as the information security officer at a health insurance company, I began to consider how to build better connections between information security goals and business goals. I had observed for some time that the closest that the business units generally got to the question of data security was, "Who has access to this application or screen or function within an application?" Most concerns were about data confidentiality, and using screen-level access reviews was a convoluted and confusing proxy for directly addressing access to and uses of data.

During this same time period I developed a few notions. First, that process controls serve primarily integrity-oriented goals that are important for narrowing activities, such as for financial audits culminating in a single audited financial statement. Data controls inherently serve confidentiality-oriented goals that prevent the uncontrolled spread of data, such as preventing information leaks about mergers and acquisitions in the pre-merger period. Data controls would be a poor, work-intensive way to determine if the financial statement was accurate. Process controls don’t really apply when a board member accidentally sends merger due-diligence emails to the mailing list for a different board. Keeping in mind that most applications at the time were designed with process controls in mind ("Can this person initiate or approve certain transactions?”), we didn’t have the right approach for the concern. The question of data security got lost in process control thinking - but because so many were “brought up” on process controls, the disconnect wasn’t obvious.

Many organizations experience (i.e. they design) one or more of the following situations:

1. Supervisors are responsible for getting work done, and are simultaneously responsible for defining and authorizing access for their employees. If production is the primary goal and incentive for the supervisor, it is safer for the supervisor to err toward granting too much access rather than too little. This situation creates a perverse incentive.

2. People in key business roles may have a general sense that they have a leading organizational role in regards to certain types of data. However, that may only go as far as involvement in a big data initiative, or other large but targeted projects. What is commonly missed is the general accountability, including for data security and risk management. Even if the security role is explicit, there is often little clarity as to how applications, requests for access, business processes, data exchanges, and system configurations affect the accessibility, security and use of the data. This is what I’d call opaque.

3. IT is expected to protect data, even under an access authorization process that places supervisors in the role of authority. IT is expected to intuitively know when to push back on a request. It is also too often the case in projects that security design considerations for technical solution components happen too far downstream, leaving IT either “accepting” the risk or seeming like a roadblock. A good word for IT's situation in these examples is untenable.

4. Level-only classification systems (i.e. those of the sort that use only classifications such as Secret, Confidential, Internal Use Only, Public) fail to establish accountability-aligned ways to classify and declassify, and provide no clear path to making either consistent policy decisions or making nuanced decisions about data. Different people make different decisions about the same data, due in part to individual risk temperaments, a variety of personal experiences, profession-driven leanings, and role-specific incentives. This is a broken model.

A Starting Principle

"The person who benefits from accepting a risk on behalf of the organization should also be individually accountable for the consequences."

The importance of the prior statement seems almost too obvious, but many organizations fail to consistently make a connection between risk-taking benefits and risk-taking consequence management. Larger organizations, often by design, create separate processes for accepting risk versus managing risk. Those two intricately tied decisions often happen at different times, in different venues and contexts. This is less than ideal.

An Accountability Model

I developed a model in response to the organizational challenges mentioned above, building on my initial notions, and using the principle of aligned accountability. I will explain a few of the key roles and then how they work together.

Data stewards are responsible for establishing organization-wide policies for a specific type of data. They are also responsible for considering policy exceptions and making ad-hoc decisions when policy is unclear or a situation requires analysis. Depending on size and industry, an organization can have anywhere from a few to 30 data stewards. Generally the data steward is a leader who is close to the organization’s data intake point (e.g. member/customer operations, business partner relations), or presides over the production of the data (e.g. finance, strategy). Among the classically recognizable data stewards are the head of HR (employee demographic and performance information), the head of payroll (salary, benefit, and garnishment information), the CFO (financial performance prior to reporting), and the head of research and development.

Data gatekeepers are aligned to external stakeholders, may handle a variety of data, and are responsible for following and enforcing the data access and use rules created by the data stewards. Generally, every type of audience or external recipient of data has a related data gatekeeping function. Long-established examples include the Legal department acting as the gatekeeper to law enforcement and the courts; Compliance acting as the gatekeeper to regulatory bodies; and Corporate Communication acting as the gatekeeper to the media and the general public. It is a familiar approach, but usually only rigorously applied in specific contexts. It is quite possible, and also useful, to extend the concept to other areas. Almost every function with external touch-points acts in some capacity as a gatekeeper, but in many organizations they are not able to perform that function effectively because of unclear responsibilities and a lack of guidance.

Application sponsors are, as the name implies, those who request, convince the organization to pay for, and provide ongoing demand for a technical capability that supports a business need. Essentially, their business needs are what drive application and system implementations. In their roles, they are accountable to the data stewards for developing requirements that support data policies, and deploying configurations that enforce those policies. They are also accountable to process owners for things such as uptime and process controls such as segregation of duties.

Process owners are responsible for end-to-end processes. General examples include order to ship and procure to pay. Industry-specific examples exist as well, such as admit to discharge and enroll to dis-enroll in healthcare. Process owners establish business requirements for up-time, integrity of transactions, integrity of reporting, authorization of specific transactions, and segregation of duties.

Data custodians are those that hold data on behalf of the stewards, but have no policy-level say over the direct management of the data. The prime internal example of a custodian is the Information Technology department for electronic data, and facilities for paper record storage. A more subtle example is Finance acting as a custodian of payroll data on behalf of HR. Externally, cloud service providers are custodians.

Each of the roles represents unique business goals, and it is expected that they also have different perspectives and intrinsic motivations related to certain types of decisions. These responsibilities are designed so that a fundamental tension exists between the data stewards, process owners, and application sponsors. Note that all three are business roles. Also note that Compliance, IT, and Legal have been removed from the middle of the decision making process.
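The routing logic of the model can be sketched in a few lines. This is a hypothetical illustration only; the steward assignments and the two-way policy/exception split are simplified stand-ins for the richer set of roles described above:

```python
# Hypothetical sketch of the accountability model: requests covered by
# existing policy are simply enforced by the custodian (e.g. IT), while
# exceptions and unclear cases escalate to the accountable data steward.
DATA_STEWARDS = {
    "employee_demographics": "Head of HR",
    "payroll": "Head of Payroll",
    "pre_release_financials": "CFO",
}

def route_request(data_type: str, covered_by_policy: bool) -> str:
    """Return who handles a data-access request for the given data type."""
    steward = DATA_STEWARDS.get(data_type)
    if steward is None:
        raise ValueError(f"no steward assigned for {data_type!r}")
    if covered_by_policy:
        return "data custodian enforces existing policy"
    return f"escalate to data steward: {steward}"

print(route_request("payroll", covered_by_policy=False))
```

The design point is visible even in this toy version: the supervisor and IT never appear as decision makers, and any data type without an assigned steward is itself surfaced as a gap rather than silently decided by whoever received the request.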

Benefits of the Model

In short, the primary benefit is that the business units can focus on risk decisions in a more business oriented context. The discussion is no longer focused on security, control gaps, or regulatory issues as a proxy for business concerns; it is actually about those business concerns. This makes decision making more straightforward, better designed for long term planning, and more responsive to changing business and operating realities.

Contrast this model against how risk decisions often happen... indirectly and inefficiently mediated through specialized IT, Audit, or Compliance contexts.

Which would you choose?

Tuesday, November 18, 2014

Three Marks of Risk Assessment

Risk assessment has been addressed extensively in available security and risk assessment literature and in public and semi-public standards1. Despite this coverage, and perhaps because of the scope and complexity of it, the essence of risk assessment is often lost. In a recent article, I spoke of widespread and significant misunderstanding about risk assessment, a misunderstanding which leads many to think that they are performing risk assessment when they are doing something more akin to a compliance assessment or a controls assessment.

I will provide three litmus tests to differentiate a risk assessment from other types of assessments, to help you determine if your organization is on the right track. Keep in mind that this is not a how-to and it is not comprehensive. Simply stated, if you are not doing all of the following, then you are not doing risk assessment. It's likely that you need to re-evaluate your entire approach, for which I provide further suggestions.

The three litmus tests:


1. Your organization has answered the question, "What business assets or capabilities are we trying to protect?”. The discussion about assets and capabilities implies the closely related question, “Why do we exist?”. It should be obvious, but the reason for existence must be something other than “to be compliant.” While it is true that non-compliance could pose financial risks, or in extreme cases an existential risk, it is not a reason to exist, even if related to the ability to continue to exist. In other words, non-compliance is but one of many possible risks to the organization. (Here’s a hint for those in healthcare attempting a HIPAA Security Risk Assessment: you’ve been told your asset, and it’s electronic Protected Health Information.)

2. Your organization has identified threats to its assets and possibly threats to the organization’s mission at a broader level. Each item on your threat list has implicit or explicit threat actors such as employees, hacktivists, mother nature, competitors, and so on. (If the only or primary threat actors are regulators, that’s a compliance assessment.) These threats are documented, and they are used throughout the risk assessment process. If the question you are asking is of the form, “Are we fulfilling this particular regulatory requirement for X?”, then you are not actually doing a risk assessment, you’re doing a compliance assessment. The questions should be of the form, “How could these threats act on our assets to cause harm?”.

3. Your organization’s discussion is focused mostly on possible future negative events, and current facts contribute information to determine aspects of risk such as probability and impact. Risks are stated in terms of impact to organizational mission, objectives, operations, or value. (If your "risks" are each equivalent to, "We are not compliant”, then that is a compliance assessment.2) Risks may manifest as lost revenue, diminished reputation, direct customer impact, financial impact, possible regulatory action, inability to conduct business, and so on.
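The three litmus tests above can be expressed as simple checks on a risk record. This is a deliberately crude, hypothetical sketch; the field names and the "regulators-only means compliance assessment" shortcut are my own simplifications of the tests, not part of any standard:

```python
# A minimal sketch of the three litmus tests applied to a candidate "risk".
def passes_litmus_tests(record: dict) -> bool:
    has_asset = bool(record.get("asset"))                        # test 1
    actor = record.get("threat_actor", "")
    has_real_threat = bool(actor) and actor != "regulators"      # test 2
    impact = record.get("impact", "")
    mission_focused = impact not in ("", "we are not compliant") # test 3
    return has_asset and has_real_threat and mission_focused

risk = {
    "asset": "electronic Protected Health Information",
    "threat_actor": "ransomware operators",
    "impact": "inability to deliver patient care",
}
compliance_finding = {
    "asset": "electronic Protected Health Information",
    "threat_actor": "regulators",
    "impact": "we are not compliant",
}
print(passes_litmus_tests(risk), passes_litmus_tests(compliance_finding))
```

The first record names an asset, a non-regulator threat actor, and a mission-level impact, so it reads as a risk; the second fails two of the three tests and is really a compliance finding wearing a risk costume.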

To reiterate, the above three items are not meant to be comprehensive and are not all that is required for a risk assessment. If you are doing the above, your risk assessment process may still not be as complete or as mature as your organization needs. However, if you are not doing the above three things, then you are not on the right track and need to re-evaluate your approach.

Next Steps


At this point you may be asking, "What do I do if I missed one or more of these?” Here are recommended next steps:

1. Do research on available and industry-appropriate risk assessment methodologies and approaches. If you have access to ISO 31010, this standard contains a comprehensive list and comparative analysis of various risk analysis methods. Having access to this list is generally not necessary to get started and is more important for maturing risk assessment processes, but is still a useful reference. Also, look to industry-specific standards, high-performing peers, and qualified and experienced consultants to provide guidance and assistance.

2. Share your concerns and a proposed risk assessment approach with senior management. Provide plausible business rationale for your concerns and business-based justification for your proposed approach. This is another area where an experienced information security risk consultant can help, particularly one familiar with your industry. Such an advisor can bring to light specific business requirements, risks and benefits related to conducting a proper risk assessment.

3. Select your people, methods, and tools - in that order. Risk assessment benefits from multiple business and technical perspectives. Include various IT specialties, lines of business, and other specialists, depending on the particular assessment.

4. Conduct your risk assessment(s).

5. Track, manage, and report risks on an ongoing basis. Risks should be documented and explained in non-technical, relevant terms in such a way that organizational leaders can understand them. This step is technically risk management. I include it because risk assessment has little purpose and negligible impact without some level of risk management.


1 Examples include:

  • NIST Special Publication series: SP 800-30 Guide for Conducting Risk Assessments;
  • IEC/ISO 31010:2009 Risk management -- Risk assessment techniques;
  • ISO/IEC 27005:2011 Information technology -- Security techniques -- Information security risk management;
  • CERT OCTAVE (Operationally Critical Threat, Asset, and Vulnerability Evaluation) Allegro
  • PCI Data Security Standard (PCI DSS) Information Supplement: PCI DSS Risk Assessment Guidelines

2 Compliance issues are not excluded because non-compliance impacts may include loss of license, fines, additional oversight; all of which have operational or financial implications. These, in turn, have consequences on the mission, business objectives, operations, and value of an organization.

Tuesday, October 7, 2014

Risk assessment vs risk analysis, and why it doesn't matter

Over the last decade I have witnessed heated debates about the terms risk assessment and risk analysis. In most cases, the outcome of these debates is not a richer understanding of the risk domain but rather a fruitless exercise in politics and getting (or not getting) along. This got me to thinking about the circumstances under which these and other risk definitions are important and those under which they are not.

On Audience

We could speak the truest sentence ever spoken, using exactly the correct words, but if our listener were a non-native speaker visiting from a foreign land, it would be futile. This may sound absurd when we think of foreign tourists, but I have seen security and risk people do the equivalent often enough with non-practitioners to cringe. Shouting ‘risk analysis’ over and over is no more effective than shouting ‘go 1.2 miles west’ at a lost tourist over and over. Hanging your communication hat on others' understanding of your specialized vocabulary is a sure-fire way for your audience to get lost.

I propose that when dealing with audiences who are not risk practitioners you should do as you would with a non-native speaker: don’t expect them to know the nuances of a particular word or phrase and base everything you’re saying on that understanding. Instead, use a greater variety of words, use examples, draw pictures, and gesture. Keep doing that until it’s apparent that everyone in the room gets it and wants you to move on to the discussion and decisions at hand.

Of course, when communicating with risk peers in your sub-specialty, it is acceptable and necessary to use the terms and concepts appropriate to that sub-specialty.

On Authority

After I drafted this article, I happened to pick up the July 2014 issue of the Risk Analysis Journal. It contains the special series “Foundational Issues In Risk Analysis”. The first paragraph of the first article, “Foundational Issues in Risk Assessment and Risk Management”, states, in part, “Lack of consensus on even basic terminology and principles, lack of proper scientific support, and justification of many definitions and perspectives lead to an unacceptable situation for operatively managing risk with confidence and success.” This statement comes from authors who are researchers in the field, one of whom is an area editor for the Risk Analysis Journal - in short, knowledgeable people. With this being the case for a field that had its beginnings in the 1980s, how likely and how important is it that your organization develops the perfect definition for these terms? Probably not very.

What I have seen work reasonably well is to settle on working terms collectively, under the leadership of the highest level risk management function in your organization. Yes, that means that the terms and principles they propose and that are ultimately adopted will not account for the nuances of your specialized risk area, but the alternative is that parts of the organization won’t effectively communicate with one another. That is worse, overall, than being stymied in your effort to translate the details of your specialty into business concerns.

Summary

Pick basic and simple definitions and move forward. In a few years, your organization just might iterate enough to arrive at rigorous and thorough definitions and, more importantly, to achieve an organization-wide understanding. Who knows? The field could settle on formal definitions for basic terms that work across organizations and sub-specialties at about the same time.