Thursday, December 11, 2014

Data Stewards as Risk Managers and Champions of Information Security

In early 2007, as the information security officer at a health insurance company, I began to consider how to build better connections between information security goals and business goals. I had observed for some time that the closest the business units generally came to the question of data security was, "Who has access to this application, screen, or function within an application?" Most concerns were about data confidentiality, and screen-level access reviews were a convoluted and confusing proxy for directly addressing access to and uses of data.

During this same period I developed a few notions. First, that process controls primarily serve integrity-oriented goals, which matter for convergent activities such as financial audits culminating in a single audited financial statement. Second, that data controls inherently serve confidentiality-oriented goals, preventing the uncontrolled spread of data, such as information leaks about mergers and acquisitions in the pre-merger period. Data controls would be a poor, work-intensive way to determine whether a financial statement was accurate. Process controls don't really apply when a board member accidentally sends merger due-diligence emails to the mailing list for a different board. Keeping in mind that most applications at the time were designed with process controls in mind ("Can this person initiate or approve certain transactions?"), we didn't have the right approach for the concern. The question of data security got lost in process-control thinking - but because so many people were "brought up" on process controls, the disconnect wasn't obvious.

Many organizations experience (i.e. they design) one or more of the following situations:

1. Supervisors are responsible for getting work done, and are simultaneously responsible for defining and authorizing access for their employees. If production is the supervisor's primary goal and incentive, it is safer for the supervisor to grant too much access than too little. This is a perverse incentive.

2. People in key business roles may have a general sense that they play a leading organizational role with regard to certain types of data. That sense, however, may go only as far as involvement in a big data initiative or other large but targeted projects. What is commonly missed is the general accountability, including for data security and risk management. Even where the security role is explicit, there is often little clarity about how applications, requests for access, business processes, data exchanges, and system configurations affect the accessibility, security, and use of the data. This is what I'd call opaque.

3. IT is expected to protect data, even under an access authorization process that places supervisors in the role of authority, and is expected to intuitively know when to push back on a request. Too often in projects, security design considerations for technical solution components also arrive too far downstream, leaving IT either "accepting" the risk or appearing to be a roadblock. A good word for IT's situation in these examples is untenable.

4. Level-only classification systems (i.e., those that use only classifications such as Secret, Confidential, Internal Use Only, and Public) fail to establish accountability-aligned ways to classify and declassify, and provide no clear path to making either consistent policy decisions or nuanced decisions about data. Different people make different decisions about the same data, due in part to individual risk temperaments, varied personal experiences, profession-driven leanings, and role-specific incentives. This is a broken model.

A Starting Principle

"The person who benefits from accepting a risk on behalf of the organization should also be individually accountable for the consequences."

The importance of the prior statement seems almost too obvious, yet many organizations fail to consistently connect risk-taking benefits with risk-taking consequence management. Larger organizations, often by design, create separate processes for accepting risk and for managing risk. Those two intricately tied decisions often happen at different times, in different venues, and in different contexts. This is less than ideal.

An Accountability Model

I developed a model in response to the organizational challenges mentioned above, building on my initial notions, and using the principle of aligned accountability. I will explain a few of the key roles and then how they work together.

Data stewards are responsible for establishing organization-wide policies for a specific type of data. They are also responsible for considering policy exceptions and making ad-hoc decisions when policy is unclear or a situation requires analysis. Depending on size and industry, an organization can have anywhere from a few to 30 data stewards. Generally the data steward is a leader who is close to the organization's data intake point (e.g., member/customer operations, business partner relations) or presides over the production of the data (e.g., finance, strategy). Among the classically recognizable data stewards are the head of HR (employee demographic and performance information), the head of payroll (salary, benefit, and garnishment information), the CFO (financial performance prior to reporting), and the head of research and development.

Data gatekeepers are aligned to external stakeholders, may handle a variety of data, and are responsible for following and enforcing the data access and use rules created by the data stewards. Generally, every type of audience or external recipient of data has a related data gatekeeping function. Long-established examples include the Legal department acting as the gatekeeper to law enforcement and the courts; Compliance acting as the gatekeeper to regulatory bodies; and Corporate Communications acting as the gatekeeper to the media and the general public. It is a familiar approach, but usually applied rigorously only in specific contexts. It is quite possible, and also useful, to extend the concept to other areas. Almost every function with external touch-points acts in some capacity as a gatekeeper, but in many organizations they cannot perform that function effectively because of unclear responsibilities and a lack of guidance.

Application sponsors are, as the name implies, those who request a technical capability that supports a business need, convince the organization to pay for it, and provide ongoing demand for it. Essentially, their business needs drive application and system implementations. In this role, they are accountable to the data stewards for developing requirements that support data policies and for deploying configurations that enforce those policies. They are also accountable to process owners for matters such as uptime and for process controls such as segregation of duties.

Process owners are responsible for end-to-end processes. General examples include order-to-ship and procure-to-pay. Industry-specific examples exist as well, such as admit-to-discharge and enroll-to-disenroll in healthcare. Process owners establish business requirements for uptime, integrity of transactions, integrity of reporting, authorization of specific transactions, and segregation of duties.

Data custodians hold data on behalf of the stewards but have no policy-level say over the direct management of the data. The prime internal examples are the Information Technology department for electronic data and the Facilities department for paper record storage. A more subtle example is Finance acting as a custodian of payroll data on behalf of HR. Externally, cloud service providers are custodians.

Each of these roles represents unique business goals, and each is expected to bring different perspectives and intrinsic motivations to certain types of decisions. The responsibilities are designed so that a fundamental tension exists between the data stewards, process owners, and application sponsors. Note that all three are business roles. Also note that Compliance, IT, and Legal have been removed from the middle of the decision-making process.
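To make the division of duties concrete, here is a minimal sketch in Python of how a data-release decision might be routed under this model. Everything in it - the names, the policy structure, the audiences - is a hypothetical illustration, not a prescribed implementation.

```python
# A toy sketch of the accountability model: the data steward authors
# policy, and the data gatekeeper enforces it at an external touch-point.
# All names and policies here are hypothetical.
from dataclasses import dataclass

@dataclass
class DataPolicy:
    data_type: str            # e.g., "payroll"
    allowed_audiences: set    # external audiences approved by the steward

@dataclass
class ReleaseRequest:
    data_type: str
    audience: str             # e.g., "regulator", "media"

def gatekeeper_decision(request: ReleaseRequest, policies: dict) -> str:
    """Gatekeepers apply steward policy; they do not author it."""
    policy = policies.get(request.data_type)
    if policy is None:
        # No policy exists: an ad-hoc decision belongs to the steward.
        return "escalate to data steward"
    if request.audience in policy.allowed_audiences:
        return "release"
    return "deny, or request a steward exception"

# The head of payroll (steward) has approved release only to regulators.
policies = {"payroll": DataPolicy("payroll", {"regulator"})}
print(gatekeeper_decision(ReleaseRequest("payroll", "media"), policies))
# -> "deny, or request a steward exception"
```

The point of the sketch is the separation: the policy object belongs to the steward, while the gatekeeper function merely applies it at the boundary.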

Benefits of the Model

In short, the primary benefit is that the business units can focus on risk decisions in a more business-oriented context. The discussion is no longer about security, control gaps, or regulatory issues as a proxy for business concerns; it is actually about those business concerns. This makes decision making more straightforward, better suited to long-term planning, and more responsive to changing business and operating realities.

Contrast this model with how risk decisions often happen... indirectly and inefficiently, mediated through specialized IT, Audit, or Compliance contexts.

Which would you choose?

Tuesday, November 18, 2014

Three Marks of Risk Assessment

Risk assessment has been addressed extensively in the security and risk literature and in public and semi-public standards.1 Despite this coverage, and perhaps because of its scope and complexity, the essence of risk assessment is often lost. In a recent article, I spoke of widespread and significant misunderstanding about risk assessment, a misunderstanding that leads many to think they are performing risk assessment when they are doing something more akin to a compliance assessment or a controls assessment.

I will offer three litmus tests to differentiate a risk assessment from other types of assessments, to help you determine whether your organization is on the right track. Keep in mind that this is not a how-to, and it is not comprehensive. Simply stated, if you are not doing all of the following, then you are not doing risk assessment, and you likely need to re-evaluate your entire approach; I provide suggestions for that below.

The three litmus tests:


1. Your organization has answered the question, "What business assets or capabilities are we trying to protect?" The discussion about assets and capabilities implies the closely related question, "Why do we exist?" It should be obvious, but the reason for existence must be something other than "to be compliant." While it is true that non-compliance could pose financial risks, or in extreme cases an existential risk, it is not a reason to exist, even though it relates to the ability to continue existing. In other words, non-compliance is but one of many possible risks to the organization. (Here's a hint for those in healthcare attempting a HIPAA Security Risk Assessment: you've been told your asset, and it's electronic Protected Health Information.)

2. Your organization has identified threats to its assets, and possibly threats to the organization's mission at a broader level. Each item on your threat list has implicit or explicit threat actors, such as employees, hacktivists, Mother Nature, competitors, and so on. (If the only or primary threat actors are regulators, that's a compliance assessment.) These threats are documented and used throughout the risk assessment process. If the questions you are asking take the form, "Are we fulfilling this particular regulatory requirement for X?", you are not doing a risk assessment; you are doing a compliance assessment. The questions should take the form, "How could these threats act on our assets to cause harm?"

3. Your organization's discussion focuses mostly on possible future negative events, with current facts contributing information to determine aspects of risk such as probability and impact. Risks are stated in terms of impact to organizational mission, objectives, operations, or value. (If your "risks" are each equivalent to "We are not compliant," that is a compliance assessment.2) Risks may manifest as lost revenue, diminished reputation, direct customer impact, financial impact, possible regulatory action, inability to conduct business, and so on.
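To illustrate the distinction these tests draw, here is a minimal, hypothetical sketch in Python. Every field name and value is invented for illustration, not drawn from any standard or regulation.

```python
# Hypothetical illustration: a compliance finding is a statement about a
# requirement; a risk is a possible future event acting on an asset.
compliance_finding = {
    "requirement": "encrypt data at rest",   # invented example requirement
    "status": "not met",
}

risk = {
    "asset": "electronic Protected Health Information",
    "threat_actor": "external criminal group",
    "event": "bulk exfiltration of patient records",
    "likelihood": "medium",
    "impact": "breach costs, regulatory action, loss of patient trust",
}
```

Only the second structure supports the questions the three tests require: what asset, which threat actors, what possible future event, and what harm.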

To reiterate, the above three items are not meant to be comprehensive and are not all that is required for a risk assessment. If you are doing the above, your risk assessment process may still not be as complete or as mature as your organization needs. However, if you are not doing the above three things, then you are not on the right track and need to re-evaluate your approach.

Next Steps


At this point you may be asking, "What do I do if I missed one or more of these?” Here are recommended next steps:

1. Research available, industry-appropriate risk assessment methodologies and approaches. If you have access to ISO 31010, that standard contains a comprehensive list and comparative analysis of risk analysis methods. The list is generally not necessary for getting started and matters more for maturing risk assessment processes, but it is a useful reference. Also look to industry-specific standards, high-performing peers, and qualified, experienced consultants for guidance and assistance.

2. Share your concerns and a proposed risk assessment approach with senior management. Provide plausible business rationale for your concerns and business-based justification for your proposed approach. This is another area where an experienced information security risk consultant can help, particularly one familiar with your industry. Such an advisor can bring to light specific business requirements, risks and benefits related to conducting a proper risk assessment.

3. Select your people, methods, and tools - in that order. Risk assessment benefits from multiple business and technical perspectives. Include various IT specialties, lines of business, and other specialists, depending on the particular assessment.

4. Conduct your risk assessment(s).

5. Track, manage, and report risks on an ongoing basis. Risks should be documented and explained in non-technical, relevant terms that organizational leaders can understand. This step is technically risk management; I include it because risk assessment has little purpose and negligible impact without some level of risk management.


1 Examples include:

  • NIST Special Publication SP 800-30, Guide for Conducting Risk Assessments
  • ISO/IEC 31010:2009, Risk management -- Risk assessment techniques
  • ISO/IEC 27005:2011, Information technology -- Security techniques -- Information security risk management
  • CERT OCTAVE (Operationally Critical Threat, Asset, and Vulnerability Evaluation) Allegro
  • PCI Data Security Standard (PCI DSS) Information Supplement: PCI DSS Risk Assessment Guidelines

2 Compliance issues are not excluded, because non-compliance impacts may include loss of license, fines, and additional oversight, all of which have operational or financial implications. These, in turn, have consequences for the mission, business objectives, operations, and value of an organization.

Tuesday, October 7, 2014

Risk assessment vs risk analysis, and why it doesn't matter

Over the last decade I have witnessed heated debates about the terms risk assessment and risk analysis. In most cases, the outcome of these debates is not a richer understanding of the risk domain but a fruitless exercise in politics and getting (or not getting) along. This got me thinking about the circumstances under which these and other risk definitions are important, and those under which they are not.

On Audience

We could speak the truest sentence ever spoken, using exactly the correct words, and it would still be futile if said to a non-native speaker visiting a foreign land. This may sound absurd when we think of foreign tourists, but I have seen security and risk people do the equivalent with non-practitioners often enough to cringe. Shouting 'risk analysis' over and over is no more effective than shouting 'go 1.2 miles west' at a tourist over and over. Hanging your communication hat on others' understanding of your specialized vocabulary is a sure-fire way to lose your audience.

I propose that when dealing with audiences who are not risk practitioners, you should do as you would with a non-native speaker: don't expect them to know the nuances of a particular word or phrase, and don't base everything you're saying on that understanding. Instead, use a greater variety of words, use examples, draw pictures, and gesture. Keep doing that until it's apparent that everyone in the room gets it and wants you to move on to the discussion and decisions at hand.

Of course, when communicating with risk peers in your sub-specialty, it is acceptable and necessary to use the terms and concepts appropriate to that sub-specialty.

On Authority

After I drafted this article, I happened to pick up the July 2014 issue of the Risk Analysis Journal, which contains the special series "Foundational Issues in Risk Analysis". The first paragraph of the first article, "Foundational Issues in Risk Assessment and Risk Management", states, in part: "Lack of consensus on even basic terminology and principles, lack of proper scientific support, and justification of many definitions and perspectives lead to an unacceptable situation for operatively managing risk with confidence and success." This statement comes from authors who are researchers in the field, one of whom is an area editor for the journal - in short, knowledgeable people. If this is the situation in a field that had its beginnings in the 1980s, how likely and how important is it that your organization develops the perfect definitions for these terms? Not very likely, and probably not very important.

What I have seen work reasonably well is to settle collectively on working terms, under the leadership of the highest-level risk management function in your organization. Yes, that means the terms and principles ultimately adopted will not account for the nuances of your specialized risk area, but the alternative is that parts of the organization won't effectively communicate with one another. That is worse, overall, than being stymied in your effort to translate the details of your specialty into business concerns.

Summary

Pick basic and simple definitions and move forward. In a few years, your organization just might iterate enough to arrive at rigorous and thorough definitions and, more importantly, to achieve an organization-wide understanding. Who knows? The field could settle on formal definitions for basic terms that work across organizations and sub-specialties at about the same time.

Wednesday, August 6, 2014

Top 5 meta-findings from 12 years of security risk assessments in healthcare


My background: over the last 12 years I have performed more than 150 security risk assessments, for organizations large and small, with scopes ranging from an entire enterprise down to a single application, system, or vendor. Some of these assessments took a day; some took months.

I’m writing this post in the hopes that:
* it can serve as a useful starting point for dialog within your organization about these issues
* enough people will read this that the prevalence of these findings will decrease over time
* my work performing risk assessment becomes more interesting and challenging over time
* I can remove all these meta-findings from my list 15 years from now

Risk assessments can contain all manner of findings, from high-level policy issues to detailed technical issues. Correcting the meta-findings that follow would significantly improve the effective management of all information security risks:

1. The "risk assessments” performed to date are actually compliance or control assessments. The organization (1) hasn’t complied with the HIPAA Security Rule and Meaningful Use requirements to perform risk assessment, and (2) has skipped the step that forms the fundamental basis for planning, thereby missing opportunities to efficiently use the organization's resources to appropriately and effectively protect patient and/or member data.

2. About one-third of the activities that are either universally important to effective security programs or needed to address the organization's unique environment were overlooked, because consideration started and ended with an interpretation of the HIPAA Security Rule, and included only the more directly worded CFR § 164.308 through 164.312. Specifically, the HIPAA Security Rule was misconstrued and misinterpreted because the entire preamble and CFR § 164.306(a)(1) through (3) were skipped in the rush to quickly "be compliant." 1

3. IT, Information Security, Facilities, Materials/Procurement, HR, Audit, and Compliance have distinct perspectives about information security, and these perspectives have not been harmonized, formalized, and agreed to. The organization as a whole lacks a uniform and coordinated approach and is missing a well-considered set of roles and responsibilities.

4. A large portion of the technical issues the organization is experiencing results from processes or architectures that do not exist, are poorly designed or implemented, or are supported by understaffed functions. Technical tools intended to support security are under-utilized or improperly utilized, and much time is spent chasing the specific technical issues that result. The focus should instead be on identifying and correcting the organizational systems, business processes, personal incentives, and (mis-aligned) accountabilities that create and perpetuate the technical issues.

5. Employed physicians, nurses, and staff do not support security activities and policies because no one has explained, in the language of their professions, how their personal and individual missions can be put in jeopardy. Leaders, physicians with privileges, and sponsoring organizations have decision-making influence on business goals and risks, and in that process information security risks are under-factored because they are explained in technical terms rather than business terms.

In future posts, I will tackle some of these issues and provide recommendations for addressing them in your organization.

1 For those not familiar, CFR § 164.306 establishes "risks to ePHI" (not compliance) as the basis for all decision making related to security under the HIPAA Security Rule.

Wednesday, April 23, 2014

The Difficulties of Inherent Risk

The concept of inherent risk is occasionally invoked by information security and information risk practitioners. Inherent risk is difficult to conceptualize, and even more difficult to apply in practice.

The typical equation is: inherent risk + controls = residual risk.

It is easy to mask a poor model when it is applied in an abstract field such as information security or information risk assessment. The problem with such models can be illustrated by applying them to examples that have a physical reality. Here is one:

A city-dweller is considering going to a grocery store 10 blocks away, and whether to get there by walking, bicycling, driving, or public transportation. As he considers his options, he decides to determine the inherent risk of staying home and the inherent risk of each transportation option. He must consider each choice as if conducted with eyes closed and ears plugged, ignorant of the neighborhood, vehicular traffic laws, and physics. He must pretend to have no knowledge of the local culture around pedestrians or cyclists, and pretend not to feel curbs as he stumbles over them. He must set aside any expectation that people will adjust their behavior upon encountering him or act to protect him; that most vehicles in cities have low profiles, travel at low speeds, and rarely cause catastrophic harm when they strike a person; that vehicles will likely be present only on streets and not sidewalks; that building facades won't come loose and fracture his skull; that he won't get hit by lightning merely by virtue of being outside; and so on. These considerations might seem ridiculous, but all of them, and a nearly infinite number more, must be eliminated to arrive at inherent risk. If even one is left in, it is no longer inherent risk.

On top of that conundrum, the process requires that "controls" be added back into the equation. So, once "inherent risk" is determined, the next step is to add back traffic laws, citizen goodwill, building codes, the possible use of seat belts or helmets, pedestrian crossings, the general sense that thunder implies rain and shelter will likely be sought, general awareness and competence, and so on.
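To make the futility concrete, here is a toy numeric sketch in Python. Every number and control name is invented; the point is the procedure, not the values: recovering "inherent" risk means backing every implicit control out of an estimable residual risk, and the list of implicit controls has no principled end.

```python
# Toy illustration (all numbers invented): start from an estimable
# residual risk and try to back out implicit controls to reach
# "inherent" risk. The answer depends entirely on which controls you
# happen to remember to include.
residual_risk = 0.002  # estimated chance of serious injury on the trip

implicit_controls = {
    "traffic laws": 0.30,             # assumed fractional risk reduction
    "drivers paying attention": 0.25,
    "building codes": 0.05,
    "curbs, signals, crossings": 0.10,
    # ...a nearly infinite number of others
}

inherent_risk = residual_risk
for reduction in implicit_controls.values():
    inherent_risk /= (1 - reduction)  # undo each control's assumed effect

print(f"'Inherent' risk after backing out only 4 controls: {inherent_risk:.4f}")
# Every additional control inflates the figure further; omit one and the
# result changes. There is no principled stopping point.
```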

How does one even begin to calculate "inherent risk"? Is this how people think about risk? Clearly not. Is this type of calculation even feasible? Not really. (We haven't even considered benefits, which are addressed in this blog in the post on risk matrices.) The concept of inherent risk has been conspicuously absent from security and risk standards and methods, and most experienced practitioners dropped it from their approaches long ago. The attempt to address inherent risk confuses and complicates the fields of risk assessment and risk management while adding little value. It is reasonable to expect that inherent risk no longer be promoted or used. Yet, within the last year, I have become aware of initiatives in risk assessment and modeling that include, and depend upon, the definition and determination of inherent risk. The stories of these initiatives were painful to hear. It was even more painful to learn that the idea was being promoted by a group believed to be expert in the field of information security management programs.

To be clear: outside of situations in which inherent risk is rigorously determined to be the best approach, it should not be used by information security and information risk practitioners. Those who insist on using the construct of inherent risk have a Sisyphean task ahead of them.

Wednesday, March 26, 2014

Towards a More Useful Visualization of Risk

I am a photographer, and I consider natural landscapes my most challenging subject. This is because I must capture both what is inside and outside of the frame by including just the right objects and presenting them in a meaningful way. Examined through the analogy of landscape photography, risk matrices leave out too many important things (benefits, possibilities, and uncertainty), do a poor job of providing an understanding of the situation in context (sense-making), and leave decision makers over-reliant on visual cues and motifs (spurious visuals). A better representation would put risk information in the context of relevant business data, provide visual cues for potential focus areas, and give better signals for decision making.

Typically a risk matrix presents an event, a likelihood, and an impact. Important information is missing, but the matrix tricks the viewer into thinking it is all there. Businesses sometimes get caught up in the visuals; it is useful to drop back to the underlying data to see what the visuals really say.

Here are some examples:
  • skin scrape or minor cut (medium likelihood, minor impact)
  • paper cut (medium likelihood, inconsequential impact) 
  • death (low likelihood, catastrophic impact)
  • lost limb (low likelihood, severe impact)
  • stolen wallet (low likelihood, moderate impact)

Which risk should get the most attention and resources? How is this determined? What specifically is being asked and what information is available? Is this the same type of information found in a risk matrix used for business purposes?

To demonstrate what is missing, here are narratives for each example above, respectively:
  • Peter is climbing Half Dome, a lifelong goal
  • Quinn files paperwork in an office, earning $1800 weekly
  • Ryan is undergoing a possibly lethal treatment, to cure a debilitating, painful disease
  • Steve works manually, loading a sheet metal press, earning $1200 weekly
  • Tom is spending three days in an area noted for its pick-pockets, during his dream vacation

The narratives reveal what the risk matrix lacks: context. Needless to say, when presented in a risk matrix, death would likely get the most attention, discussion and debate. Any significant deflection of attention would need to be based on information outside the frame - and that's a poor informational and visual model. In essence, the viewer is fighting against the tool that is meant to assist in deciding where to put that attention. When that happens, something is wrong.

A better representation is a clustered stock chart in a format I call Benefit-Harm Pairing. Compared to the risk matrix, Benefit-Harm Pairing shows more aspects of risk and better approximates a natural, narrative style of thinking. It also provides action indicators that are more tailored to the risk than the risk matrix's generic "reduce likelihood." The chart below represents: activity, harm (expected, minimum, maximum), and benefit (expected, minimum, maximum).
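As an illustration, here is a minimal matplotlib sketch of how such a clustered stock chart could be drawn. All values are invented, on an arbitrary 0-10 scale, shaped to echo the narratives above (e.g., the treatment shows a high expected benefit but also a high maximum harm and a low minimum benefit).

```python
# A hypothetical rendering of a Benefit-Harm Pairing chart.
# All values are invented, on an arbitrary 0-10 scale, for illustration.
import matplotlib.pyplot as plt

activities = ["Treatment", "Press work", "Climb", "Filing", "Vacation"]
# (expected, minimum, maximum) per activity
harm    = [(5, 1, 10), (3, 1, 8), (2, 0, 5), (1, 0, 2), (2, 0, 6)]
benefit = [(7, 2, 10), (5, 1, 6), (8, 6, 10), (4, 3, 4), (8, 6, 9)]

fig, ax = plt.subplots()
for i, ((he, hmin, hmax), (be, bmin, bmax)) in enumerate(zip(harm, benefit)):
    # Harm: expected value as a point, min-max as the range bar.
    ax.errorbar(i - 0.15, he, yerr=[[he - hmin], [hmax - he]], fmt="o",
                color="firebrick", capsize=4,
                label="Harm" if i == 0 else None)
    # Benefit: expected value as a point, min-max as the range bar.
    ax.errorbar(i + 0.15, be, yerr=[[be - bmin], [bmax - be]], fmt="o",
                color="seagreen", capsize=4,
                label="Benefit" if i == 0 else None)

ax.set_xticks(range(len(activities)))
ax.set_xticklabels(activities)
ax.set_ylabel("Magnitude (illustrative 0-10 scale)")
ax.set_title("Benefit-Harm Pairing (hypothetical values)")
ax.legend()
plt.show()
```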



The sample Benefit-Harm Pairing presents a different picture of risk. Starting on the left with the treatment activity: the expected benefit is greater than the expected harm, but there are also problems - a high maximum harm and a low minimum benefit.

Benefit-Harm Pairing has a strong idealized form and a weak idealized form; both translate into the simple comparisons sketched in code after the lists below. For each risk, the strong idealized form seeks that:
  • all possible benefits are greater than all possible harms

and the weak idealized form seeks that:
  • expected benefit is greater than expected harm
  • maximum harm is close to or less than expected benefit
  • minimum benefit is close to or greater than expected harm
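Here is a minimal Python sketch of those checks; the tolerance used for "close to" is an arbitrary illustrative choice.

```python
# Minimal checks for the two idealized forms. The "close to" tolerance
# is an arbitrary illustrative choice, not a prescribed value.
def strong_form(harm, benefit):
    """All possible benefits are greater than all possible harms."""
    _, _, hmax = harm          # (expected, minimum, maximum)
    _, bmin, _ = benefit
    return bmin > hmax

def weak_form(harm, benefit, tol=1.0):
    he, _, hmax = harm
    be, bmin, _ = benefit
    return (be > he                 # expected benefit exceeds expected harm
            and hmax <= be + tol    # max harm close to or below expected benefit
            and bmin >= he - tol)   # min benefit close to or above expected harm

# Invented treatment values from the chart sketch above:
treatment_harm, treatment_benefit = (5, 1, 10), (7, 2, 10)
print(strong_form(treatment_harm, treatment_benefit))  # False
print(weak_form(treatment_harm, treatment_benefit))    # False: max harm too high
```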

In our treatment example, the weak idealized form leads to the same kinds of tailored tactics that experience has led to in actual practice:
  • lower the maximum harm - by employing counter-agents for the most likely fatality-inducing aspects of treatment
  • increase minimum benefit - by supplementing the primary treatment with less beneficial but more proven strategies that alleviate rather than cure

Depending on circumstances, we also have the option of assessment over time which offers an additional tactic:
  • stop harm when utility isn't realized - by monitoring treatment to make sure there are signs that it is working as expected, before continuing to expose the patient to lethal treatment

We see a similar Benefit-Harm Pairing shape with the factory worker. One of the indicated general approaches is to bring the minimum benefit up closer to the potential harm. In practice, this could mean providing guaranteed lifetime benefits to compensate for work-related injuries.

The next three scenarios differ from the prior two but are similar to each other in basic shape. Taking the vacation example, the general approach indicated is to lower the maximum harm. A sample solution is splitting the wallet's contents among multiple pockets and, where possible, a hotel safe.

Benefit-Harm Pairing addresses a number of things that the traditional risk matrix does not. It:

  • is based on activities rather than events
  • provides benefit information as a context for harm information
  • represents expected values and possible ranges simultaneously
  • better addresses the goal of surfacing risk appetite and risk tolerance
  • more accurately reflects real-world prioritization and resource allocation
  • is closer to narrative, which is how people naturally think about risk
  • can be used for individual activities, sets of activities, or options for activities
  • addresses black swan events and nuisance events with equal effectiveness
  • allows for any kind of risk to be more easily integrated

I suggest that while a risk matrix does give a view of risk, it isn't a particularly useful view. Risk matrix conversations tend to focus on the correct values of specific likelihoods and impacts, often as a proxy for benefits, desires, and other data outside the frame; people intuitively know important data isn't in the model. Benefit-Harm Pairing drives the conversation closer to the heart of the matter: are the benefits of this action worth the harms? Where could we focus resources and make adjustments to improve the relative benefit-harm outcomes? Should we abandon an activity? Benefit-Harm Pairing has flaws, but it seems more useful on the whole. In the words of George E. P. Box: "All models are wrong, but some are useful."


I look forward to your comments and feedback.

Tuesday, March 11, 2014

Big Data Privacy Risks

MIT and the White House held a big data privacy workshop on March 3, 2014. Many interesting topics were discussed, including the privacy of medical data.

"Medical data is special, but not because privacy is more important than in other areas. It's special because progress in healthcare is too important and too urgent to wait for privacy to be solved. I'm in favor of privacy but not at the cost of avoidable pain and suffering and death. We need to find ways to make full EMR data sets available to researchers. We'll have to live with some violations of privacy, as we do today.  And as Mike [Stonebraker] said, what we need to focus on is auditing mechanisms, and finding ways to punish those that misbehave." (at 1:46:00) - John Guttag, Professor, MIT, “Clinical Data: Opportunities and Obstacles,” 03/03/2014

The quoted remarks above are a call to action for privacy and data use advocates alike. Decisions made without extensive consideration of the benefits and harms of the many options surrounding privacy and the collection, aggregation, and use of personal data would be imprudent (https://en.wikipedia.org/wiki/Precautionary_principle). Further, because of the nuance of the issues, pervasive myths, and a general lack of familiarity with the risks (especially among big data practitioners, and even among privacy practitioners), the dialog cannot be both short-lived and legitimate.

In the six-hour video, the arguments for not addressing privacy take the forms exemplified by the quote above: general pleas ("too important and too urgent"), fear-mongering ("pain and suffering and death"), and dubious assurances ("punish those that misbehave"). This is not a solid foundation for making risk decisions. A valid and salient argument for the wholesale collection and analysis of data, healthcare or otherwise, is never constructed, and no one provides specific or measurable benefits.

I would like to see an approach or framework for calculating privacy-related risks and benefits that could be applied in these situations. The framework should not be domain-specific; if rigorously constructed, it could be applied to population health, national defense, consumer marketing, and all other privacy domains with equal effectiveness. Such a framework would allow individual knowledge and experiences to be included in collective discussion and analysis, and would allow for grounded debate about the outcomes. At the very least, it would provide a basis for more meaningful dialog and accelerate the ability of researchers, the government, and the public to conduct a more informed and nuanced risk analysis.

Principles which can be used for such a universal framework have already been developed or are being updated by international organizations:
  • OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data
  • EU Data Protection Directive
  • APEC Privacy Framework
The development of such a risk analysis framework should start by incorporating these privacy principles. How it would then be completed, and which assessment methodologies would be appropriate, remains to be determined, but the discussion should begin in earnest. It's time for less hand-waving and something more substantive.

Friday, February 28, 2014

Welcome

Decreasing risk while enabling pursuit of opportunity without unnecessary restrictions, delays, and cost is the heart of a risk strategy. In the best case, a risk strategy lets one go faster, much like the brakes on a car.*

The risk management profession has long held that its practices - which provide process and tools for making risk decisions - will solve this problem. What has become clear is that as organizations get better at understanding and identifying risks, they have more decisions to make, and so face a further problem: overwhelmed decision makers.

A risk strategy presumes that we don't want to make decisions framed in risk alone, nor always pass those decisions to the top echelons of the organization. We want to run our businesses, and we want individuals to make good risk decisions while we do so.

This blog and my @riskstrategist tweets will address some of this landscape. I look forward to sharing with you, and to your feedback.

* Expect to see a post in the coming weeks on the real value and purpose of brakes, and how it relates to risk strategy.