Chris Brown on Risk Strategy
Thursday, November 8, 2018
Top-N List "Frameworks" and Why They Will Fail You
Lately, I’ve noticed a trend of prioritizing security program activities using Top-N security issue lists under the guise of using them as ‘a framework’. While possibly a useful input to decision making, Top-N lists amount to someone else’s risk assessment, one that doesn’t necessarily address your business objectives, operating model, technical environment, or industry-specific issues. They are also rarely the kind of all-encompassing catalog or taxonomy of considerations that the word “framework” denotes.
I see the overuse of generalized external risk assessments (Top-N “frameworks”) as the result of two issues. First, the poor state of basic cybersecurity hygiene, which is the result of accumulated cybersecurity and information technology technical debt, the kind of debt that now requires a defensible approach to being only partially paid down. Second, the failure to use risk assessment in the context of business decision making, supported by communicating technical risks in business risk terms. Not coincidentally, the second issue is the root of the first.
Wednesday, June 13, 2018
Everything Old is New Again, or Why Firewalls Were Always Supplemental
Those I have spoken to who entered the security field in the last 15 years have often expressed the notion of cybersecurity as a network-first endeavor: firewalls, DMZs, perimeter controls, and so on. Those who have been in the field for 30 years or more will likely remember that firewalls were created as a backstop for host security, at a time when host-based protection from the network was scarcely on the radar of most OS designers.
As the network-first approach continues to show the limits of its usefulness in environments where network boundaries are becoming less definable and where advanced persistent threats are common, it makes sense to advocate for a shift back to the host-centric mindset.
I suspect that this will be widely viewed as cost prohibitive, burdensome, unrealistic, and so on. Most good ideas are characterized this way until effective approaches and efficient methods are developed.
Thursday, November 16, 2017
The Trap of Risk Assessment Tools
Humans understand and respond to narrative innately. Impactful events are explained as stories, successful calls to action are delivered as descriptions of a desired or undesired future, and people make decisions every day through explanations.
We do this because narratives are a natural form for communicating information and insight. Narratives are therefore also a natural form for risk communication. With regard to risk, narratives help listeners visualize and develop internalized models of risk, which in turn represent the truth in a way that numbers alone won't (and can't) for most individuals and most situations.
Much of the current cybersecurity and enterprise risk management world is premised on the idea (or ideal) that risks can be meaningfully summarized in ordinal form, such as “very high likelihood” or “moderate impact.” These English ordinals are then often converted to numbers. Sometimes math is performed, and a magic risk number is derived. Some models even use dollars or dollar ranges in their outputs. However, focusing on getting to the “right number”, and reporting that focuses on numbers, denies the reader the benefit of context. It makes it hard for decision makers to object to the assumptions embodied in the inputs and the calculations. Numbers or dollars alone convey objectivity and authority, even when that is not warranted. The work of "getting to the numbers" behind closed doors only exacerbates this issue. In the best case it systematizes the subtle biases of the assessors, and in the worst case it distorts the model to fit preconceived notions about organizational priorities, or to conform to personal risk worldviews. Be cautious about using numbers alone to represent risk.
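To make the pattern concrete, here is a minimal sketch of the kind of ordinal-to-number scoring described above; the scales and the likelihood-times-impact math are illustrative assumptions on my part, not any particular framework's model:

```python
# A minimal sketch of the ordinal-to-number scoring pattern critiqued above.
# The scales and the multiplication are illustrative assumptions, not any
# particular framework's model.

LIKELIHOOD = {"very low": 1, "low": 2, "moderate": 3, "high": 4, "very high": 5}
IMPACT = {"very low": 1, "low": 2, "moderate": 3, "high": 4, "very high": 5}

def risk_score(likelihood: str, impact: str) -> int:
    """Classic 5x5 matrix arithmetic: score = likelihood x impact."""
    return LIKELIHOOD[likelihood] * IMPACT[impact]

# "Very high likelihood, moderate impact" collapses to a context-free 15.
# Every assumption behind the two ordinal picks is invisible to the reader.
print(risk_score("very high", "moderate"))  # 15
```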
In contrast, risk assessment communication that focuses on narrative and embraces dialog allows for discoveries and insights by the decision makers, provides an opportunity to question assumptions, and enables sharing and alignment of perspectives. Risk decisions, everyday and tough alike, are best born of discussion, and will be for the foreseeable future.
If you are responsible for risk assessment at your organization, don’t fall prey to a tool’s promise, or to the unquestioned illusion of objectivity and certainty that comes from numbers. If you are performing risk assessment where the focus is populating fields in a spreadsheet or application, no matter how advanced it is, you run the risk that the one thing everyone needs to consider gets lost through reductionism. Do the numbers if required, but wrap them in an informative discussion, and design and facilitate that discussion to convey risks in business terms that support business decision making.
Friday, April 14, 2017
You Can Keep Your Compliance, I Have a Mission
Doctors practice very real, very tangible risk management every day when caring for patients. The decisions they make affect the wellbeing and the very lives of their patients. The trade-offs between various treatment options, judgments about future patient behavior based on historical behavior, the upsides and downsides of surgical and pharmacological treatments versus the likelihood of behavior changes, and a long list of other considerations are based on risk assessment and are themselves part of risk assessment.
Patient care is a complicated and nuanced field, and risk management is core to managing the complexity. As such, doctors have a seasoned understanding of risk management that gives them a unique perspective on information security and compliance. What they do is telling. They reframe compliance and security with a question derived from their mission: What is the impact to patient care?
While it should be fairly clear to most that patient care is more important than compliance or information security, what is less often clear to practitioners is that the only framing in which to consider either compliance or information security is that of impact to patient care (so long as patient care is defined broadly enough). This highlights the need for something to bridge from compliance and information security to patient care; and that bridge is risk management.
Most doctors understand risk management innately, at least as well as, and perhaps even more intimately than those in the compliance, technology, and security fields. They haven’t been steeped in the myth of the “choice between security and functionality” and they are willing to have substantive conversations, often leading to “let’s do both” solutions. For all organizations, healthcare or otherwise, it comes down to mission and risks to the mission. If you are having a conversation where your ultimate goal is compliance or “great” security, you’re doing yourself and your organization a disfavor and a disservice.
Instead, ask “what is the impact to our mission?”
Labels:
compliance,
healthcare,
management,
risk management,
risk strategy
Sunday, September 11, 2016
Mitigating Catastrophic Risks of Smart Meters
I posed a question on quora.com about whether it was feasible to design smart meters so that the power couldn’t be entirely shut off by remote control. In particular, I wanted to know if some small portion of the power could be on a separate mechanism that would require a physical visit by the utility company to shut it off.
The question received an articulate, passionate, and well-reasoned response that explained why such a design should not be pursued. The author made good points about motor burnouts, damaged equipment, and so on - it was apparent that he was familiar with the topic of electricity. He also implied that my design suggestion was downright dangerous, and perhaps indicated a lack of thoughtfulness on my part for even considering such a design.
I explained that I was concerned that a cyber attacker could remotely shut off power to tens or hundreds of thousands of electrical utility customers and then brick (render inoperable) the remote control unit. This of course would require the utility company to physically visit every customer to restore power. This would especially affect those who need power for medical devices.
The passionate respondent replied:
"That’s a good point, I never thought of that."This is the nature of risk in complex systems: risks have a tendency to emerge from unexpected places when evaluated solely by experts in specific domains. Experts tend to bring specific training, specialized ethics, implied goals, and certain types of experiences. This narrow focus is good for daily work, but can negatively affect risk assessments. In the case of the passionate respondent it's clear that the electrical system and equipment were front and center, perhaps summarized by a mantra of "don't damage equipment!" However, in the broader context, this important principle/ethic was too narrow to address broad needs and stakeholder expectations. In this case it came down to a framing issue.
How should risk assessments be designed so that they avoid the issues of being too narrowly focused? Here are a few ideas:
- Define the target of the risk assessment in terms that have relevance to stakeholders.
- Invite stakeholders who can, as directly as possible, represent the interests and perspectives of the diverse set of groups who derive benefits(1) from the target.
- Solicit these stakeholders to identify their key benefits or needs relative to the target.
- Focus the risk assessment on risks to benefits rather than on technical risks.
- Assume that intellectually "ideal" technical solutions are probably flawed from various non-technical perspectives, and elicit those flaws from all participants (especially the non-technical and non-experts).
- Design the process and conduct the assessment so that unpopular or unorthodox ideas are not unduly suppressed or swamped.
To risk assessors everywhere: happy hunting!
(1) "Benefits" in this context is meant to be all-inclusive, but could be substituted with value, dependencies, goal achievement, meeting objectives, accomplishing mission, and so on.
Thursday, May 12, 2016
What is a "Leading" Compliance Function?
A friend asked me not too long ago: “How do you measure the maturity of a compliance function?”
As I think of an organization "leading on compliance," ironically, I think of the compliance function as following, not leading, and as being concerned more with risk than compliance.
Let me explain. Compliance covers some things, and organizational need covers some things. In the most mature organizations, I think that 90-95% of compliance needs are covered if the organizational needs are being met with efficiency and effectiveness. In contrast, covering compliance needs can bear little or no relation to covering business needs; the degree to which this occurs is highly dependent upon industry, specific regulation, and company operating behaviors. This is not meant to diminish the importance of compliance, or security, or HR, or any other function, but these functions are all small rocks. Business is the big rocks. For everything to fit into the jar, the big rocks have to go in first. (http://www.appleseeds.org/Big-Rocks_Covey.htm) So, frustratingly, to achieve the highest maturity of compliance, you need the organization to be operating at the highest levels of maturity as well. Otherwise, compliance before business could cause the business to decrease the size of a business rock, or simply not attempt to get one more business rock into the jar.
So the question I like to ask is: how does the compliance function operate in a way that encourages the organization to recognize and act on what it needs to do from a larger organizational perspective, *while* allowing compliance to focus on the specific things it needs to do to fill in with compliance between the bigger business rocks? Note: If you can do this, not only will you be meeting all your obligations, but people will also like you a lot more.
Culture, processes, and collaboration are core for both compliance and risk management. GRC, for me, is about efficiency and some enablement. You can’t get to the highest levels of maturity without GRC, but if you lead with GRC, it’s unlikely you’ll get to even the mid-levels of maturity without going back, doing the organizational steps, and then re-working GRC, likely taking more time, incurring greater cost, and making people more unhappy.
With regard to risk management, there are definite differences from compliance. Here’s the crux: compliance serves those outside of the organization, usually with a pretty narrow view of what they want from your organization. Risk management serves your organization and (ideally) does so with as complete a view as is reasonable. Specifically, risk management facilitates the allocation of resources toward low-risk, high-profit activities and away from high-risk, low-profit activities (there are definite exceptions, like bold business moves that are long bets but could be highly profitable). Compliance can be viewed as avoiding fines, business interruption due to court orders or withdrawal of licensure or certification, and a few other very specific things; compliance is one type of risk among many.
Here’s how I think about it: Formal risk management is decision support, and should be a core element of decision making organization wide. In reality, risk management already is at the core of every decision, but it’s not usually formal. Compliance is a double check.
With regard to compliance, here are some things I would consider in determining maturity:
- Is compliance viewed as a trigger for the business to think about risk (of which compliance is a subset)?
- Does compliance have a view into business processes, and does it understand the business context?
- To what degree does compliance talk about risks of non-compliance (vs. taking a there-is-no-choice approach)?
- Does compliance collaborate on solutions driven by operational needs, with compliance merged in? Or does compliance dictate solutions where compliance trumps regardless of operational perspectives or priorities or feasibility or cost? Is compliance as flexible as it reasonably can be?
- Does compliance seek out the larger problem? For example, does compliance require that the instance they find of an issue be fixed (this is usually an audit perspective as well), or do they seek root causes and process improvement? Let’s face it, no one wants to spend 100 hours to fix what they know is 60% of the problem, when they can spend 130 hours fixing 98% of the problem (you know: actually really solve problems)
- How well does compliance surface issues to the people who are accountable for certain types of compliance (e.g. HIPAA security officer, HIPAA compliance officer, chief legal counsel)? (This is also where GRC helps.)
- How strategically aligned is Compliance to the other risk functions such as Internal Audit, Risk Management, Information Security, IT Audit/Compliance, and possibly others? How tactically aligned are they? Is annual planning done together (e.g. divide and conquer, no overlap)? Are major findings shared?
- Does compliance adopt business language and align where possible to business measures, or does it operate as an island unto itself (again, a little bit of that there-is-no-choice, compliance-is-its-own-thing attitude)?
Although Norman Marks is fairly widely respected, I don’t always agree with him. However, his post on GRC trends is worth a look (https://iaonline.theiia.org/blogs/marks/2015/trends-in-grc), especially the links, specifically the items from OCEG like this one: http://www.oceg.org/lesson/principled-performance-and-grc/ (I watched it for the first time tonight, after writing all of the above, and it echoes many of the things I say above and adds to them). The blog he links to, yogrc (http://yogrc.typepad.com/), also looks like a pretty good take specifically on GRC.
I look forward to your comments and feedback.
Thursday, December 11, 2014
Data Stewards as Risk Managers and Champions of Information Security
In early 2007, as the information security officer at a health insurance company, I began to consider how to build better connections between information security goals and business goals. I had observed for some time that the closest that the business units generally got to the question of data security was, "Who has access to this application or screen or function within an application?" Most concerns were about data confidentiality, and using screen-level access reviews was a convoluted and confusing proxy for directly addressing access to and uses of data.
During this same time period I developed a few notions. First, process controls primarily serve integrity-oriented goals that are important for narrowing activities, such as for financial audits culminating in a single audited financial statement. Second, data controls inherently serve confidentiality-oriented goals that prevent the uncontrolled spread of data, such as preventing information leaks about mergers and acquisitions in the pre-merger period. Data controls would be a poor, work-intensive way to determine whether a financial statement was accurate. Process controls don’t really apply when a board member accidentally sends merger due-diligence emails to the mailing list for a different board. Keeping in mind that most applications at the time were designed with process controls in mind ("Can this person initiate or approve certain transactions?"), we didn’t have the right approach for the concern. The question of data security got lost in process control thinking, but because so many were “brought up” on process controls, the disconnect wasn’t obvious.
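As a purely illustrative sketch of the distinction (every class, field, and function name below is hypothetical, not drawn from any system mentioned in this post), the two control styles might look like this in code:

```python
from dataclasses import dataclass, field

# All names and fields are hypothetical, for illustration only.

@dataclass
class User:
    id: str
    roles: set = field(default_factory=set)
    approval_limit: float = 0.0
    permitted_data_categories: set = field(default_factory=set)

@dataclass
class Payment:
    initiated_by: str
    amount: float

# Process control: gates an action within a workflow
# ("Can this person initiate or approve certain transactions?").
def may_approve_payment(user: User, payment: Payment) -> bool:
    return ("payment_approver" in user.roles
            and payment.initiated_by != user.id  # segregation of duties
            and payment.amount <= user.approval_limit)

# Data control: gates access to a category of data, regardless of which
# workflow, screen, or application the data flows through.
def may_access(user: User, data_category: str) -> bool:
    return data_category in user.permitted_data_categories
```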
Many organizations experience (i.e. they design) one or more of the following situations:
1. Supervisors are responsible for getting work done, and are simultaneously responsible for defining and authorizing access for their employees. If production is the supervisor's primary goal and incentive, it is safer for the supervisor to err on the side of granting too much access rather than too little. This creates a perverse incentive.
2. People in key business roles may have a general sense that they have a leading organizational role with regard to certain types of data. However, that may only go as far as involvement in a big data initiative, or other large but targeted projects. What is commonly missed is the general accountability, including for data security and risk management. Even if the security role is explicit, there is often little clarity as to how applications, requests for access, business processes, data exchanges, and system configurations affect the accessibility, security, and use of the data. This is what I’d call opaque.
3. IT is expected to protect data, even under an access authorization process which places supervisors in a role of authority. IT is expected to intuitively know when to push back on a request. It is also too often the case in projects that security design considerations for technical solution components happen too far downstream, leaving IT either “accepting” the risk or looking like a roadblock. A good word for IT's situation in these examples is untenable.
4. Level-only classification systems (i.e. those of the sort that use only classifications such as Secret, Confidential, Internal Use Only, Public) fail to establish accountability-aligned ways to classify and declassify, and provide no clear path to making either consistent policy decisions or nuanced decisions about data. Different people make different decisions about the same data, due in part to individual risk temperaments, a variety of personal experiences, profession-driven leanings, and role-specific incentives. This is a broken model.
A Starting Principle
"The person who benefits from accepting a risk on behalf of the organization should also be individually accountable for the consequences."
The importance of the prior statement seems almost too obvious, but many organizations fail to consistently make the connection between risk-taking benefits and risk-taking consequence management. Larger organizations, often by design, create separate processes for accepting risk versus managing risk. Those two intricately tied decisions often happen at different times, in different venues, and in different contexts. This is less than ideal.
An Accountability Model
I developed a model in response to the organizational challenges mentioned above, building on my initial notions, and using the principle of aligned accountability. I will explain a few of the key roles and then how they work together.
Data stewards are responsible for establishing organization-wide policies for a specific type of data. They are also responsible for considering policy exceptions and making ad-hoc decisions when policy is unclear or a situation requires analysis. Depending on size and industry, an organization can have anywhere from a few to 30 data stewards. Generally the data steward is a leader who is close to the organization’s data intake point (e.g. member/customer operations, business partner relations), or presides over the production of the data (e.g. finance, strategy). Among the classically recognizable data stewards are the head of HR (employee demographic and performance information), the head of payroll (salary, benefit, and garnishment information), the CFO (financial performance prior to reporting), and the head of research and development.
Data gatekeepers are aligned to external stakeholders, may handle a variety of data, and are responsible for following and enforcing the data access and use rules created by the data stewards. Generally, every type of audience or external recipient of data has a related data gatekeeping function. Long-established examples include the Legal department acting as the gatekeeper to law enforcement and the courts; Compliance acting as the gatekeeper to regulatory bodies; and Corporate Communications acting as the gatekeeper to the media and the general public. It is a familiar approach, but usually only rigorously applied in specific contexts. It is quite possible, and also useful, to extend the concept to other areas. Almost every function with external touch-points acts in some capacity as a gatekeeper, but in many organizations they are not able to perform that function effectively because of unclear responsibilities and a lack of guidance.
Application sponsors are, as the name implies, those who request, convince the organization to pay for, and provide ongoing demand for a technical capability that supports a business need. Essentially, their business needs are what drive application and system implementations. In their roles, they are accountable to the data stewards for developing requirements that support data policies, and deploying configurations that enforce those policies. They are also accountable to process owners for things such as uptime and process controls such as segregation of duties.
Process owners are responsible for end-to-end processes. General examples include order-to-ship and procure-to-pay. Industry-specific examples exist as well, such as admit-to-discharge and enroll-to-disenroll in healthcare. Process owners establish business requirements for uptime, integrity of transactions, integrity of reporting, authorization of specific transactions, and segregation of duties.
Data custodians are those that hold data on behalf of the stewards, but have no policy-level say over the direct management of the data. The prime internal examples of custodians are the Information Technology department for electronic data and the Facilities function for paper record storage. A more subtle example is Finance acting as a custodian for payroll data on behalf of HR. Externally, cloud service providers are custodians.
Each of the roles represents unique business goals, and it is expected that they also have different perspectives and intrinsic motivations related to certain types of decisions. These responsibilities are designed so that a fundamental tension exists between the data stewards, process owners, and application sponsors. Note that all three are business roles. Also note that Compliance, IT, and Legal have been removed from the middle of the decision making process.
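To make the interplay concrete, here is a hypothetical sketch of steward-set policy and gatekeeper enforcement; the class names, fields, and escalation rule are illustrative assumptions of mine, not an implementation of the model:

```python
from dataclasses import dataclass, field

@dataclass
class DataPolicy:
    # Set by the data steward for one type of data, organization-wide.
    data_type: str
    steward: str
    permitted_audiences: set = field(default_factory=set)

@dataclass
class ReleaseRequest:
    data_type: str
    audience: str    # e.g. "regulator", "media", "law_enforcement"
    gatekeeper: str  # the function aligned to that audience

def gatekeeper_decision(policy: DataPolicy, request: ReleaseRequest) -> str:
    """The gatekeeper enforces the steward's rules; anything the policy
    doesn't cover is escalated to the steward, not decided ad hoc."""
    if request.data_type != policy.data_type:
        return "wrong steward: route to the steward for this data type"
    if request.audience in policy.permitted_audiences:
        return f"release via {request.gatekeeper}"
    return f"escalate to the steward ({policy.steward}) for an exception decision"

# Example: a regulator requests payroll data; Compliance is the gatekeeper.
payroll = DataPolicy("payroll", steward="Head of Payroll",
                     permitted_audiences={"regulator"})
print(gatekeeper_decision(payroll,
                          ReleaseRequest("payroll", "regulator", "Compliance")))
```

Note how the decision routes back to the steward by default; that is the aligned-accountability principle expressed as control flow.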
Benefits of the Model
In short, the primary benefit is that the business units can focus on risk decisions in a more business-oriented context. The discussion is no longer focused on security, control gaps, or regulatory issues as a proxy for business concerns; it is actually about those business concerns. This makes decision making more straightforward, better suited to long-term planning, and more responsive to changing business and operating realities.
Contrast this model against how risk decisions often happen... indirectly and inefficiently mediated through specialized IT, Audit, or Compliance contexts.
Which would you choose?
Labels:
data,
governance,
management,
organization,
practices,
risk,
strategy