Sunday, September 11, 2016

Mitigating Catastrophic Risks of Smart Meters

I posed a question on quora.com about whether it was feasible to design smart meters so that the power couldn’t be entirely shut off by remote control. In particular, I wanted to know if some small portion of the power could be on a separate mechanism that would require a physical visit by the utility company to shut it off.
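
To make the proposal concrete, here is a minimal sketch of the control logic I had in mind, written in Python for readability. The class, relay names, command strings, and service-key check are all hypothetical illustrations, not any vendor's actual interface.

    # Minimal sketch of the proposed split-circuit design. Everything here
    # (names, commands, the service-key check) is hypothetical and for
    # illustration only; no real meter exposes this interface.

    LOCAL_SERVICE_KEY = "field-tech-key"  # stands in for a physical visit

    class SmartMeter:
        def __init__(self):
            self.main_relay_closed = True      # remotely controllable
            self.lifeline_relay_closed = True  # no remote code path touches this

        def handle_remote_command(self, command):
            """The utility's head-end system can only operate the main relay."""
            if command == "DISCONNECT":
                self.main_relay_closed = False  # lifeline circuit stays energized
            elif command == "RECONNECT":
                self.main_relay_closed = True
            # Unknown commands are ignored; the lifeline relay is
            # unreachable from this code path by design.

        def local_service_disconnect(self, key):
            """Opening the lifeline requires presence at the meter; the key
            models a physical interlock, not a network credential."""
            if key == LOCAL_SERVICE_KEY:
                self.lifeline_relay_closed = False

The point of the design is that no code path reachable from the network can open the lifeline relay, so even a fully compromised communications module leaves a small amount of power flowing.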

The question received an articulate, passionate, and well-reasoned response that explained why such a design should not be pursued. The author made good points about motor burnouts, damaged equipment, and so on; it was apparent that he was familiar with electrical systems. He also implied that my suggestion was downright dangerous, and perhaps that even considering such a design reflected a lack of thoughtfulness on my part.

I explained that I was concerned that a cyber attacker could remotely shut off power to tens or hundreds of thousands of electrical utility customers and then brick (render inoperable) the remote control unit. This would, of course, force the utility company to physically visit every customer to restore power, and it would especially hurt those who depend on electricity for medical devices.
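
The scale is what makes this catastrophic. A rough back-of-envelope calculation shows how long physical restoration could take; every figure below is an assumption, purely for illustration.

    # Rough scale of a "disconnect and brick" attack; all figures are
    # assumed illustrations, not real utility data.
    affected_customers = 100_000
    field_crews = 50
    meter_visits_per_crew_per_day = 20  # travel plus swap/repair time

    days_to_restore = affected_customers / (field_crews * meter_visits_per_crew_per_day)
    print(f"Roughly {days_to_restore:.0f} days to restore everyone")  # ~100 days

Even with generous assumptions, restoration is measured in weeks or months, not hours.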

The passionate respondent replied:
"That’s a good point, I never thought of that."
This is the nature of risk in complex systems: risks have a tendency to emerge from unexpected places when evaluated solely by experts in specific domains. Experts tend to bring specific training, specialized ethics, implied goals, and certain types of experiences. This narrow focus is good for daily work, but can negatively affect risk assessments. In the case of the passionate respondent it's clear that the electrical system and equipment were front and center, perhaps summarized by a mantra of "don't damage equipment!" However, in the broader context, this important principle/ethic was too narrow to address broad needs and stakeholder expectations. In this case it came down to a framing issue. 

How should risk assessments be designed to avoid being too narrowly focused? Here are a few ideas:

  1. Define the target of the risk assessment in terms that have relevance to stakeholders.
  2. Invite stakeholders who can, as directly as possible, represent the interests and perspectives of the diverse set of groups who derive benefits(1) from the target.
  3. Solicit these stakeholders to identify their key benefits or needs relative to the target.
  4. Focus the risk assessment on risks to benefits rather than on technical risks (see the sketch after this list).
  5. Assume that intellectually "ideal" technical solutions are probably flawed from various non-technical perspectives, and elicit those flaws from all participants (especially non-technical participants and non-experts).
  6. Design the process and conduct the assessment so that unpopular or unorthodox ideas are not unduly suppressed or swamped.
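
To illustrate point 4, here is one hypothetical sketch of a risk register keyed to stakeholder benefits rather than to technical subsystems. The data structure and entries are invented for illustration.

    # Illustrative sketch of a risk register keyed to stakeholder benefits
    # rather than technical components. Structure and entries are hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class Risk:
        description: str
        raised_by: str  # which stakeholder surfaced it
        severity: int   # 1 (low) through 5 (high)

    @dataclass
    class Benefit:
        stakeholder: str
        benefit: str
        risks: list = field(default_factory=list)

    register = [
        Benefit("residential customer", "continuous power for medical devices"),
        Benefit("utility operations", "remote connect/disconnect without truck rolls"),
    ]

    # Anyone at the table can attach a risk to a benefit, expert or not:
    register[0].risks.append(Risk(
        "mass remote disconnect followed by bricked control units",
        raised_by="security-minded customer", severity=5))

    # Reviewing by benefit (not by subsystem) keeps each stakeholder's
    # stake, rather than the technology, at the center of the discussion.
    for b in register:
        for r in sorted(b.risks, key=lambda r: -r.severity):
            print(f"{b.stakeholder} / {b.benefit}: severity {r.severity}: {r.description}")

The structure forces every risk to name the benefit it threatens, which makes it harder for a purely technical mantra like "don't damage equipment!" to crowd out everything else.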

To risk assessors everywhere: happy hunting!

(1) "Benefits" in this context is meant to be all-inclusive; it could be substituted with value, dependencies, goal achievement, meeting objectives, accomplishing a mission, and so on.