One View of Why Risk Management Takes Too Long

by sammy on Monday, September 24, 2007

As I get back into the risk management arena after a sojourn in knowledge management (mainly designing knowledge-driven offerings and monetizing the associated intellectual property), I find yet another example of “the more things change, the more they stay the same.” I think the executive view of information security risk management techniques as viable decision support tools has come a long way, but the complaint I hear from most executives is that even the simplest risk modeling appears to take a very long time.

I find that information security risk management still has the unfortunate combination of being a lightning rod for snake oil in the marketplace, being a real polarizing and dogmatic topic that often defies reasoned discourse, and being something that the average person handles by gut and not by numbers. Practitioners, academics, and managers alike bemoan the scarcity of actuarial data on which to base decisions. And, just to round out the issues list, any given group of people will quickly fall into its own set of terms and meanings, making it very difficult for any two groups to have short, interesting arguments that actually advance the knowledge base.

In my recent reviews of what’s going on in the world, risk modeling exercises related to application security seem to stretch on for two primary reasons:

1. An obsession with knowing every “threat”
2. Not having a good rule for deciding when a threat-vulnerability-control coupling deserves no more scrutiny

What I’ve evolved over the past couple of decades to reduce this work is something I’ve called “Looking for zeros” and “Looking for ones.”

In my experience, knowing the exact threat (i.e., combination of attacker, attack, attack path, resources, and some intangible things such as motive) is often irrelevant. I call this “Looking for ones.” For example, if a particular attack always works (e.g., cross-site scripting in a particular web form), then it likely matters not whether the attacker is a national government, a terrorist, a criminal, a script kiddie, or someone who accidentally pastes HTML into the field — the “success value” for this attack-vulnerability tuple will always be ‘1’. Knowledge of the attacker might give us a bit more information about what he or she might be ultimately trying to accomplish, but the decision to fix the problem should already have been made.

Similarly, this is why we decompose threats, vulnerabilities, assets, and other things into constituent components. I call this part “Looking for zeros” and it works really well. As soon as we find a ‘0’ for a material aspect of attacker, attack, attack path, resources, motivation, threat frequency, feasibility, accessibility, susceptibility, vulnerability prevalence, asset value, event cost, downside, caring, and so on for the various dimensions of “bad stuff happens,” then that scenario is over and we can move on. Again, even given a national government with lots of resources and a zero-day electronic attack that really works, we just don’t care if the attack works only on WebServerA and we’ve deployed WebServerB, or if the required attack path (e.g., the site must be using JSessionID) simply doesn’t apply to us.
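The two heuristics amount to short-circuit triage over a scenario’s decomposed factors. A minimal sketch, assuming a toy representation where every factor is a value in [0, 1] (the scenario names, factor names, and values below are illustrative, not from any real assessment):

```python
# Hypothetical triage over threat scenarios, illustrating the two heuristics:
# "Looking for zeros" -> any material factor at 0 ends the scenario; move on.
# "Looking for ones"  -> if the attack always succeeds, fix it regardless of attacker.

def triage(scenario):
    """Return a disposition for one threat-vulnerability-control scenario.

    `scenario["factors"]` maps factor names (attack path applies, vulnerability
    present, asset value, ...) to values in [0, 1]. Illustrative only.
    """
    if any(v == 0 for v in scenario["factors"].values()):
        return "discard"          # a zero anywhere kills the whole scenario
    if scenario.get("success_value") == 1:
        return "fix"              # always works: attacker identity is irrelevant
    return "analyze further"      # only these scenarios merit deeper modeling

scenarios = {
    "xss_in_web_form": {
        "success_value": 1,
        "factors": {"vulnerability_present": 1, "asset_value": 0.8},
    },
    "zero_day_vs_WebServerA": {
        "success_value": 0.9,
        "factors": {"vulnerability_present": 0,  # we deployed WebServerB
                    "asset_value": 0.8},
    },
}

for name, s in scenarios.items():
    print(name, "->", triage(s))
# xss_in_web_form -> fix
# zero_day_vs_WebServerA -> discard
```

The point of the sketch is the ordering: the cheap zero and one checks run before any expensive per-attacker modeling ever starts.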

When we bring controls into the mix (for which we reason about preventive, detective, corrective, and deterrent values), then we try to kill off entire attacks (e.g., timeouts that prevent script kiddies from doing half-open TCP stuff), attack paths (e.g., Internet routers that drop incoming packets with RFC 1918 source addresses), motivations (e.g., dye packs that render stolen cash unspendable), and so on all at once; seldom should the IT or software development world try to work off “threats,” and specifically threat agents (attackers), one at a time. That just takes too long.
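Applying a control this way can be modeled as zeroing one factor across every scenario that shares it, rather than arguing one attacker at a time. A hedged sketch under the same toy factor representation as above (names and values are assumptions for illustration):

```python
# Hypothetical model: one control zeroes a shared factor in every scenario
# that depends on it, killing the whole class at once.

def apply_control(scenarios, factor):
    """Zero out `factor` wherever it appears, modeling a control (e.g.,
    connection timeouts killing the half-open TCP attack path)."""
    for s in scenarios.values():
        if factor in s["factors"]:
            s["factors"][factor] = 0

scenarios = {
    "syn_flood_script_kiddie": {"factors": {"half_open_tcp_path": 1, "asset_value": 0.5}},
    "syn_flood_criminal":      {"factors": {"half_open_tcp_path": 1, "asset_value": 0.5}},
}

# One control removes the attack path from both scenarios, so both can be
# discarded without reasoning about each threat agent separately.
apply_control(scenarios, "half_open_tcp_path")
survivors = [name for name, s in scenarios.items()
             if all(v != 0 for v in s["factors"].values())]
print(survivors)   # -> []
```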

A comprehensive and holistic view of what makes “risk” happen will always make risk management much more efficient and make the results of risk management much more usable in day-to-day decision support.

[tags]risk management,threat modeling[/tags]

3 Responses to “One View of Why Risk Management Takes Too Long”

  1. Don’t Regulate Cyberinsurance Markets

Any such security regulation can only reduce the amount of wealth and comfort the economy produces because it is foisting one man’s values upon another – “f…

  2. roodee says:

You know, I’ve read your post here, thought about it, and read it again. It makes a good argument for reducing scope and complexity. My fear, however, is that it is purely “attack” focused. What I mean is that you evaluate a given system based on a collection of known attacks. The effectiveness of this sort of method is strongly coupled to the reliability and completeness of your attack collection. I imagine a case could be made that we do in fact possess a reasonable set of attacks that can (and should) be used to this end, but I wonder if that is sufficient. What are your thoughts?

  3. Sammy Migues says:

    1) It is not sufficient when the stakes are high.
    2) It is the most that the vast majority of security risk assessors can hope to accomplish.
    3) It is a tremendous step forward from doing nothing, even if it isn’t done well.

    When risk assessment teams have the time and skill, I find it very effective to work from the key assets outward after working from the attacks inward. I started doing this in the early 80’s and had great success unearthing possible issues and the conditions required for their success, even if I couldn’t identify the exact attacks that would make it happen. I later found a small body of knowledge on Flaw Hypothesis Methodology that closely mirrored the process I followed.

    I feel that Flaw Hypothesis is nicely complemented by Failure Mode and Effects Analysis and the two can be used together in an effective bottom-up approach. Similarly, some manner of fault tree analysis, another old concept, nicely rounds out a top-down approach.