As I get back into the risk management arena after a sojourn in knowledge management (mainly designing knowledge-driven offerings and monetizing the associated intellectual property), I find yet another example of “the more things change, the more they stay the same.” I think the executive view of information security risk management techniques as viable decision support tools has come a long way, but the complaint I hear from most executives is that even the simplest risk modeling appears to take a very long time.
I find that information security risk management still has the unfortunate combination of being a lightning rod for snake oil in the marketplace, being a real polarizing and dogmatic topic that often defies reasoned discourse, and being something that the average person handles by gut and not by numbers. Practitioners, academics, and managers alike bemoan the scarcity of actuarial data on which to base decisions. And, just to round out the issues list, any given group of people will quickly fall into its own set of terms and meanings, making it very difficult for any two groups to have short, interesting arguments that actually advance the knowledge base.
In my recent reviews of what’s going on in the world, risk modeling exercises related to application security seem to stretch on for two primary reasons:
1. An obsession with knowing every “threat”
2. Not having a good rule for deciding when a threat-vulnerability-control coupling deserves no more scrutiny
What I’ve evolved over the past couple of decades to reduce this work is something I’ve called “Looking for zeros” and “Looking for ones.”
In my experience, knowing the exact threat (i.e., the combination of attacker, attack, attack path, resources, and some intangible things such as motive) is often irrelevant. I call this “Looking for ones.” For example, if a particular attack always works (e.g., cross-site scripting in a particular web form), then it likely doesn’t matter whether the attacker is a national government, a terrorist, a criminal, a script kiddie, or someone who accidentally pastes HTML into the field — the “success value” for this attack-vulnerability tuple will always be '1'. Knowledge of the attacker might give us a bit more information about what he or she might ultimately be trying to accomplish, but the decision to fix the problem should already have been made.
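The "ones" rule above can be sketched in a few lines of Python. This is purely illustrative; the function and field names (`triage`, `success_value`) are my own, not from any real risk tool:

```python
# "Looking for ones": if an attack always succeeds against a vulnerability,
# the fix decision is already made, so skip attacker enumeration entirely.

def triage(scenario: dict) -> str:
    """Return a triage decision for one attack-vulnerability tuple."""
    # success_value is the chance the attack works given the vulnerability
    # is present; 1.0 means "always works".
    if scenario["success_value"] >= 1.0:
        # No need to ask whether the attacker is a government or a
        # script kiddie -- just fix it.
        return "fix"
    return "analyze further"

xss = {"name": "XSS in search form", "success_value": 1.0}
print(triage(xss))  # -> fix
```

The point of the sketch is that the attacker never appears in the decision: a '1' on the attack-vulnerability tuple ends the analysis by itself.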
Similarly, this is why we decompose threat, vulnerability, assets, and other things into constituent components. I call this part “Looking for zeros,” and it works really well. As soon as we find a '0' for a material aspect of attacker, attack, attack path, resources, motivation, threat frequency, feasibility, accessibility, susceptibility, vulnerability prevalence, asset value, event cost, downside, caring, and so on for the various dimensions of “bad stuff happens,” that scenario is over and we can move on. Again, even given a national government with lots of resources and a zero-day electronic attack that really works, we just don’t care if the attack works only on WebServerA and we’ve deployed WebServerB, or if the required attack path (e.g., the site must be using JSessionID) simply doesn’t apply to us.
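One way to picture the "zeros" rule is a scenario score built as a product of material factors, where the first zero short-circuits the whole analysis. This is a minimal sketch under my own assumptions; the factor names are illustrative, not a standard taxonomy:

```python
# "Looking for zeros": model a loss scenario as a product of its material
# factors. The moment any factor is zero, the scenario scores zero and
# deserves no more scrutiny.

def scenario_score(factors: dict) -> float:
    """Multiply factor values together, stopping at the first zero."""
    score = 1.0
    for name, value in factors.items():
        if value == 0:
            # One material zero kills the scenario -- stop analyzing.
            return 0.0
        score *= value
    return score

# Nation-state attacker, working zero-day... that only hits WebServerA,
# while we deployed WebServerB:
nation_state = {
    "attacker_capability": 1.0,
    "attack_works": 1.0,
    "vulnerability_prevalence": 0.0,  # not our web server
}
print(scenario_score(nation_state))  # -> 0.0, move on
```

Note that the scary-looking factors (capable attacker, working attack) never rescue the scenario: multiplication by zero is zero, which is exactly why hunting for a single '0' is faster than estimating every factor precisely.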
When we bring controls into the mix (for which we reason about preventive, detective, corrective, and deterrent values), we try to kill off entire attacks (e.g., timeouts that prevent script kiddies from doing half-open TCP stuff), attack paths (e.g., Internet routers that drop incoming packets with RFC 1918 source addresses), motivations (e.g., dye packs that render stolen cash unspendable), and so on all at once; seldom should the IT or software development world try to work off “threats,” and specifically threat agents (attackers), one at a time. That just takes too long.
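The same factor-zeroing view explains why a good control is so efficient: it zeroes one whole dimension for every scenario that depends on it, rather than countering one named attacker. A hedged sketch, with the control names and factor mapping invented for illustration:

```python
# A control earns its keep by zeroing an entire factor (attack, attack
# path, motivation, ...) across every scenario that relies on it.
from math import prod

# Hypothetical mapping: which factor each deployed control eliminates.
CONTROLS = {
    "syn_timeout": "attack",                # kills half-open TCP floods
    "rfc1918_ingress_filter": "attack_path",  # drops spoofed private-source packets
    "dye_pack": "motivation",               # stolen cash becomes unspendable
}

def apply_controls(factors: dict, deployed: list) -> dict:
    """Zero out every factor that a deployed control eliminates."""
    out = dict(factors)
    for control in deployed:
        factor = CONTROLS.get(control)
        if factor in out:
            out[factor] = 0.0  # the control zeroes the whole dimension
    return out

scenario = {"attacker": 1.0, "attack": 1.0, "attack_path": 1.0}
mitigated = apply_controls(scenario, ["rfc1918_ingress_filter"])
print(prod(mitigated.values()))  # -> 0.0: the whole class of scenarios dies
```

One ingress filter ends every scenario that needed that spoofed path, no matter which attacker was behind it; that is the economy of working off factors instead of threat agents.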
A comprehensive and holistic view of what makes “risk” happen will always make risk management much more efficient and make the results of risk management much more usable in day-to-day decision support.
[tags]risk management,threat modeling[/tags]