Badness-ometers are good. Do you own one?

by gem on Monday, March 19, 2007

Never one to mince words, I coined the term badness-ometer to describe “application security testing tools” like the ones made by SPI Dynamics and Watchfire. For whatever reason, people read more into the term than I intended; I guess they see only negative connotations. I stick by my nomenclature (black-box application security testing tools are in fact badness-ometers), but badness-ometers are a good thing and everyone should use them!

Here’s part of what I wrote about badness-ometers in my book Software Security:

That said, application security testing tools can tell you something about security: namely, that you’re in very deep trouble. That is, if your software fails any of the canned tests, you have some serious security work to do. The tools can help uncover known issues. But if you pass all the tests with flying colors, you know nothing more than that you passed a handful of tests with flying colors.

Put in more basic terms, application security testing tools are “badness-ometers,” as shown in [the Figure below]. They provide a reading in a range from “deep trouble” to “who knows,” but they do not provide a reading into the “security” range at all. Most vulnerabilities that exist in the architecture and the code are beyond the reach of simple canned tests, so passing all the tests is not that reassuring. (Of course, knowing you’re in deep trouble can be helpful!)

[Figure: the badness-ometer]

I also wrote an article for Dr. Dobb’s called “Beyond the Badness-ometer.” That article stresses that relying on a badness-ometer as your only software security activity is a really bad idea.

So what’s good about badness-ometers, and why do I think you should buy one right away? Well, many organizations that build software are woefully in the dark about their software security risk. A badness-ometer can do wonders to turn the lights on with respect to software security, especially when it comes to Web applications.

The sad fact is that many purveyors of Web applications believe that their apps are “bulletproof” if they use simple security features like authentication and SSL. Of course we all know by now that software security goes well beyond security features and deep into enemy territory, concerning itself with things like software defects (bugs and flaws) that lead to security failures and unacceptable business consequences. Badness-ometers can help expose this “myth of security features” for what it is (a myth) by automatically attacking and taking down Web applications with obvious, everyday security tests. Automated testing is cheap and sometimes powerful.
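To make the one-way nature of the gauge concrete, here is a minimal sketch (not any real product’s logic; the signature list and function names are illustrative assumptions) of a single canned test a badness-ometer might run: look for tell-tale database error strings leaking into a response. Note that the gauge can only ever read “deep trouble” or “who knows”; it has no “secure” setting.

```python
# Hypothetical sketch of one canned badness-ometer test. The error
# signatures and the reading() function are illustrative, not taken
# from any actual scanner.

# Error fragments that commonly leak when unsanitized input reaches a database.
SQL_ERROR_SIGNATURES = [
    "you have an error in your sql syntax",
    "unclosed quotation mark",
    "sqlexception",
]

def reading(response_body: str) -> str:
    """Return a badness-ometer reading for one HTTP response body.

    The range is 'deep trouble' to 'who knows' -- never 'secure'.
    """
    body = response_body.lower()
    if any(sig in body for sig in SQL_ERROR_SIGNATURES):
        return "deep trouble"  # a canned test fired: known badness found
    return "who knows"         # passing this test proves nothing about security

# A page that echoes a database error vs. one that does not.
print(reading("Warning: You have an error in your SQL syntax near ''1'='1'"))
print(reading("<html><body>Welcome back!</body></html>"))
```

The asymmetry in the return values is the whole point of the metaphor: a hit is actionable evidence of badness, while a miss tells you only that this particular canned test did not fire.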

The great irony of badness-ometers is that you can be sure that your enemy will use them. In fact, throughout the decade that I have been practicing software security, bad guys have been more adept at adopting advanced tools and techniques than the good guys generally have.

When it comes to deciding whether your organization needs a badness-ometer, you should ask yourself whether you already know your software is at risk or whether you need some more convincing. If you, or anyone else in your organization, need more convincing, grab a badness-ometer and find out whether your code is in “deep trouble.” I bet it is.

If you are using a badness-ometer today, and it’s finding issues in your code, don’t simply fix the issues and call it a day. Never forget that the badness-ometer can’t tell you that you’re secure; it can only tell you that you’re not. To get beyond simple badness-ometer tests or to fix the problems that your badness-ometer is finding, seek professional help from Cigital.

4 Responses to “Badness-ometers are good. Do you own one?”

  1. Scott Wright says:

    I am trying to think of an analogy. I guess a Badness-ometer is, to software security metrics, what an oil pressure gauge is to a car dashboard. Would that make a Gate Review Checklist item such as “Code-review-completed” something akin to an “Idiot-Light”?

    I can’t help but remember a McGraw/Ranum interview article in an IEEE publication about how, in Marcus Ranum’s opinion, penetration testing is the stupidest idea he’s ever heard of. Something about how ~”people who don’t understand how their system works hire other people who don’t understand how their system works, to simulate attacks by other people who don’t understand how their system works…”~

    It was a very amusing discussion, and it would be almost convincing if it weren’t for the real-world constraints on most development organizations, and the fact that no matter how good the development-cycle security techniques are, there will always be holes, especially in larger systems with complex interconnected parts.

    All this to say, I agree that the Badness-ometer is a much better thing than a “Code-review-completed” idiot light.

  2. gem says:

    Interesting analogy. I think I would liken the “gate review checklist” item…what might otherwise be termed an assurance activity with an associated quality gate…more to a regular and necessary maintenance activity than to something needed while you’re driving. However, if we limit ourselves to dashboard gauges, maybe code review is like the gas gauge? I love the “idiot light” idea though…have to think about this some more.

    With regard to whether it is better to use a badness-ometer to look for really stupid mistakes or code review to look for really stupid mistakes, my intuition is that you will find more actionable results with code review than with security testing. In the end you should do both, and trying to determine the relative value of either seems a waste of time.

    BTW, on the pen-testing front, my views are not at all the same as Marcus’s. I was simply the interviewer. You can find the complete podcast here.

  3. […] to give it "a quick test" for certain issues.  That’s why I think the "badness-ometer" quote is so good and great advice.  In general I would like to see these tools catch up, […]

  4. […] This means that purely automated scanning is a shallow form of security testing. In many cases the precise tests performed, and how they were performed is hidden from the user. The result of the scan is a report that only contains vulnerabilities. You could think of a scanning tool as a Badness-ometer. […]