If it’s so hard, why bother?

by jOHN on Wednesday, February 2, 2011

Recently, internal and external discussions hit on the topic of static analysis tool comparison. The difficulty of the topic led me to write up my thoughts in what became an InformIT article. That article prompted some to respond:

If selecting and adopting a tool is so hard, even for experts, why should I bother?

Good question. The article was not written as an indictment or a defense of static analysis; it was a cautionary tale whose moral I believe to the utmost: “organizations can get wrapped around the wrong axle when selecting and adopting a tool.”

If static analysis is worth adopting, as I believe it is, then the question becomes: how do I avoid doing it poorly? I’ll start by revisiting some suggestions from the article’s conclusion and build on them.

Seek Experience
Expert consulting can dramatically improve the speed and effectiveness with which an organization defines and scales its static analysis and SCR practices. But you don’t need experts’ help to successfully pick and adopt a tool.

Leverage the communities you already belong to and absorb their experience. Reach out to:

  1. Your local OWASP chapter
  2. Organizations within your vertical
  3. Similarly sized/structured organizations within your geography

Within these communities, others have likely already selected and adopted a tool. Some will not share their selection or adoption experiences openly, but others will. And I’ve found that human nature can’t resist sharing at least _some_ aspects of a good war story.

War stories hint at underlying data more valuable than which tool was ultimately selected: the aspects of adoption that posed the most challenge or required the most effort. Having your troops pointed in the right direction when fighting breaks out will benefit you more than making the better choice between equipping them with a carbine or a rifle. ;-)

During last year’s BSIMM conference in Annapolis, MD, I saw tremendous inter-organizational knowledge sharing on the static analysis front. Not only was I impressed, but as usual, I learned things. War stories? Those improved with volume imbibed.

Eschew Deep Scientific Comparison for Trial Experience
Unless you graduated with an advanced degree focused on static analysis, or have spent the last ten years building or analyzing these tools, I don’t feel a comparative study will be satisfying. Many have asked me, “With your experience, come on, tell me… which is better, A or B?” Those who ask almost always hear a variant of the following candor:

Comparing the analysis engines of the two market leaders is a lot like wondering about the Audi S4 vs. the BMW M3. Both take a comparable approach to their flagship consumer performance car: both have switched between ~3.xL blown six-cylinder engines and ~4.xL naturally aspirated V8s. In the end, both gobble premium fuel and get to 62 mph in 4.x seconds. Both are fine pieces of engineering, and each has its own religious following.

As with the previous advice: knowing how to drive the car and where its limits are will likely yield lower track times than picking one car over the other, at least for the average-to-enthusiast driver. It simply doesn’t matter whether the boost produced by the S4’s supercharger beats out the direct injection and independent throttles of the M’s high-revving V8. (*1)

I’ve digressed a bit too far, haven’t I? Don’t do the same with your static tool ;-)

  • Use a representative sample of your applications to observe a static analysis tool’s performance, not a contrived test suite.
  • Consider the tool’s findings relative to pen-testing findings on the same app.
    • Did new and interesting findings result?
    • For pen-testing findings that the static tool also reported, did the tool provide adequate root-cause analysis and remediation advice to fix those problems earlier?
  • How long did it take to on-board an app? How will this scale to your portfolio of apps?
  • How long did it take to triage the results? How will this scale?

As your trial encounters problems, note them explicitly. As you move from 3-10 apps to the portfolio as a whole, can your organization withstand the stresses of that scale? If not, you may want to switch tools or approaches.
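To make that note-taking concrete, here is a minimal sketch of one way to record per-app trial observations and project effort across the rest of the portfolio. The field names and the naive linear projection are illustrative assumptions only; nothing here is tied to any particular tool’s output format.

```python
# A minimal sketch (hypothetical field names) for recording per-app trial
# observations and projecting onboarding/triage effort to a full portfolio.
from dataclasses import dataclass
from typing import List


@dataclass
class TrialResult:
    app: str
    onboard_hours: float        # time to get the app scanning cleanly
    triage_hours: float         # time to review and disposition raw findings
    total_findings: int
    overlap_with_pentest: int   # findings also reported by pen-testing
    new_interesting: int        # findings pen-testing missed


def project_effort(results: List[TrialResult], portfolio_size: int) -> dict:
    """Naive linear projection of per-app trial effort to the whole portfolio."""
    n = len(results)
    avg_onboard = sum(r.onboard_hours for r in results) / n
    avg_triage = sum(r.triage_hours for r in results) / n
    return {
        "avg_onboard_hours_per_app": avg_onboard,
        "avg_triage_hours_per_app": avg_triage,
        "projected_portfolio_hours": portfolio_size * (avg_onboard + avg_triage),
    }


# Example: three trial apps projected to a 120-app portfolio.
trial = [
    TrialResult("billing", 6.0, 14.0, 220, 9, 4),
    TrialResult("portal", 10.0, 22.0, 410, 15, 7),
    TrialResult("batch-etl", 4.0, 9.0, 95, 3, 2),
]
print(project_effort(trial, portfolio_size=120))
```

A linear projection like this is crude (per-app triage effort often falls as rule tuning matures), but even a rough number makes it obvious whether your current staffing and policy can absorb the full portfolio.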

I’ve seen security managers I respect get incredibly far without any consulting help just by taking an iterative approach and asking questions like those listed above.

Worry About What You Can Control
You control your organization’s staff size, skill set, scanning policy, and infrastructure. You do not control the architecture, implementation, or bugs associated with the static analysis tool you buy and deploy.

So, as you talk to others about their experiences selecting and adopting a tool… as you conduct trials with a selected tool… and as you plan a larger roll-out of your static analysis practices at scale, think about a chosen tool’s strengths and weaknesses not in terms of how its competition might perform by comparison, but in terms of whether they complement or expose your organization’s weaknesses in staffing, skill, policy, and infrastructure (or play to its strengths in those same areas).

And finally, don’t be afraid to ask for help.
-jOHN

(*1) – In the interest of full disclosure, I cast my lot with the 2011 M3 over its Audi competition despite BMW’s ‘heretical’ departure from the 3-series’ I6 engine heritage. I can make no such clear claim to my allegiance on the static analysis front.
