Others and I have been getting webinar invites from IBM’s developerWorks regarding Rational’s AppScan Developer Edition.
This is, of course, part of the re-launch of the re-tooled AppScan product. It now includes a set of analysis types, some static and some dynamic. They’ve got fancy new names for subsets of each, some of it hair-splitting to my ear, and I’ve been reading and writing on this topic for ten years.
The market for static analysis is plumping nicely year over year. Static analysis (SA) vendors, such as Fortify, are one or two revisions into producing dynamic analysis tools and suites that leverage hybrid approaches. With the big dynamic analysis tools controlled by IBM and HP, a certain amount of this cross-over (and much more, I predict) was inevitable.
It was also inevitable that dynamic shops would take shots at the SA space. The dynamic guys got shot up pretty badly as their tools, sold as application security’s first ‘silver bullet’, began to meet with resistance over how much manual effort was still required and how many false positives and/or missed vulnerabilities remained. Static analysis swept in promising a better solution. “Look at the source code”, they smiled, “and you can be more complete and accurate than if you just poke the application from the outside.” Well, now it’s SA’s turn in the tank, and the dynamic shops are enjoying the reversal.
But in a hybrid world, where the major tool vendors offer both static and dynamic tooling, this posturing is technically absurd. AppScan’s pitch for “String Analysis” is particularly ironic: they’ve chosen to attack the effectiveness of static analysis with a ‘solution’ that is itself a static analysis technique.
It’s true that both Fortify and Ounce principally find SQL injections through data flow techniques that propagate ‘taint’ and rely on tagging source, sink, and cleansing logic at the resolution of function calls. But I can say with authority that both tools also model the potential values of strings as they perform their data flow analysis. AppScan’s innovation isn’t so innovative.
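To make the distinction concrete, here is a toy sketch of taint propagation at function-call resolution, the core data flow technique described above. Everything here is hypothetical and illustrative: the function names (get_param, sanitize, execute_query), the statement encoding, and the propagation rules are my own simplification, not any vendor’s implementation.

```python
# Toy taint propagation at function-call resolution: tag sources,
# sinks, and cleansers, then follow 'taint' through assignments.
# A sketch of the general technique, not any vendor's implementation.

SOURCES = {"get_param"}       # calls that introduce attacker-controlled data
CLEANSERS = {"sanitize"}      # calls whose result is considered safe
SINKS = {"execute_query"}     # calls that must never receive tainted data

def analyze(statements):
    """statements: list of (lhs, func, args); args are variable names
    or quoted literals. Returns (line, sink) findings."""
    tainted = set()
    findings = []
    for line_no, (lhs, func, args) in enumerate(statements, 1):
        arg_tainted = any(a in tainted for a in args)
        if func in SOURCES:
            tainted.add(lhs)
        elif func in CLEANSERS:
            tainted.discard(lhs)              # cleansing stops propagation
        elif func in SINKS and arg_tainted:
            findings.append((line_no, func))  # tainted data reaches a sink
        elif arg_tainted:
            tainted.add(lhs)                  # e.g. concatenation propagates taint
        else:
            tainted.discard(lhs)
    return findings

program = [
    ("user",   "get_param",     ["'id'"]),
    ("query",  "concat",        ["'SELECT ... WHERE id='", "user"]),
    ("_",      "execute_query", ["query"]),   # flagged: taint reaches the sink
    ("clean",  "sanitize",      ["user"]),
    ("query2", "concat",        ["'SELECT ... WHERE id='", "clean"]),
    ("_",      "execute_query", ["query2"]),  # not flagged: cleansed first
]
print(analyze(program))  # [(3, 'execute_query')]
```

Note that the whole analysis hangs on the tags: if a cleansing routine your code actually uses isn’t in the tool’s rule set, the report fills with false positives, which is exactly the customization point discussed below.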
Yes, both SA tools, in my opinion, leave something to be desired in which aspects of string propagation and manipulation their core products can follow. I won’t go into detail, but I advise security managers that it’s worth understanding where the tool will fail you in this regard.
Coverity and the now-defunct CodeAssure did an exceptionally good job modeling string values throughout code’s ‘speculative execution’ as part of their static analysis. When I tested these tools, they surprised me in both what they were able to find and what other tools’ false positives they left out of reports. Alas, these two don’t help today’s security manager much if they’re focused on Java EE or .NET web applications. CodeAssure is gone, and Coverity’s product has (in my opinion) limited language support outside C/C++ (though, again, I cannot stress enough how positive my experience with it in C/C++ has been).
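For contrast, here is a minimal sketch of what string-value modeling buys you on top of plain taint tracking: if the analysis can prove a ‘tainted’ string only ever holds values from a trusted whitelist, it can suppress the report a pure taint tracker would emit. The abstract domain (a finite set of possible literals, or unknown) and all names are my own simplification for illustration, not Coverity’s or CodeAssure’s actual machinery.

```python
# Sketch of string-value modeling layered on taint tracking.
# Illustrative only; not any vendor's actual implementation.

class AbsStr:
    """Abstract string: possible literal values (None = unknown) plus a taint bit."""
    def __init__(self, values=None, tainted=False):
        self.values = values      # frozenset of literals, or None
        self.tainted = tainted

def concat(a, b):
    """Model string concatenation over the abstract domain."""
    if a.values is not None and b.values is not None:
        vals = frozenset(x + y for x in a.values for y in b.values)
    else:
        vals = None               # unknown operand -> unknown result
    return AbsStr(vals, a.tainted or b.tainted)

def injection_risk(s):
    """Pure taint tracking would flag any tainted sink argument.
    Value modeling suppresses the report when every possible value
    is known and free of SQL metacharacters."""
    if not s.tainted:
        return False
    if s.values is None:
        return True               # tainted and unconstrained: report it
    return any("'" in v or ";" in v for v in s.values)

base = AbsStr(values=frozenset({"SELECT id FROM t ORDER BY name "}))
raw = AbsStr(values=None, tainted=True)                              # raw request parameter
direction = AbsStr(values=frozenset({"ASC", "DESC"}), tainted=True)  # tainted origin, whitelisted values

print(injection_risk(concat(base, raw)))        # True: real finding
print(injection_risk(concat(base, direction)))  # False: the value model kills the false positive
```

The second query would be a false positive under taint tracking alone; the value model recognizes that an attacker who can only choose between two trusted constants has nothing to inject.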
The dynamic shops had to level the playing field. But, as near as I can tell, the current situation is this:
- The major vendors believe there’s benefit to static and dynamic analysis
- There’s a lot of room for technical improvement in the market leaders’ SA products, with respect to modeling
- The future’s tools will leverage both static and dynamic techniques, because each is suited to finding a particular class of problems well.
You, as a security manager, will still need to sift through the marketectures and promises and figure out which tool works best on the kind of code your organization builds/buys. A major component of this will be how customizable the tool is.
Over-reliance on ANY automated tool (static or dynamic) leaves you with unfound vulnerabilities and a false sense of security. Cigital’s assessment services rely on these tools for speed and scale, so we’ve taken great pains to understand where their modeling bends and where it breaks. In our own practice, we augment static techniques with dynamic tests and have even begun writing some next-generation static techniques to counter these limitations. We’ve spent hundreds of hours helping many, many clients wring more out of their preferred suite of tools with the same understanding, opting instead for less invasive customization and tuning.
The bottom line is (and I’ve been saying this since at least ’03 now), there are strengths to each tool and each type of analysis. NONE will get you to the assessment finish line alone. Anyone who tells you otherwise is sellin’ you a load.
Let the posturing begin.