Why Aren’t We Learning From (Defect) History?

by Jim DelGrosso on Wednesday, July 23, 2014

I was recently part of Silver Bullet episode 100, where I was asked, “How much progress have we made in the last ten years with Architecture Risk Analysis (that is, finding and fixing flaws in software design)?” My response surprised some folks here at Cigital when I explained that, for the most part, I did not think we had made much progress at all. My answer was based on the common defects we find when analyzing the design of a system or application as part of our Architecture Analysis practice here at Cigital. We have clients all over the world, ranging in size from large to small, in very different markets, with very different business models and different development methodologies, so I feel quite confident in my view of the [lack of] progress made over the last decade.

Fortunately, I am not the only one keeping tabs on commonly found defects. OWASP has been releasing its Top Ten list for quite a number of years. Originally created in 2003, the list was updated in 2004 and has been refreshed every three years since. I forget who I was talking to recently, but they commented that most of the OWASP Top Ten has remained the same over the last ten years. My initial reaction was “No Way!!” How could we be making the same mistakes for ten years?

I decided to look at the Top Ten lists from 2004, 2007, 2010, and 2013 and see for myself how the list had changed over the years. I came across a PDF summarizing the changes made to the Top Ten across releases, and as I looked at it, it was immediately clear that, sure enough, a number of entries have been on the list from the very beginning. Well, that was depressing.

[Image: OWASP Release Comparison: the 2003, 2004, 2007, 2010 and 2013 releases side by side]

Why are we seeing the same problems over such a long period of time? These issues are well documented, there are numerous sample applications built to show how the vulnerabilities work, and there are static and dynamic testing tools that export their findings mapped to OWASP Top Ten entries. We certainly should know about these issues by now.

I suppose there are many good reasons the same defects keep occurring year after year. Maybe we aren’t teaching students about this as part of computer science. Maybe there is some hyper-focus on functionality and security is one of those things you swear you will get around to, but never do. Maybe these things are genuinely hard to solve so we keep making mistakes. But no matter what the reason may be, the fact seems to be that the same defects keep occurring.

Now to be fair, although quite a few defects have appeared on the list for several iterations, some have had either a steady decline or a recent significant drop. For example, CSRF first appeared in 2007 as A5. It remained in the fifth spot in 2010, but in 2013 it dropped all the way to A8. Progress! I wonder what happened. I suppose it is possible that developers took CSRF seriously, learned all about this defect, and started writing their code with CSRF protections. Or maybe the frameworks developers use were enhanced to include a CSRF protection mechanism built right in, making it easy for developers to get this protection for “free”. I have no data to support this possibility, but it at least sounds like a good idea. So the broader question is: what can we do to make it easier for developers to not make mistakes in the first place, and to not have them reinvent a wheel every time they want to get something done?
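To make that “free” protection concrete, here is a minimal sketch of the synchronizer-token pattern that many web frameworks bake in. The session dictionary and the handler usage shown are hypothetical placeholders, not any particular framework's API.

```python
# Minimal sketch of the synchronizer-token pattern for CSRF protection.
# The `session` dict and the request-handler usage below are hypothetical
# stand-ins for whatever your web framework actually provides.
import secrets
import hmac

def issue_csrf_token(session):
    """Generate a per-session token and remember it server-side."""
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token  # embed this in a hidden form field or a custom header

def verify_csrf_token(session, submitted_token):
    """Reject state-changing requests whose token doesn't match the session's."""
    expected = session.get("csrf_token")
    if not expected or not submitted_token:
        return False
    # Constant-time comparison avoids leaking token bytes via timing.
    return hmac.compare_digest(expected, submitted_token)

# Hypothetical usage inside a POST handler:
# if not verify_csrf_token(session, request.form.get("csrf_token")):
#     reject_request_with_403()
```

When a framework wires something like this into every form and every state-changing request automatically, developers get the protection without having to remember it, which is the kind of built-in defense that could explain a drop like CSRF's.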

Over the coming weeks we will look at various examples of doing exactly this.

One Response to “Why Aren’t We Learning From (Defect) History?”

  1. FabricatorGeneral says:

    Clippy for Developers. “It looks like you’re not escaping characters before sending them to dynamic SQL. Who should I tell? a) QA b) Hackers c) Your Mother”