Penetration Testing

by pravir on Wednesday, February 28, 2007

If I were to say “penetration testing,” what would you expect? After conversations with lots of colleagues and friends, it’s become fairly clear to me that the term is massively overloaded. It actually got me thinking about the evolution of penetration testing over the last several years. And I’m not going to preach from the you-need-design-and-code-review-too pulpit. I’m just gonna talk about the innards of the pen-test itself (breaking a running system).

The old-school notion of a pen-test was one where you’d hire a bunch of l33+ h4x0r5 and turn them loose on a system to see what they’re able to bust. The general idea here was to simulate a real-world attack scenario (a black-box pen-test). Given adequate time, Cheetos®, and Red Bull®, you’ll probably be left with some interesting findings. But take a closer look at the breakdown of how those testers spent their time. They likely spent a ton of it trying to figure out how the app works, trolling for non-obvious interfaces, and partially reverse-engineering the logic of the app. When you’re paying by the hour, this is bad news, since they didn’t spend as much time as they could have actually trying to break the app (they spent a bunch just getting their arms around it).

To address these obvious limitations of the completely black-box approach, the notion of the “pen-test with code” was born. The scene was similar, but instead of poking at the running app as much, the testers would go and read code. The testers are still at arm’s length from the dev team, but it’s an improvement since access to the code gives a truer picture of what’s actually happening under the hood without time spent on reverse engineering. Sometimes there would even be design docs available, which really helped move things along. In the end, this is close to the modern-day version of what many expect when they buy a “pen-test.”

The major weakness of this style of penetration testing is that there’s often no business context to ground the direction of the testing (which aspects of the system you concentrate on) or to baseline the value of the findings (to the company using the app, what do the discovered exploits really mean?). Some pen-testers are starting to bridge the technical-to-business gap, but they’re definitely in the minority. This advancement of the pen-test is definitely a step in the right direction since it focuses the testers on generating and prioritizing findings that show demonstrable impact to your core business (as opposed to sheets of meaningless XSS vulnerabilities that are all prioritized “high” since XSS is bad). The pen-tester gets to concentrate on breaking business logic in your application and scheming about combinations of technical vulnerabilities that lead to an interesting business problem. You’ll still get all the technical vulnerabilities; they’ll just be low priority if they can’t directly contribute to a bigger problem. Since this all plays into making risk management decisions, I call this style of white-box penetration testing “risk-based penetration testing.”

If you’re using penetration testing today, you really want as close to a risk-based pen-test as you can get, although I have heard a few reasons why you might not:

  • “I don’t have access to code or design docs.” I hear this all the time as justification for a black-box approach. In the end, even if you don’t have code, someone has gotta know something about how the application actually works. Even if it’s only from a user or sysadmin perspective, it goes miles toward setting the testers off in the right direction. Find those people and link them up with the pen-testers! And business risk mapping is still possible even without code (but you’ll probably need at least ad hoc design info).
  • “I use black-box penetration testing nowadays as a way to simulate real-world attacks.” This notion is just off-base, perhaps due to a lack of understanding of real-world attacks. Yes, the people “in the wild” who might attack you will have a similar skill-set to your pen-testers. The kicker is that the time available to an outside attacker is virtually infinite compared to that of a for-hire pen-tester. When it comes down to exploiting software, often it’s just a matter of how much time you spend bashing your head against the problem. Outsiders can spend 2 hours a day for 6 months working on getting a single exploit working. Your pen-testers have 8 hours a day for a week to find lots of impressive results. So cheat the problem by giving the pen-testers all the info you have.

In any case, let’s get back to that notion of the risk-based pen-test. When I think about the requisite skills to complete that job, I see many similarities to what QA folks do on a routine basis. Foremost, the QA testers will already know the application components and UIs. They should know the business value of the app, and in many cases they’ll also know how the business logic of the app is supposed to work. They’re also routinely asked to assess the specific business impact of problems in the application (they do it every time they open a bug and assign a priority). That’s a huge advantage over someone from the “outside” coming into a pen-test engagement. Now what about differences? The biggest red flag is that the QA people aren’t trained to attack applications to find security flaws. That’s a big disadvantage since you need those skills in order to be even remotely effective at a pen-test. But do keep in mind, QA folks are very much trained to break applications in general (it is, in fact, their job).

So where are we going with this? Simple: let’s give the penetration testing responsibility to the QA team. They’ve already got a leg up since they know the application. To counter the point about them not being trained in security attacks, employ two simple techniques: tools and training. Some of the automated penetration testing tools out there are really great now. They’re basically extremely potent packages of electronic subject-matter expertise. Now, I would never advocate just buying a tool, hitting the “go” button, and calling the job done (you don’t get very good results at all this way). Enter the training. The automated pen-testing tools are, in many ways, very similar to other types of tools in a QA tester’s belt. They run a series of attack test-cases and report vulnerabilities when a test fails (this is a massive oversimplification of what they’re doing under the hood, but in terms of usability, the analogy holds). So, train the QA testers on effectively using those tools. Teach them how to feed application-specific details to the tools to make sure your coverage is high and the results are accurate.
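
To make the “attack test-case” idea concrete, here’s a minimal hand-written sketch of the kind of check those tools fire off thousands of times. The target URL, parameter name, and payloads below are hypothetical placeholders; a real tool would crawl the app and handle sessions, encodings, and far larger payload lists.

```python
import urllib.parse
import urllib.request

# Hypothetical target and parameter -- in practice these come from the
# application-specific details you feed the tool (URLs, parameter names, auth).
TARGET = "https://qa.example.com/search"
PARAM = "q"

# A couple of canned reflection probes; commercial tools ship thousands of these.
PAYLOADS = [
    "<script>alert(1)</script>",
    "\"'><img src=x onerror=alert(1)>",
]


def reflects_payload(payload):
    """Send one attack test-case and report whether the response echoes it back unencoded."""
    url = f"{TARGET}?{urllib.parse.urlencode({PARAM: payload})}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        body = resp.read().decode("utf-8", errors="replace")
    return payload in body  # unencoded reflection hints at a possible XSS hole


if __name__ == "__main__":
    for p in PAYLOADS:
        if reflects_payload(p):
            print(f"POTENTIAL XSS: parameter '{PARAM}' reflects {p!r}")
```

The point isn’t the script itself; it’s that the workflow (send a canned attack, check the response for a failure signature) looks a lot like any other automated test a QA team already runs.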

Further, teach them about the classic notion of security testing (starting with requirements and deriving test-cases to ensure that functional requirements are implemented securely) and show them how to automate it with the tooling. In fact, that’s where this is all ending up: a risk-based security test with a wide blast-radius for getting the bulk of the benefit of a pen-test. It’s cheaper in the long run (a front-loaded cost for buying tools and building the internal skill vs. a repeated, indefinite fixed-price cost). In terms of effectiveness vs. an external pen-test, it’s definitely on the positive side of the 80%-20% rule (and all the things you might miss in that 20% will likely be caught by code review or architecture analysis). It’s even “faster” in the sense that an organization could move through each assessment more quickly, thus enabling more assessments within a given timeframe.
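
As a sketch of what a requirement-derived security test might look like (the endpoint, order ID, and tokens below are made up for illustration), take a functional requirement like “a user may only view their own orders” and write both the positive case and its abuse-case twin:

```python
import unittest
import urllib.error
import urllib.request

# Hypothetical endpoint and credentials, purely for illustration.
# Requirement under test: "a user may only view their own orders."
BASE = "https://qa.example.com/api"
ALICE_TOKEN = "token-for-alice"   # Alice owns order 1001
BOB_TOKEN = "token-for-bob"       # Bob does not


def get_order(order_id, token):
    """Request an order with a bearer token and return the HTTP status code."""
    req = urllib.request.Request(
        f"{BASE}/orders/{order_id}",
        headers={"Authorization": f"Bearer {token}"},
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code


class OrderAccessControlTest(unittest.TestCase):
    def test_owner_can_view_own_order(self):
        # Positive case: the functional requirement works at all.
        self.assertEqual(get_order(1001, ALICE_TOKEN), 200)

    def test_other_user_cannot_view_someone_elses_order(self):
        # Abuse case: the same requirement, attacked.
        self.assertIn(get_order(1001, BOB_TOKEN), (401, 403, 404))


if __name__ == "__main__":
    unittest.main()
```

Once tests like these live in the QA suite, they run on every build, which is exactly the front-loaded-cost, recurring-benefit trade-off described above.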

In summary, I see the future of penetration testing as a push toward QA environments. As code review and architecture review become more mainstream, this makes sense. Why continue to spend a lot on a service that ultimately should be used as a sanity-check of the running system?

Red Bull® and Cheetos® are trademarks of their respective companies.

  • http://www.n0where.org Zach

    What about pen-testing with regard to PCI? Surely that nudges QA aside. Think we’ll still be stuck, then, with the, uh, “traditional” penetration test?

    Oh, and Red Bull and Cheetos?

    If you’re talking old school, it’d be more along the lines of Jolt and Doritos (or ramen, if you ever read the “Cyberpunk Handbook”).

  • pravir

    Well, PCI’s Requirement #11.3 is the one that specifically calls out the need to pen-test the application. If you check out the audit procedures for PCI, the instructions for auditing against 11.3 basically call out 1) make sure they’re being done, and 2) make sure that the bad stuff that was identified is getting fixed. So, I don’t think that precludes anyone from doing their pen-testing in-house (e.g. in the QA env).

    And you’re totally right about Jolt v. RedBull… but I gotta be honest, Cheetos were always first in my heart ;)

  • http://www.plynt.com Roshen Chandran

    Interesting post, Pravir. Yes, taking a business risk approach to identifying vulnerabilities is more effective than banging a set of exploits blindly at an application in penetration tests.

    However, the automated tools we have seen do not “discover” business risks during penetration tests. So, giving automated tools to a QA engineer and training them in feeding the right values alone will not solve the problem.