Fortify Source Code Analyzer (SCA) ships with a set of generic rules intended to find vulnerabilities related to privacy or confidentiality breaches. Because these rules are part of the default Fortify SCA rule pack, they are intentionally generic and broad: one rule appears to flag most references to the word “password” in Java code or on a JSP page, while another seems to declare that any class field whose name contains the letter sequence “SSN” or “ssn” should be considered a source of private data. Although I’ve seen these rules identify real vulnerabilities (i.e., true positives), I’ve also observed them produce a lot of noise (i.e., false positives) and, perhaps more concerning, come up empty-handed, completely missing application-specific API constructs that return private or confidential data, such as credit card numbers, customer names and addresses, tax identification numbers, and so on.
False positives are downright annoying, but when tools don’t truly understand code and therefore miss vulnerabilities (i.e., false negatives), tool users are left with incomplete scan results. No single tool is a silver bullet, but interpreting incomplete scan results can lead to a false sense of security and in some cases may jeopardize the legitimacy of specially targeted assessments, such as reviews that aim to enforce PCI compliance. So, if you’re in charge of conducting tool-assisted code reviews using Fortify SCA, how do you gain assurance that SCA is really “seeing” your company’s confidential data?
Tip #1: Confirm that your application’s confidential data API constructs are listed as being “taint sources”
Fortify’s data flow analyzer keeps track of all sources of taint identified by rules during a scan. When triaging scan results using Fortify’s Audit Workbench, reviewers can examine this list by switching to the “tainted sources” filter in the Functions viewer. Often, I query this list for method names like “getVisa”, “getTaxId”, “getCardNumber”, “getStreetAddress”, etc. It’s helpful to briefly interview developers to get an idea of the types of confidential data that are processed by the application and which classes are specifically used to represent that data.
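As a hypothetical sketch, the kind of application-specific entity class a reviewer hopes to see represented in the tainted-sources list might look like the following (the class and field names here are invented for illustration, mirroring the method names above):

```java
// Hypothetical entity class -- names invented to mirror the kinds of
// accessors reviewers should search for in the tainted-sources list.
public class CustomerProfile {
    private final String taxId;
    private final String cardNumber;
    private final String streetAddress;

    public CustomerProfile(String taxId, String cardNumber, String streetAddress) {
        this.taxId = taxId;
        this.cardNumber = cardNumber;
        this.streetAddress = streetAddress;
    }

    // Each of these returns confidential data, yet none is covered by a
    // default Fortify rule -- they will only appear as taint sources
    // once custom rules are written for them (see Tip #2).
    public String getTaxId() { return taxId; }
    public String getCardNumber() { return cardNumber; }
    public String getStreetAddress() { return streetAddress; }
}
```

If none of these accessors appear in the tainted-sources list, that’s a strong hint the scan is blind to the application’s confidential data.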
Tip #2: Create data flow source rules to account for any missing application-specific APIs that process sensitive data
As an example, let’s assume that we know an application is logging Visa account numbers in plaintext form as they are entered by customers in production. Our objective is to identify this pattern in source code so we can inform developers that masking should be put in place to satisfy corporate standards. Further assume that a Java class named “com.cigital.entities.Card” has a method named “getVisaAcctNumber”, which returns a Visa credit card number. Fortify SCA doesn’t have a default rule for this API so a custom rule must be created. One way to satisfy this requirement is by creating a “data flow source rule” that instructs Fortify to consider any data returned by getVisaAcctNumber to be a source of private data:
<DataflowSourceRule formatVersion="3.6" language="java">
  <RuleID>21093F7B-6F85-4863-983C-5A19756B87B2</RuleID>
  <TaintFlags>+PRIVATE</TaintFlags>
  <FunctionIdentifier>
    <NamespaceName>
      <Pattern>com.cigital.entities</Pattern>
    </NamespaceName>
    <ClassName>
      <Pattern>Card</Pattern>
    </ClassName>
    <FunctionName>
      <Pattern>getVisaAcctNumber</Pattern>
    </FunctionName>
    <ApplyTo implements="true" overrides="true" extends="true"/>
  </FunctionIdentifier>
  <OutArguments>return</OutArguments>
</DataflowSourceRule>
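With this rule in place, any value returned by getVisaAcctNumber carries the PRIVATE taint flag. A sketch of the kind of code the analyzer could then flag follows; the nested Card class is a minimal stand-in for com.cigital.entities.Card so the example compiles, and the logging call is illustrative (the actual sink depends on the application’s log API):

```java
import java.util.logging.Logger;

public class PaymentAudit {
    private static final Logger LOG = Logger.getLogger("payments");

    // Minimal stand-in for com.cigital.entities.Card described in the text.
    static class Card {
        private final String visaAcctNumber;
        Card(String visaAcctNumber) { this.visaAcctNumber = visaAcctNumber; }
        String getVisaAcctNumber() { return visaAcctNumber; }
    }

    // The value returned by getVisaAcctNumber() now carries PRIVATE taint;
    // building a log message from it unmasked is exactly the plaintext
    // logging pattern the custom source rule lets Fortify report.
    static String buildLogMessage(Card card) {
        return "Payment received for card " + card.getVisaAcctNumber();
    }

    public static void main(String[] args) {
        LOG.info(buildLogMessage(new Card("4111111111111111")));
    }
}
```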
Tip #3: Increase reviewer efficiency by developing a code review plan for handling sensitive data
The new rule we wrote in Tip #2 may start to generate new findings, especially if credit card numbers are reflected on JSP pages or are logged using a common log API. What do we do if developers have been careful and have implemented data masking operations -- DataUtil.maskCreditCard(card.getVisaAcctNumber()) -- before the value is used? We have a few options. One option is to write a custom data flow cleanse rule that removes private taint from any data that passes through the maskCreditCard method -- but beware, this option will remove findings from scan results. A second option is to allow the finding to be generated but activate an Audit Workbench project filter that drops any finding with “maskCreditCard” in its data flow analysis trace into a custom “Masked Data” or “Sensitive Data Handling” bucket. The latter option allows reviewers not only to understand how confidential data is used within the code base, but also to scrutinize the data masking operations themselves for correctness.
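That last point matters because masking routines can themselves be buggy. A minimal sketch of what a masking utility might look like appears below; the real DataUtil implementation is application-specific and invented here for illustration (this version keeps only the last four digits, a common PCI-style convention):

```java
public class DataUtil {
    // Hypothetical masking routine: replaces all but the last four digits
    // with '*'. An application's real maskCreditCard may behave differently;
    // reviewers should verify its behavior rather than assume it.
    public static String maskCreditCard(String cardNumber) {
        int keep = 4;
        if (cardNumber == null || cardNumber.length() <= keep) {
            return cardNumber;
        }
        StringBuilder masked = new StringBuilder();
        for (int i = 0; i < cardNumber.length() - keep; i++) {
            masked.append('*');
        }
        masked.append(cardNumber.substring(cardNumber.length() - keep));
        return masked.toString();
    }
}
```

Note the edge cases a reviewer should probe: what happens with null input, short strings, or numbers embedded in larger text? A cleanse rule that trusts this method removes taint regardless of whether such cases are handled correctly.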
By following these tips, assessors can start to ask pointed questions about enforcement of corporate security standards targeting sensitive data, such as whether credit card numbers are consistently masked before they are logged or reflected on a page.
Mike Ware is a Security Consultant with Cigital. He can be reached at mware@cigital.com for more information on this article.