Moving to Mobile – New Threats

by jOHN on Tuesday, March 29, 2011

This post was written in collaboration w/ Jason Rouse

In the software development circles I travel, the topic of adding a mobile channel onto existing application/system infrastructure has come up a bunch recently. A ‘move to mobile’ represents an ideal opportunity to revisit threat modeling. Remember, I don’t necessarily prescribe threat modeling for well-known system archetypes (such as classic n-tier) and technology stacks so much as when teams attempt new and lesser-known architectures.

The natural question: how do my threats change when I bring a mobile channel into my existing application? Consider, for instance, a generic application, as depicted below:

Existing Application w/ New Mobile Channel

This application supported a User and a CSR through a browser interface and had other connections to RESTful services in its middle tier. Now we’ve added more controller and presentation-tier logic, perhaps opting to reuse almost all of the former application’s model. In this case, the new functionality aims to give a larger set of users (2-4X as many as the plain browser interface serves) access to their accounts and services, but with considerably lower bandwidth. The intent is that a subset of services will be offered (rate comparison and ACH transfer being omitted) and that all available services will be provided in a crisp, simple, and responsive format.

Since we’ve discussed this example (and many like it) in threat modeling classes, we’ll leave the original system’s users and threats alone for now.

New portions of the system, being constructed to support mobile, are depicted in blue. In a future post, we’ll discuss how to more clearly identify the change in attack surface due to these additions. For now, let’s just consider the new Threats themselves. Again:

Threat – A class of individuals or software agent executing on behalf of such an individual

When we model threats, we start by describing each threat’s capabilities, level of access, and skills. Let’s apply a few threat modeling techniques to identify new threats to consider.

1. Consider the system’s users
When teams add new functionality to a system, or when they start a new development effort entirely, they commonly create user stories or perhaps even detailed use cases and requirements.

The first (and easiest) way to identify new threats to the system is to mine user stories, use cases, and other usage documentation (even marketectures often work) for their users. Then, consider:

  • What evil or insidious behaviors could a user engage in?
  • What obnoxious or stupid behaviors could a user cause trouble with?

Two users come to mind as we look at proposed new mobile functionality:

  1. Mobile device user (in this case, a smart phone)
  2. Neighboring network user

Combining the first items from our two lists above, we come to our first threat: a malicious mobile device user.
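
As a minimal sketch of this combination step (the structure and names below are illustrative assumptions, not part of the original exercise), one could cross the users mined from the stories with the two behavior questions and then flesh out each resulting candidate with capabilities, level of access, and skills:

    # Hypothetical sketch: cross the mined users with the two behavior questions
    # to produce candidate threats worth fleshing out.
    users = ["mobile device user", "neighboring network user"]
    behaviors = ["malicious (evil/insidious)", "obnoxious/stupid"]

    candidate_threats = [
        {"actor": u, "behavior": b, "capabilities": None, "access": None, "skills": None}
        for u in users
        for b in behaviors
    ]

    for t in candidate_threats:
        # e.g., "malicious (evil/insidious) mobile device user"
        print(f"{t['behavior']} {t['actor']}")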

1. Malicious Mobile Device User – Device users possess the credentials to the device (including any UI, ‘app store’, or other username/password tuples) and likely possess the carrier account/credentials. Access includes physical access to the device and use of both its applications and its browser. This threat can install applications, sync, and explore device contents with their computer. Of course, this threat has access to the device SDK and simulators, as any developer would. This threat is depicted in the figure above with the label “1”.

When we discuss attack surfaces, we’ll discuss what kinds of access and capabilities these threats possess in a technology-specific fashion. Some conceptual actions include:


  • Transfer device contents to computer for debugging, reversing, etc.

  • Install purpose-built (malicious) applications on device

  • Manipulate application settings, set up a proxy for web interaction, etc. (see the sketch after this list)

  • Use both browser and application to access the same services/resources

  • Jailbreak device
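
As a minimal sketch of the “set up a proxy” action above (the proxy address, endpoint, and use of Python’s requests library are assumptions for illustration, and the script runs on a desktop rather than the device itself), the device user points the app’s HTTP(S) traffic at an intercepting proxy they control, so every API call is theirs to observe and replay:

    # Hypothetical sketch: a device user observes/replays the app's API traffic
    # by routing requests through an intercepting proxy they control.
    import requests

    PROXIES = {
        "http": "http://192.168.1.10:8080",   # intercepting proxy on the user's laptop
        "https": "http://192.168.1.10:8080",
    }

    resp = requests.get(
        "https://bank.example.com/api/accounts",  # hypothetical service endpoint
        proxies=PROXIES,
        verify=False,  # the device user controls trust decisions on their own device
    )
    print(resp.status_code, len(resp.content))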

Consider Evil/Insidious User Action
The last behavior on the list, jailbreaking, piques particular interest. When a user decides, with malicious intent, to jailbreak their device, additional malicious behaviors become available to them. And regardless of whether the jailbreak was motivated by malice, the device’s security controls are compromised: applications can no longer trust that security features such as application signing apply, or that a confidential path for their data exists between the application and the network. Documenting threats in a traceability matrix allows one to show this escalation of privilege from one threat to another directly:

Privilege Escalation - Tabular Format
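
As a minimal sketch of what such a matrix might contain (the rows and entries below are illustrative assumptions, not the figure’s actual contents):

    Threat                           | Capabilities & Access                                    | Escalates To                               | Mitigation
    1. Malicious mobile device user  | Device credentials, physical access, SDK/simulators     | 2. Malicious mobile device (via jailbreak) | (filled in during design)
    2. Malicious mobile device       | Everything in #1, plus a compromised OS and its drivers | -                                          | (filled in during design)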

In summary, this behavior gives rise to another class of threat:

2. Malicious Mobile Device – This threat has all the capabilities and access of a malicious mobile device user, but can also compromise or augment the device OS and its drivers. The figure above depicts this threat, labeling it “4”.

A broad range of capabilities results from this opportunity. If a device is compromised prior to a victim application being loaded, malicious behavior can observe and affect the application installation process itself, or the user signup/registration process (which may include initial credential/key material exchange). This scenario can quickly turn an application’s entire security proposition into a house of cards.

Consider Obnoxious/Stupid User Action
Once you start, it’s hard not to imagine a multitude of stupid things a user could do with their mobile device. Let’s focus on a common and important one: the user could leave the device somewhere and have it stolen. This scenario has several effects. First, it dramatically increases the population of Threat #1, the malicious device user. Let’s split this threat into 1.A (previously described) and 1.B: a malicious device user with everything 1.A has except the user’s credentials. Second, Threat 1.B also increases the population of Threat #2, malicious devices, because thieves can jailbreak devices even without user credentials.

A third, important effect impacts certain applications, like the generic one described above. If our threat traceability matrix (above) was filled out during a previous secure design exercise, it might list “password reset” in the mitigation column. Indeed, in many of the threat modeling courses jOHN teaches, participants indicate, “we can remove the malicious CSR threat entirely by doing password reset out-of-band using a mobile phone.”

If a stolen phone carries the bank’s MobileBankingApp, which perhaps caches the user’s name, how effective is this control?

Many of the password reset implementations jOHN observes (using his own accounts) suffer full compromise under the ‘stolen phone’ scenario. Here, the identification of a new threat causes us to reconsider or augment the design of our system so that it again demonstrates the security properties we desire.
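
As a minimal sketch of that failure mode (the names, number, and helper functions below are hypothetical, not any particular bank’s implementation), note that the “out-of-band” channel and the cached username both travel with the stolen phone:

    # Hypothetical sketch: a naive out-of-band reset defeated by the stolen-phone
    # scenario, because the one-time code arrives on the very device the thief holds.
    import secrets

    PHONE_ON_FILE = {"alice": "+1-555-0100"}      # account -> registered mobile number

    def send_sms(number, message):                # stand-in for a carrier SMS gateway
        print(f"SMS to {number}: {message}")

    def request_password_reset(username):
        code = secrets.token_hex(3)               # one-time reset code
        send_sms(PHONE_ON_FILE[username], f"Your reset code is {code}")

    # The thief reads the cached username out of MobileBankingApp, triggers the
    # reset, and the SMS arrives on the phone in their hand.
    request_password_reset("alice")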

A Brief Diversion into the Mobile Threat Population
Looking superficially at search results, it appears as though ~120,000 phones are stolen in the UK annually and ~200,000 in Australia. Another site indicated 26M phones are stolen and resold annually. Regardless of how accurate these numbers are, they point to a threat population much larger than security researchers and dedicated nefarious parties alone.

Not Stolen, “Something Borrowed”

Consider more generally the notion of the device briefly being “out of sight” of its user, perhaps through the following vectors:


  • The device is stolen (unbeknownst to its user), tampered with, and returned

  • The device is plugged into another individual’s machine (perhaps for charging or for an app download)

  • The device is lent to another individual for momentary use (a phone call, a game demo, a contact or weather lookup)

  • The device is a grey-market, donated, or recycled/returned phone

Each of these cases exposes the whole of the device’s attack surface to exploitation without visual cues to the phone’s user.

And Who ‘Owns’ These Devices, Anyway?

This is perhaps a good time to point out that the phone’s user may not be the phone’s owner. Family and corporate phone plans often provide remote access to the phone, unbeknownst to the user. In these cases, the phone’s owner must be considered a threat (not yet pictured in our diagram) from the user’s perspective.

From the phone (or account) owner’s perspective, the user represents a similar threat: the user can potentially add/modify/remove software without visual cues or notification. Our ‘single-user’ device actually serves multiple masters:


  1. Me

  2. The Account Owner

  3. Our Benevolent Dictators: Google/Apple… RIM/Microsoft?

  4. Carriers: AT&T, Verizon…

  5. Device Manufacturer

Device Owner Hierarchy
We know the benevolent dictators retain the right/capability to remove applications from our devices. What other capabilities must an application publisher be concerned about within their Threat Model?

Here in the diagram, you see the inevitable separation of the application store curator from the current set of benevolent dictators listed; Amazon’s new application store is a prime example.

Reconsider Old Model’s Threats
Having conducted a threat model on the classic n-tier system before, you undoubtedly considered a man-in-the-middle (MiM) threat. Reconsider this threat in terms of the new architecture.

A common mistake modelers make in considering mobile MiM is to forget that devices not only contain applications that make Internet connections but also possess browsers:

3. MiM – This threat has access to traffic sent over the Internet (or carrier network) either from the app to the server or vice versa. When a device contains a browser, this threat can see both app-to-server and browser-to-server traffic from the same account/device. This threat may be able to see traffic through (active or passive) interposition, or by landing code (script) w/in the device’s browser using classic web attacks (depicted as Threat #4 in the figure above).

Whether one needs to consider both Internet and carrier-based sniffing depends on other factors within the threat model. Cigital has always considered carrier-level compromise, with its low barrier to entry, part of its mobile threat model. However, it bears mentioning that as recently as 2004 it was a challenge to convince organizations that individuals could “be the mobile network”. It’s somewhat vindicating to see this treated as a consumer-grade scenario now.

This isn’t the half of it, though (and we’re getting a little ahead of ourselves in talking about attack surface here). Consider the following ‘radio-based’ surfaces on the phone:


  1. GSM Stack

  2. CDMA Stack

  3. 802.11

  4. Bluetooth

  5. NFC (coming soon to a device near you!!!)

Remember, a vulnerability in any of these implementations may cause our user to suffer a “drive-by owning”, leaving them with Threat #2: a malicious mobile device.

Another Old Threat
Taking another cue from former efforts, modelers must consider the possibility that malicious applications will run alongside their applications on a mobile device. This scenario is no different from that of a browser, or of any application running on the almost-forgotten host computer.

4. Malicious Application – This threat represents either a compromised victim application on the device, a Trojan, or other malware (depicted in the figure above as #3). The causal vector may have been data interpreted and executed by a vulnerable app, a malicious application placed in an app store (or otherwise made available for download), or something different entirely. Such applications (malicious or corrupted) can attempt to poke and prod at your victim app directly, exploit the underlying device OS, or focus on server resources. The malicious app may possess a valid signature (or not, as advantageous) and has access to the device’s services (as per what the user has granted it).

Summary
Hopefully, this thought exercise has shown readers threats they hadn’t considered, and the advantages of reconsidering a threat model as the architecture changes through normal means:

  1. Start with the users/user-stories

  2. Think maliciously

  3. Think ‘stupid’

  4. Re-consider old threats in the new architecture

  5. Understand that you’re not the device’s only ‘user’ or ‘owner’

It continues to surprise me that these techniques pay off even when only threats (the ‘who’) themselves are considered.

Next, we’ll write up how consideration of the attack surface differs as we ‘move to mobile’, and begin to see how this expanded surface can have dramatic implications for the security posture of existing functionality.

4 Responses to “Moving to Mobile – New Threats”

  1. Andre Gironda says:

    Enough about mobile threat modeling. Time to apply the threat modeling to some real-world examples.

    I, personally, enjoyed this prezo — https://media.blackhat.com/bh-ad-10/Nils/Black-Hat-AD-2010-android-sandcastle-slides.pdf

  2. jOHN says:

    Dre, there are a lot of good presentations on (general) mobile security, security features, & vulnerabilities on each platform; thank you for posting one you enjoy. In mentoring security assessor staff, I too prescribe the link-gathering exercise as a quick start and continuing ed. Treatment of platform features and vulnerabilities in these presentations remains uneven (some topics are covered deeply, others omitted).

    With this state of publication, organizations’ developers are left with the task of collating material not only for a single platform but for all the platforms they target. I don’t believe this magpie approach is optimal for developers/architects.

    Moreover, cobbling together these feature expos and vulnerability demonstrations would not lead a developer to the conclusion that a stolen phone equates to account access, as the above threat modeling exercise (tragically incomplete though it is) would.

    The objective here is to “teach constructive folk to fish”. IFF those building software can learn to understand “who” (threats) attacks their software and “where” (attack surface) attackers can interact with their software, THEN they’ll be able to use the techniques (the “what” and “how”) described in presentations like the one you shared.

    The mobile security space isn’t yet well-quantified enough to skip the thought framework and proceed directly to a [check]list of “hows”.
