MSRC

The History of the !exploitable Crash Analyzer

At the CanSecWest conference earlier this month we made our first public release of the !exploitable Crash Analyzer. While an upcoming white paper and the CanSecWest slide deck go into detail on the technology involved, we thought it might be useful to explore the history of the tool.

Roots in Fuzzing

The technology and research that eventually became the !exploitable Crash Analyzer grew out of the investment that MSEC (and Microsoft as a whole) has made in fuzzing. In the run-up to the launch of Windows Vista, a 14-month fuzzing effort totaled over 350 million iterations. In examining crashes from that effort, we observed a number of similarities between them. Several folks from what is now MSEC (Adel Abouchaev, Damian Hasse, Scott Lambert and Greg Wroblewski) published an article on some of these findings in the November 2007 edition of MSDN.

One of the nice benefits of fuzzing is that it eliminates any need to determine “is the problematic code reachable by an attacker”. Because the malformed data is provided in the same way that an attacker would provide it, we know that if we are able to generate an issue during fuzzing, a real attacker would in all likelihood be able to reach the same code.

Another observation was that a single issue in code could be reached via multiple vectors, creating crashes that appeared to be different, but with the same root cause. By grouping crashes together which occur in the same area of code, the number of crashes that need to be looked at can be dramatically decreased.
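As a rough illustration of that grouping idea (not the tool's actual algorithm; the module and frame names below are invented), crashes can be bucketed by hashing the top frames of their faulting call stacks, so that crashes reached through different vectors but faulting in the same code land together:

```python
from collections import defaultdict
import hashlib

def stack_hash(frames, depth=2):
    """Hash the top `depth` frames of a symbolized crash stack.

    A deliberate simplification: the real tool derives richer
    "major" and "minor" stack hashes, but the bucketing idea is
    the same.
    """
    return hashlib.sha1("|".join(frames[:depth]).encode()).hexdigest()[:8]

def group_crashes(crashes):
    """Bucket crashes whose stacks share the same top frames."""
    buckets = defaultdict(list)
    for crash_id, frames in crashes:
        buckets[stack_hash(frames)].append(crash_id)
    return buckets

# Hypothetical crashes: two reached through different entry points
# (viewer vs. thumbnail preview) but faulting in the same decoder code.
crashes = [
    ("crash-001", ["png!CopyRow", "png!DecodeScanline", "viewer!LoadImage"]),
    ("crash-002", ["png!CopyRow", "png!DecodeScanline", "thumbnail!Preview"]),
    ("crash-003", ["gif!ReadBlock", "gif!Decode", "viewer!LoadImage"]),
]
```

In this sketch, crash-001 and crash-002 land in the same bucket even though they were reached through different call paths, so only two distinct issues need to be investigated rather than three.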

The diagram below shows the results from two weeks of fuzzing with four different fuzzers against a single parser, which found 57 crashes with very little overlap between fuzzers:

[Figure: 57 crashes found by four fuzzers over two weeks, with very little overlap between fuzzers]

When the same 57 crashes are run through the !exploitable Crash Analyzer and grouped for similarity, we see that there are only 15 unique issues, a nearly four-fold reduction in the number of crashes to examine. Fuzzers A and B together found all but two of the issues, showing which fuzzers really give the best coverage for this application.

[Figure: the same 57 crashes grouped into 15 unique issues, nearly all found by Fuzzers A and B]

However, even after grouping similar crashes, there is still a need to perform a rough-cut triage of the severity of the crashes found. The !exploitable Crash Analyzer was built to address both needs. Because of this, the tool assumes that the information in the faulting instruction is controlled by an attacker, which is the normal case when assessing the results of fuzzing runs.

Implications when applied to other crashes

Once we move beyond fuzzing, the assumptions built into the tool make the results less reliable. Unlike our fuzzer-generated crashes, we don't actually know whether the crash was caused by information that an attacker could control. Even in this case, the stack trace hashes still let us group similar issues, but we have to add an implicit caveat to the exploitability ratings provided by !exploitable: "If an attacker controlled the source data to the faulting instruction…".

What does this mean for the developer? Effectively, it means that we don't know whether we simply have a problematic coding issue, or a true security vulnerability. A coding issue becomes a security vulnerability only when an attacker is able to reach it, generally by providing invalid data. It may be that the problematic code cannot be reached by an attacker, in which case we merely have a bug. It may be that there are code paths (which we may or may not have found) that expose the problematic code to attacker-controlled data. Or it may be that a yet-to-be-implemented feature will expose the issue. But for the software developer, especially when these coding issues are found early in development, the knowledge that there is a potentially problematic issue in the code should be enough to get a fix created and, as appropriate, made available for users to install.

How Exploitable is Exploitable?

Even in the cases where the crash was caused by data supplied by an attacker, we don't know how much control the attacker has. For example, in a faulting memory copy, the attacker could control the destination address, the source address, the copy length, or some combination of the three. Inside the !exploitable Crash Analyzer, we assume the attacker controls all three. While this is probably not the case, we are willing to accept the over-assessment of risk here, because this class of coding issue is severe enough that the resulting false-positive rate is acceptable.

When analyzing a crash, !exploitable looks at the details and categorizes the severity using reasonably coarse-grained heuristics. You can read the output of !exploitable as "this is the sort of crash that experience tells us is likely to be exploitable", and for the software developer, that should be all of the information that is necessary. Figuring out how an exploit could actually be delivered is well beyond the scope of the tool; that sort of analysis tends to require highly skilled humans.
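A minimal sketch of what such coarse-grained, rule-based triage might look like. The crash fields and predicates below are invented for illustration, though the rating names (EXPLOITABLE, PROBABLY_NOT_EXPLOITABLE, UNKNOWN) mirror those the tool reports:

```python
# Hypothetical rule table in the spirit of !exploitable's heuristics.
# The real tool is a WinDbg extension with a much richer rule set;
# the predicates here are illustrative only.
RULES = [
    # A crash writing through an attacker-tainted address is assumed
    # exploitable.
    (lambda c: c["type"] == "write_av", "EXPLOITABLE"),
    # A faulting block move (e.g. a memory copy): assume the attacker
    # controls the source, the destination, and the length.
    (lambda c: c["type"] == "block_move", "EXPLOITABLE"),
    # A read access violation near address zero usually indicates a
    # NULL-pointer dereference rather than attacker-steered data.
    (lambda c: c["type"] == "read_av" and c.get("near_null"),
     "PROBABLY_NOT_EXPLOITABLE"),
]

def triage(crash):
    """Return the rating of the first matching rule, else UNKNOWN."""
    for predicate, rating in RULES:
        if predicate(crash):
            return rating
    return "UNKNOWN"
```

Note how the block-move rule bakes in the conservative assumption discussed above: it rates the crash EXPLOITABLE without asking which of the three operands the attacker actually controls.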

Moreover, even in the case where a vulnerability is exploitable, exploit mitigations built into the compiler and the platform may be sufficient to prevent actual exploitation. This doesn’t mean that the root problem shouldn’t be fixed, any more than having airbags and wearing your seatbelt means it is acceptable to not repair your brakes. But it does mean that sometimes the end user is protected, even if everything else went wrong.

The Target Audience

Fundamentally, this is a defensive tool, aimed at software developers, especially those without deep expertise in security threats. By grouping common issues, identifying cases where multiple code paths lead to the same underlying issue, and providing a rough cut of the security implications of individual crashes, we think it provides a valuable service to developers triaging bugs during development. We've certainly found that to be the case inside Microsoft.

Dave Weinstein and Jason Shirk
Microsoft Security Engineering Center

*Postings are provided “AS IS” with no warranties, and confer no rights.*

