Thursday, October 20, 2016

NYC Area Security Folks – Come to SOS!

Every year the NYU School of Engineering hosts Cyber Security Awareness Week (CSAW) – the largest student-run security event in the country. This year, we're trying something new that combines two of my favorite things: security and open source.

The inaugural Security: Open Source (SOS) workshop, held this November 10 at NYU Tandon, will feature the creators of some really cool new security tools talking about their projects. It's happening the day before one of the best CTF competitions out there, so we're expecting an audience that's not afraid of technical detail :)

What will you hear about at SOS? Here are some of the speakers and topics:

  • Félix Cloutier will tell us about his open-source decompiler, fcd. This is a great example of incorporating cutting-edge academic research into an open-source tool that anyone can use. Félix is also a former CSAW CTF competitor.
  • Mike Arpaia, co-founder of Kolide, will talk about osquery, a new open-source operating system instrumentation framework and toolset he created while at Facebook. Mike will talk about his experience managing an open-source security project and how to make it successful.
  • Patrick Hulin from MIT Lincoln Laboratory will talk about a new differential debugging technique he's devised. Patrick is one of the lead developers on PANDA, and he'll talk about how he used another great open-source tool, Mozilla rr, to automatically do root-cause debugging on devilishly tricky record/replay bugs.
  • Jamie Levy, one of the core developers on the Volatility memory forensics framework, will talk about taking memory forensics to the next level. Jamie is one of the most talented forensic investigators and developers I know, and this should be a great talk!
  • Jonathan Salwan and Romain Thomas from Quarkslab will present a deep dive on Triton, their exciting binary analysis platform that combines symbolic execution and dynamic taint analysis, and demonstrate how it can be used to defeat virtualization-based obfuscation techniques.
  • Ryan Stortz from Trail of Bits will talk about how they took the DARPA Cyber Grand Challenge test programs and ported them to run on OS X and Linux. This opens up some really cool possibilities for using them to evaluate the effectiveness of different security tools!
  • Yan Shoshitaishvili of UCSB will talk about angr, their Python-based binary analysis platform that aims to bring together tons of state-of-the-art analyses under one roof. They've recently used it to get third place in the DARPA Cyber Grand Challenge, and it's become a popular tool for CTF players around the world.
SOS will take place in the Pfizer Auditorium at the NYU Tandon School of Engineering in Brooklyn from 10:30am to 5:30pm on November 10, the day before the CSAW CTF.

So what are you waiting for? Go register!

Saturday, October 8, 2016

The LAVA Synthetic Bug Corpora

I'm planning a longer post discussing how we evaluated the LAVA bug injection system, but since we've gotten approval to release the test corpora I wanted to make them available right away.

The corpora described in the paper, LAVA-1 and LAVA-M, can be downloaded here: (101M)

Quoting from the included README:

This distribution contains the automatically generated bug corpora used in the paper, "LAVA: Large-scale Automated Vulnerability Addition".

LAVA-1 is a corpus consisting of 69 versions of the "file" utility, each of which has had a single bug injected into it. Each bug is a named branch in a git repository. The triggering input can be found in the file named CRASH_INPUT. To run the validation, you can use the included script, which builds each buggy version of file and evaluates it on the corresponding triggering input.

LAVA-M is a corpus consisting of four GNU coreutils programs (base64, md5sum, uniq, and who), each of which has had a large number of bugs added. Each injected, validated bug is listed in the validated_bugs file, and the corresponding triggering inputs can be found in the inputs subdirectory. To run the validation, you can use the script, which builds the buggy utility and evaluates it on triggering and non-triggering inputs.

For both corpora, the "backtraces" subdirectory contains the output of gdb's backtrace command for each bug.


Thursday, July 21, 2016

Fuzzing with AFL is an Art

Using one of the test cases from the previous post, I examine what affects AFL's ability to find a bug placed by LAVA in a program. Along the way, I find what's probably a harmless bug in AFL, as well as some interesting factors that affect its performance. Although its interface is admirably simple, AFL can still require some tuning, and unexpected things can determine its success or failure on a given bug.

American Fuzzy Lop, or AFL for short, is a powerful coverage-guided fuzzer developed by Michal Zalewski (lcamtuf) at Google. Since its release in 2013, it has racked up an impressive set of trophies in the form of security vulnerabilities in high-profile software. Given its phenomenal success on real world programs, I was curious to explore in detail how it worked on an automatically generated bug.

I started off with the toy program we looked at in the previous post, with a single bug added. The bug added by LAVA will trigger whenever the first four bytes of a float-type file_entry are set to 0x6c6175de or 0xde75616c, and will cause printf to be called with an invalid format string, crashing the program.
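To make that concrete, here is a minimal sketch of the kind of check involved (the function and variable names here are mine, not the code LAVA actually generates):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Illustrative sketch: fire when the first four bytes of a float-type
     * entry match the magic value in either byte order, then hand those
     * attacker-controlled bytes to printf as a format string. */
    void check_entry(const unsigned char *entry_data) {
        uint32_t trigger;
        memcpy(&trigger, entry_data, sizeof(trigger));
        if (trigger == 0x6c6175de || trigger == 0xde75616c)
            printf((const char *)entry_data);   /* invalid format string */
    }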

After verifying that the bug could be triggered reliably, I compiled it with afl-gcc and started a fuzzing run. To get things started, I used a well-formed input file for the program that contained both int and float file_entry types:

Because I'm lucky enough to have a 24-core server sitting around, I gave it 24 cores (one using -M and the rest using -S) and let it run for about four and a half days, fully expecting that it would find the input in that time.

This did not turn out so well.

Around 20 billion executions later, AFL had found zilch.

At this point, I turned to Twitter, where John Regehr suggested that I look into what coverage AFL was achieving. I realized that I actually had no idea how AFL's instrumentation worked, and that this would be a great opportunity to find out.

Diving Into AFL's Instrumentation

The basic afl-gcc and afl-clang tools are actually very simple. They wrap gcc and clang, respectively, and modify the compile process to emit an intermediate assembly file (using the -S option). Finally, they do some simple string matching (in C, ew) to figure out where to add calls to AFL's coverage-logging functions. You can get AFL to save the assembly it generates using the AFL_KEEP_ASSEMBLY environment variable and see exactly what it's doing. (There's also a newer, recently added way of getting instrumentation that uses an LLVM pass; more on this later.)

Left, the original assembly code. Right, the same code after AFL's instrumentation has been added.

After looking at the generated assembly, I noticed that the code corresponding to the buggy branch of the if statement wasn't getting instrumented. This seemed like it could be a problem, since AFL can't try to use coverage to reach a part of the program if there's no logging to tell it that an input has caused it to reach that point.

Looking into the source code of afl-as, the program that instruments the assembly code, I noticed a curious bit of code:

AFL skips labels following p2align directives in the assembly code.

According to the comment, this should only affect programs compiled under OpenBSD. However, the branch I wanted instrumented was being affected by this even though I was running under Linux, not OpenBSD, and there were no jump tables present in the program.

The .L18 block should be instrumented by AFL, but won't be because it's right after an alignment statement.

Since I'm not on OpenBSD, I just commented out this if statement. As an alternate workaround, you can also add "-fno-align-labels -fno-align-loops -fno-align-jumps" to the compile command (at the cost of potentially slower binaries). After making the change I restarted, once again confident AFL would soon find my bug.

Alas, it was not to be. Another 17 hours of fuzzing on 24 cores yielded nothing, and so I went back to the drawing board. I am still fairly sure I found a real bug in AFL, but fixing it didn't help find the bug I was interested in. (Note: it's possible that if I had waited four days again it would have found my bug. On the other hand, AFL's cycle counter had turned green, indicating that it thought there was little benefit in continuing to fuzz.)

5.2 billion executions, no crashes :(

“Unrolling” Constants

Thinking about what would be required to find the bug by AFL, I realized that its chances of hitting our failing test case were pretty low. AFL will only prioritize a test case if it has seen that it leads to new coverage. In the case of our toy program, it would have to guess one of the two exact 32-bit trigger values at exactly the right place in the file, and the odds of this happening are pretty slim.

At this point I remembered a post by lcamtuf that described how AFL managed to figure out that an XML file could contain CDATA tags even though its original test cases didn't contain any examples that used CDATA. He also calls out our bug as exactly the kind of thing AFL is not designed to find:

What seemed perfectly clear, though, is that the algorithm wouldn't be able to get past "atomic", large-search-space checks such as:
if (strcmp(header.magic_password, "h4ck3d by p1gZ")) goto terminate_now;
if (header.magic_value == 0x12345678) goto terminate_now;

So how was AFL able to generate a CDATA tag out of thin air? It turns out that libxml2 has a set of macros that expand out some string comparisons into character-by-character comparisons that use simple if statements. This allows AFL to discover valid strings character by character, since each correct character will add new coverage, and cause further fuzzing to be done with that input.

We can also apply this to our test program. Rather than checking for the fixed constant 0x6c6175de, we can compare each byte individually. This should allow AFL to identify the trigger value one byte at a time. The new code looks like this:

The monolithic if statement has been replaced by 4 individual branches.
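In sketch form (with placeholder names, not LAVA's actual generated code), the change looks something like this:

    #include <stdint.h>

    void trigger_bug(void);   /* stands in for the LAVA-injected buggy code */

    /* Before: one monolithic 32-bit comparison. AFL has to guess all four
     * bytes at once before it sees any new coverage. */
    void check_monolithic(uint32_t trigger) {
        if (trigger == 0x6c6175de)
            trigger_bug();
    }

    /* After: four one-byte comparisons (little-endian byte order assumed).
     * Each correct byte creates a new edge in AFL's coverage map, so the
     * fuzzer can discover the magic value one byte at a time. */
    void check_bytewise(uint32_t trigger) {
        const unsigned char *b = (const unsigned char *)&trigger;
        if (b[0] == 0xde)
            if (b[1] == 0x75)
                if (b[2] == 0x61)
                    if (b[3] == 0x6c)
                        trigger_bug();
    }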

Once we make this change and compile with afl-gcc, AFL finds a crash in just 3 minutes on a single CPU!

AFL has found the bug!

This also makes me wonder if it might be worthwhile to implement a compiler pass that breaks down large integer comparisons into byte-sized chunks that AFL can deal with more easily. For string comparisons, one can already substitute in an inline implementation of strcmp/memcmp; an example is available in the AFL source.
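As a rough illustration of the string case, an instrumentable comparison can be as simple as the following (my own sketch, not the example that ships with AFL):

    /* A byte-at-a-time strcmp replacement. Because the comparison is an
     * explicit loop in the instrumented program rather than an opaque call
     * into libc, AFL's edge and hit-count instrumentation can observe some
     * partial progress as longer prefixes of the two strings match. */
    static int fuzz_strcmp(const char *a, const char *b) {
        for (;; a++, b++) {
            if (*a != *b)
                return (unsigned char)*a - (unsigned char)*b;
            if (*a == '\0')
                return 0;
        }
    }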

A Hidden Coverage Pitfall

While investigating the coverage issues, I noticed that AFL has a new compiler: afl-clang-fast. This module, contributed by László Szekeres, performs instrumentation as an LLVM pass rather than by modifying the generated assembly code. As a result, it should be less brittle and allow for more instrumentation options; from what I can tell it's slated to become the default compiler for AFL at some point.

However, I discovered that its instrumentation is not identical to the instrumentation done by afl-as. Whereas afl-as instruments each x86 assembly conditional branch (that is, any of the instructions starting with "j" aside from "jmp"), afl-clang-fast works at the level of LLVM basic blocks, which are closer to the blocks of code found in the original source. And since by default AFL adds -O3 to the compile command, multiple conditional checks may end up getting merged into a single basic block.

As a result, even though we have added multiple if statements to our source, the generated LLVM IR looks more like our original single comparison – the AFL instrumentation is only placed in the innermost if body, so AFL is once again forced to guess the entire 32-bit trigger at once.

Using the LLVM instrumentation mode, AFL is no longer able to find our bug.

We can tell AFL not to enable the compiler optimizations, however, by setting the AFL_DONT_OPTIMIZE environment variable. If we do that and recompile with afl-clang-fast, the if statements do not get merged, and AFL is able to find the trigger for the bug in about 7 minutes.

So this is something to keep in mind when using afl-clang-fast: the instrumentation does not work in quite the same way as the traditional afl-gcc mode, and in some special cases you may need to use AFL_DONT_OPTIMIZE in order to get the coverage instrumentation that you want.

Making AFL Smarter with a Dictionary

Although it's great that we were able to get AFL to generate the triggering input that reveals the bug by tweaking the program, it would be nice if we could somehow get it to find the bugs in our original programs.

AFL is having trouble with our bugs because they require it to guess a 32-bit input all at once. The search space for this is pretty large: even supposing that it starts systematically flipping bits in the right part of the file, it's going to take an average of 2 billion executions to find the right value. And of course, unless it has some reason to believe that working on that part of the file will get improved coverage, it won't be focusing on the right file position, making it even less likely it will find the right input.

However, we can give AFL a leg up by allowing it to pick inputs that aren't completely random. One of AFL's features is that it supports using a dictionary of values when fuzzing. This is basically just a set of tokens that it can use when mutating a file instead of picking values at random. So one classic trick is to take all of the constants and strings found in the program binary and add them to the dictionary. Here's a quick and dirty script that extracts the constants and strings from a binary for use with AFL:

Once we give AFL a dictionary, it finds 94% of our bugs (149/159) within 15 minutes!

Now, does this mean that LAVA's bugs are too easy to find? At the moment, probably yes. In the real world, the triggering conditions will not always be something you can just extract with objdump and strings. The key improvement needed in LAVA is a wider variety of triggering mechanisms, which is something we're working on.


By looking in detail at a bug we already knew was there, we found out some very interesting facts about AFL:

  • Its ability to find bugs is strongly related to the quality of its coverage instrumentation, and that instrumentation can vary due both to bugs in AFL and to inherent differences between the compile-time instrumentation passes AFL supports.
  • The structure of the code also heavily influences AFL's behavior: seemingly small differences (making 4 one-byte comparisons vs one 4-byte comparison) can have a huge effect.
  • Seeding AFL with even a naïve dictionary can be devastatingly effective.

In the end, this is precisely what we hoped to accomplish with LAVA. By carefully examining cases where current bug-finding tools have trouble on our synthetic bugs, we can better understand how they work and figure out how to make them better at finding real bugs as well.


Thanks to Josh Hofing, Kevin Chung, and Ryan Stortz for helpful feedback and comments on this post, and of course Michal Zalewski for making AFL.

Monday, July 11, 2016

The Mechanics of Bug Injection with LAVA

This is the second in a series of posts about evaluating and improving bug detection software by automatically injecting bugs into programs. Part one, which discussed the setting and motivation, is available here.

Now that we understand why we might want to automatically add bugs to programs, let's look at how we can actually do it. We'll first investigate an existing approach (mutation testing), show why it doesn't work very well in our scenario, and then develop a more sophisticated injection technique that tells us exactly how to modify the program to insert bugs that meet the goals we laid out in the introductory post.

A Mutant Strawman that Doesn't Work

One way of approaching the problem of bug injection is to just pick parts of the program that we think are currently correct and then mutate them somehow. This, essentially, is the idea behind mutation testing: you use some predefined mutation operators that mangle the program somehow and then declare that it is now buggy.

For example, we could take every instance of strncpy and change it to strcpy. Presumably, this would add lots of potential buffer overflows to a program that previously had none.

Unfortunately, this method has a couple of problems. First, many such changes are likely to break the program on every input, which makes the bug trivial to find. For example, the following program will always fail if strncpy is changed to strcpy:
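A minimal stand-in with the same always-failing property might look like this (my own sketch, not the example from the paper):

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char buf[8];
        const char *msg = "a string that is much longer than eight bytes";
        /* Safe version: copies at most 7 bytes and truncates. */
        strncpy(buf, msg, sizeof(buf) - 1);
        buf[sizeof(buf) - 1] = '\0';
        /* Mutated version: strcpy(buf, msg); would overflow buf on every
         * single run, since msg is always longer than the buffer. */
        printf("%s\n", buf);
        return 0;
    }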

We also face the opposite problem: if the bug doesn't trigger every time, we won't necessarily know how to trigger it when we want to. This will make it hard to prove that there really is a bug, and violates one of the requirements we described last time: each bug must come with a triggering input that proves the bug exists. If we wanted to find the triggering input for a given mutation, we'd have to find an input that reaches our mutant, which is actually a large part of what makes finding bugs hard!

Dead, Uncomplicated and Available Data

Instead of doing random, local mutations, LAVA first tries to characterize the program's behavior on some concrete input. We'll run the program on an input file and then see where that input data reaches in the program. This solves the triggering problem because we will know a concrete path through the program and the input needed to traverse that path. Now, if we can place bugs in code along that path, we will be able to reach them using the concrete input we already know about.

We need a couple of other properties as well. Because we want to create bugs that are triggered only for certain values, we want the ability to manipulate the program's input. However, doing so might cause the program to take a different path, and the input data may get transformed along the way, making it difficult to predict what value it will have when we actually want to use it to trigger our bug.

To resolve this, we will try to find parts of the program's input data that are:

  • Dead: not currently used much in the program (i.e., we can set it to arbitrary values)
  • Uncomplicated: not altered very much (i.e., we can predict their value throughout the program's lifetime)
  • Available in some program variables

We'll call data that satisfies these three properties a DUA. DUAs try to capture the notion of attacker-controlled data: a DUA is something that can be set to an arbitrary value without changing the program's control flow, is available somewhere along the program path we're interested in, and whose value is predictable.

Measuring Liveness and Complication with Dynamic Taint Analysis

Having defined these properties, we need some way to measure them. We'll do that using a technique called dynamic taint analysis[1]. You can think of dynamic taint analysis like a PET scan or a barium swallow, where a radionuclide is introduced into a patient, allowed to propagate throughout the body, and then a scan checks to see where it ends up. Similarly, with taint analysis, we can mark some data, allow it to propagate through the program, and later query to see where it ended up. This is an extremely useful feature in all sorts of reverse engineering and security tasks.

Like a PET scan, dynamic taint analysis works by seeing where marked input ends up in your program.

To find out where input data is available, we can taint the input data to the program – essentially assigning a unique label to each byte of the program's input. Then, as the program runs, we'll propagate those labels as data is copied around the program, and query any variables in scope as the program runs to see if they are derived from some portion of the input data, and if so, from precisely which bytes.

Next, we want to figure out what data is currently unused. To do so, we'll extend simple dynamic taint analysis by checking, every time there's a branch in the program, whether the data used to decide it was tainted, and if so, which input bytes were used to make the decision. At the end, we'll know exactly how many branches in the program each byte of the input was used to decide. This measure is known as liveness.

Liveness measures how many branches use each input byte.

Finally, we want some measure of how complicated the data in each tainted program variable is. We can do this with another addition to the taint analysis. In standard taint analysis, whenever data is copied or computed in the program, the taint system checks if the source operands are tainted and if so propagates the taint labels to the destination. If we want to measure how complicated a piece of data is – that is, how much it has been changed since it was first introduced to the program – we can simply add a new rule that increments a counter whenever an arithmetic operation on tainted data occurs. That is, if you have something like c = a + b; then the taint compute number (TCN) of c is tcn(c) = max(tcn(a),tcn(b)) + 1.

TCN measures how much computation has been done on a variable at a given point in the program.
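A tiny worked example, with the TCN values in the comments (assuming x and y were copied straight from tainted input bytes, so they start at zero):

    /* Assume tcn(x) = tcn(y) = 0: both came directly from the input. */
    int example(int x, int y) {
        int a = x + y;   /* tcn(a) = max(tcn(x), tcn(y)) + 1 = 1            */
        int b = a * 3;   /* tcn(b) = tcn(a) + 1 = 2 (the constant is clean) */
        int c = x;       /* a plain copy does no arithmetic: tcn(c) = 0     */
        return b + c;    /* tcn of the result = max(2, 0) + 1 = 3           */
    }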

On the implementation side, all this is done using PANDA, our platform for dynamic analysis. PANDA's taint system allows us to taint an input file with unique byte labels. To query the state of program variables, we use a clang tool that modifies the original program source code[2] to add code that asks PANDA to query and log the taint information about a particular program variable. When we run the program under PANDA, we'll get a log telling us exactly which program variables were tainted, how complicated the data was, and how live each byte of input is.

PANDA's taint system allows us to find DUAs in the program.

After running PANDA, we can pick out the variables that are uncomplicated and derived from input bytes with low liveness. These are our DUAs, approximations of attacker controlled data that can be used to create bugs.

Finding Attack Points

With some DUAs in hand, we now have the raw material we need to create our bugs. The last missing piece is finding some code we want to affect: places where we can use the data from a DUA to trigger some buggy effect on the program, which we call attack points (ATPs). In our current implementation, we look for places in the program where pointers are passed into functions. We can then use the DUA to modify the pointer, which will hopefully cause the program to perform an out-of-bounds read or write – a classic memory safety violation.

Because we want the bug to trigger only under certain conditions, we will also add code at the attack point that checks if the data from the DUA has a specific value or is in a specific range of values. This gives us some control over how much of the input space triggers the bug. The current implementation can produce both specific-value triggers (DUA == magic_value) and range-based triggers of varying sizes (x < DUA < y).

Each LAVA bug, then, is just a pair (DUA, ATP) where the attack point occurs in the program trace after the DUA. If there are many DUAs and many attack points, then we will be able to inject a number of bugs roughly proportional to the product of the two. In large programs like Wireshark, this adds up to hundreds of thousands of potential bugs for a single input file! In our tests, additional input files increased the number of bugs roughly linearly, in proportion to the extra coverage each input achieved. Thus, with just a handful of input files on a complex program you can easily reach millions of bugs.

Our "formula" for injecting a bug. Any (DUA, ATP) pair where the DUA occurs before the attack point is a potential bug we can inject.

Modifying the Source Code

The last step is to modify the source code to add our bug. We will insert code in two places:
  1. At the DUA site, to save a copy of the input data to a global variable.
  2. At the attack point, to retrieve the DUA's data, check if it satisfies the trigger condition, and use it to corrupt the pointer.
By doing so, we create a new data flow between the place where our attacker-controlled data is available and the place where we want to manifest the bug.
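In sketch form, the injected code follows a pattern like the one below (the helper names and trigger value are illustrative; LAVA's generated code differs in detail):

    struct file_entry;                          /* from the target program */
    void consume_record(struct file_entry *ent);

    /* Global variable that carries the DUA from where it is observed to
     * where it is used, creating the new data flow described above. */
    static unsigned int injected_dua_val;

    /* (1) At the DUA site: stash a copy of the dead, uncomplicated,
     *     attacker-controlled bytes. */
    static void save_dua(unsigned int dua_bytes) {
        injected_dua_val = dua_bytes;
    }

    /* (2) At the attack point: if the stashed value matches the trigger,
     *     use it to push a pointer out of bounds before the call. */
    static void attack_point(struct file_entry *ent) {
        if (injected_dua_val == 0x6c6175de)     /* specific-value trigger */
            ent = (struct file_entry *)((char *)ent + injected_dua_val);
        consume_record(ent);                    /* out-of-bounds access */
    }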

A Toy Example

To see LAVA in action, let's step through a full example. Have a look at this small program, which parses and prints information about a very simple binary file format:

We start by instrumenting the source code to add taint queries. The queries will be inserted to check taint on program variables, and, for aggregate data structures, the members inside each structure. The result is a bit too long to include inline, since it quadruples the size of the original program, but you can see it in this gist.

When we compile and run that program on some input inside of PANDA with taint tracking enabled, we get information about taint compute numbers and the liveness of each byte of the input. For example, here's the liveness map for a small (88 byte) input:

Liveness map for the input to our toy program. The bytes with a white background are completely dead – they can be set to arbitrary values without affecting the behavior of the program.

LAVA's analysis finds 82 DUAs and 8 attack points, for a total of 407 potential bugs. Not all of these bugs will be viable: because we want to measure the effect of liveness and taint compute number, the current implementation does not impose limits on how live or complicated the DUAs used in bugs are.

To make sure that an injected bug really is a bug, we do two tests. First, we run the modified program on a non-triggering input, and verify that it runs correctly. This ensures that we didn't accidentally break the program in a way we weren't expecting. Second, we run it on the triggering input and check that it causes a crash (a segfault or bus error). If it passes both tests we deem it a valid bug. This could miss some valid bugs, of course – not all memory corruptions will cause the program to crash – but we're interested mainly in bugs that we can easily prove are real. Another approach might be to run the buggy program under Address Sanitizer and check to see if it flags any memory errors. After validation, we find that LAVA is able to inject 159 bugs into the toy program, for a yield of around 39%.

Let's look at an example bug (I've cleaned up the source a little bit by hand to make it easier to read; programmatically generated code is not pretty):

On lines 6–15, after parsing the file header, we add code that saves off the value of the reserved field[3], which our analysis correctly told us was dead, uncomplicated, and available in head.reserved. Then, on line 20, we retrieve the value and conditionally add it to the pointer ent that is being passed to consume_record (checking the value in both possible byte orders, because endianness is hard). When consume_record tries to access fields inside the file_entry, it crashes. In this case, the DUA and attack point were in the same function, and so the use of a global variable was not actually necessary, but in a larger program the DUA and attack point could be in different functions or even different compilation units.

If you like, you can download all 407 buggy program versions, along with the original source code and triggering inputs. Note that the current implementation does not make any attempt to hide the bugs from human eyes, so you will very easily be able to spot them by looking at the source code.

Next Time

Having developed a bug injection system, we would like to know how well it performs. In the next post, we'll examine questions of evaluation: how many bugs can we inject, and how do the liveness and taint compute measures influence the number of viable bugs? How realistic are the bugs? (This turns out to be much more complicated than it may first appear!) And how effective are some common bug-finding techniques like symbolic execution and fuzzing? We'll explore all these and more.

[1] Having worked with dynamic program analysis for so long, I sometimes forget how ridiculous the term "dynamic taint analysis" is. If you're looking for another way to say the same thing, you can use "information flow", but dynamic taint analysis is the name that seems to have stuck.

[2] Getting taint information by instrumenting the source works, but has a few drawbacks. Most notably, it causes a huge increase in the size of the source program, and slows it down dramatically. We're currently finishing up a new method, pri_taint, which can do the taint queries on uninstrumented programs as long as they have debug symbols. This should allow LAVA to scale to larger programs like Firefox.

[3] The slightly weird ({ }) construct is a non-standard extension to C called a statement expression. It allows multiple statements to be executed in a block with control over what the block as a whole evaluates to. It's a nice feature to have available for automatically generated code, as it allows you to insert arbitrary statements in the middle of an expression without worrying about messing up the evaluation.
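A tiny example of the construct (GCC and Clang only; this is my own illustration, not LAVA's generated code):

    #include <stdio.h>

    int main(void) {
        /* The ({ ... }) block runs its statements and then evaluates to
         * the last expression inside it (here, n * n). */
        int squared = ({ int n = 7; printf("saw %d\n", n); n * n; });
        printf("%d\n", squared);   /* prints 49 */
        return 0;
    }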

Tuesday, June 7, 2016

How to add a million bugs to a program (and why you might want to)

This is the first in a series of posts about evaluating and improving bug detection software by automatically injecting bugs into programs. You can find part two, with technical details of our bug injection technique, here.

In this series of posts, I'm going to describe how to automatically put bugs in programs, a topic on which we just published a paper at Oakland, one of the top academic security conferences. The system we developed, LAVA, can put millions of bugs into real-world programs. Why would anyone want to do this? Are my coauthors and I sociopaths who just want to watch the world burn? No, but to see why we need such a system requires a little bit of background, which is what I hope to provide in this first post.

I am sure this will come as a shock to most, but programs written by humans have bugs. Finding and fixing them is immensely time consuming; just how much of a developer's time is spent debugging is hard to pin down, but estimates range between 40% and 75%. And of course these errors can be not only costly for developers but catastrophic for users: attackers can exploit software bugs to run their own code, install malware, set your computer on fire, etc.

Weekly World News has known about this problem for years.

It should come as little surprise, then, that immense effort has been expended in finding ways to locate and fix bugs automatically. On the academic side, techniques such as fuzzing, symbolic execution, model checking, abstract interpretation, and creative combinations of those techniques have been proposed and refined for the past 25 years. Nor has industry been idle: companies like Coverity, Fortify, Veracode, Klocwork, GrammaTech, and many more will happily sell (or rent) you a product that automatically finds bugs in your program.

Great, so by now we must surely have solved the problem, right? Well, not so fast. We should probably check to see how well these tools and techniques work. Since they're detectors, the usual way would be to measure the false positive and false negative rates. To measure false positives, we can just run one of these tools on our program, go through the output, and decide whether we think each bug it found is real.

The same strategy does not work for measuring false negatives. If a bug finder reports finding 42 bugs in a program, we have no way of knowing whether that's 99% or 1% of the total. And this seems like the piece of information we'd most like to have!

Heartbleed: detectable with static analysis tools, but only after the fact.

To measure false negatives we need a source of bugs so that we can tell how many of them our bug-finder detects. One strategy might be to look at historical bug databases and see how many of those bugs are detected. Unfortunately, these sorts of corpora are fixed in size – there are only so many bugs out there, and analysis tools will, over time, be capable of detecting most of them. We can see how this dynamic played out with Heartbleed: shortly after the bug was found, Coverity and GrammaTech quickly found ways to improve their software so that it could find Heartbleed.

Let me be clear – it's a good thing that vendors can use test cases like these to improve their products! But it's bad when these test cases are in short supply, leaving users with no good way of evaluating false negatives and bug finders with no clear path to improving their techniques.

This is where LAVA enters the picture. If we can find a way to automatically add realistic bugs to pre-existing programs, we can both measure how well current bug finding tools are doing, and provide an endless stream of examples that bug-finding tools can use to get better.

LAVA: Large-scale Automated Vulnerability Addition

Goals for Automated Bug Corpora

So what do we want out of our bug injection? In our paper, we defined five goals for automated bug injection, requiring that injected bugs
  1. Be cheap and plentiful
  2. Span the execution lifetime of a program
  3. Be embedded in representative control and data flow
  4. Come with a triggering input that proves the bug exists
  5. Manifest for a very small fraction of possible inputs
The first goal we've already discussed – if we want to evaluate tools and enable "hill climbing" by bug finders, we will want a lot of bugs. If it's too expensive to add a bug, or if we can only add a handful per program, then we don't gain much by doing it automatically – expensive humans can already add small numbers of bugs to programs by hand.

The next two relate to whether our (necessarily artificial) bugs are reasonable proxies for real bugs. This is a tricky and contentious point, which we'll return to in part three. For now, I'll note that the two things called out here – occurring throughout the program and being embedded in "normal" control and data flow – are intended to capture the idea that program analyses will need to do essentially the same reasoning about program behavior to find them as they would for any other bugs. In other words, they're intended to help ensure that getting better at finding LAVA bugs will make tools better at understanding programs generally.

The fourth is important because it allows us to demonstrate, conclusively, that the bugs we inject are real problems. Concretely, with LAVA we can demonstrate an input for each bug we inject that causes the program to crash with a segfault or bus error.

The final property is critical but not immediately obvious. We don't want the bugs we inject to be too easy to find. In particular, if a bug manifests on most inputs, then it's trivial to find it – just run the program and wait for the crash. We might even want this to be a tunable parameter, so that we could specify what fraction of the input space of a program causes a crash and dial the difficulty of finding the right input up or down.

Ethics of Bug Injection

A common worry about bug injection is that it could be misused to add backdoors into legitimate software. I think these worries are, for the most part, misplaced. To see why, consider the goals of a would-be attacker trying to sneak a backdoor into some program. They want:
  1. A way to get the program to do something bad on some secret input.
  2. Not to get caught (i.e., to be stealthy, and for the bugs to be deniable).
Looking at (1), it's clear that one bug suffices to achieve the goal; there's no need to add millions of bugs to a program. Indeed, adding millions of bugs harms goal (2) – it would require lots of changes to the program source, which would be very difficult to hide.

An attempted Linux kernel backdoor from 2003. Can you spot the bugdoor?

In other words, the benefit that LAVA provides is in adding lots of bugs at scale. An attacker that wants to add a backdoor can easily do it by hand – they only need to add one, and even if it takes a lot of effort to understand the program, that effort will be rewarded with extra stealth and deniability. Although the bugs that LAVA injects are realistic in many ways, they do not look like mistakes a programmer would have naturally made, which means that manual code review would be very likely to spot them.

(There is one area where LAVA might help a would-be attacker – the analysis we do to locate portions of the program that have access to attacker controlled input could conceivably speed up the process of inserting a backdoor by hand. But this analysis is quite general, and is useful for far more than just adding bugs to programs.)

The Road Ahead

The next post will discuss the actual mechanics of automated bug injection. We'll see how, using some new taint analyses in PANDA, we can analyze a program to find small modifications that cause attacker-controlled input to reach sensitive points in the program and selectively trigger memory safety errors when the input is just right.

Once we understand how LAVA works, the final post will be about evaluation: how can we tell if LAVA succeeded in its goals of injecting massive numbers of realistic bugs? And how well do current bug-finders fare at finding LAVA bugs?


The idea for LAVA originated with Tim Leek of MIT Lincoln Laboratory. Our paper lists authors alphabetically, because designing, implementing and testing it truly was a group effort. I am honored to share a byline with Patrick Hulin, Engin Kirda, Tim Leek, Andrea Mambretti, Wil Robertson, Frederick Ulrich, and Ryan Whelan.

Tuesday, January 5, 2016

PANDA Plugin Documentation

It's been a very long time coming, but over the holiday break I went through and created basic documentation for all 54 currently-available PANDA plugins. Each plugin now includes a manpage-style document in its plugin directory.

You can find a master list of each plugin and a link to its man page here:

Hopefully this will help people get started using PANDA to do some cool reverse engineering!

Friday, October 2, 2015

PANDA VM Update October 2015

The PANDA Virtual machine has once again been updated, and you can download it from:

Notable changes:

  • We fixed a record/replay bug that was preventing Debian Wheezy and above from replaying properly.
  • The QEMU GDB stub now works during replay, so you can break, step, etc. at various points during the replay to figure out what's going on. We still haven't implemented reverse-step though – hopefully in a future release.
  • Thanks to Manolis Stamatogiannakis, the Linux OS Introspection code can now resolve file descriptors to actual filenames. Tim Leek then extended the file_taint plugin to use this information, so file-based tainting should be more accurate now, even if things like dup() are used.
  • We have added support for more versions of Windows in the syscalls2 code.