Automatic Generation of Control Flow Hijacking Exploits for Software Vulnerabilities

University of Oxford Computing Laboratory
MSc Computer Science Dissertation

Automatic Generation of Control Flow Hijacking Exploits for Software Vulnerabilities

Author: Sean Heelan
Supervisor: Dr. Daniel Kroening

September 3, 2009

Contents

List of Figures
List of Tables
List of Code Listings
Acknowledgements
Abstract

1 Introduction
  1.1 Introduction
  1.2 Motivation
  1.3 Related Work
  1.4 Thesis
  1.5 Contributions of this Work
  1.6 Overview

2 Problem Definition
  2.1 Operating System and Architecture Details
    2.1.1 CPU Architecture
    2.1.2 Operating system
  2.2 Run-time protection mechanisms
    2.2.1 Address Space Layout Randomisation
    2.2.2 Non-Executable Memory Regions
    2.2.3 Stack Hardening
    2.2.4 Heap Hardening
    2.2.5 Protection Mechanisms Considered
  2.3 Computational Model
  2.4 Security Vulnerabilities
    2.4.1 When is a Bug a Security Vulnerability?
    2.4.2 Definition of an Exploit
  2.5 Direct Exploits
    2.5.1 Manually Building Direct Exploits
    2.5.2 Components Required to Generate a Direct Exploit
  2.6 Indirect Exploits
    2.6.1 Manually Building Indirect Exploits
    2.6.2 Components Required to Generate an Indirect Exploit
  2.7 The Problem we Aim to Solve

3 Algorithms for Automatic Exploit Generation
  3.1 Stage 1: Instrumentation and Run-time Analysis
    3.1.1 Dynamic Binary Instrumentation
    3.1.2 Taint Analysis
    3.1.3 Building the Path Condition
  3.2 Stage 2: Building the Exploit Formula
    3.2.1 An Exploit Generation Algorithm
    3.2.2 Processing Taint Analysis Information to Find Suitable Shellcode Buffers
    3.2.3 Gaining Control of the Program's Execution
    3.2.4 Building the Exploit Formula
  3.3 Stage 3: Solving the Exploit Formula
    3.3.1 Quantifier-free, Finite-precision, Bit-vector Arithmetic
    3.3.2 Decision Procedures for Bit-vector Logic

4 System Implementation
  4.1 Binary Instrumentation
    4.1.1 Hooking System Calls
    4.1.2 Hooking Thread Creation and Signals
    4.1.3 Hooking Instructions for Taint Analysis
    4.1.4 Hooking Instructions to Detect Potential Vulnerabilities
    4.1.5 Hooking Instructions to Gather Conditional Constraints
  4.2 Run-time Analysis
    4.2.1 Taint Analysis
    4.2.2 Gathering Conditional Constraints
    4.2.3 Integrity Checking of Stored Instruction and Function Pointers
    4.2.4 Integrity Checking for Write Instructions
  4.3 Exploit Generation
    4.3.1 Shellcode and Register Trampolines
    4.3.2 Locating Potential Shellcode Buffers
    4.3.3 Controlling the EIP
    4.3.4 Constructing the Path Condition
    4.3.5 From Formula to Exploit

5 Experimental Results
  5.1 Testing Environment
    5.1.1 Shellcode Used
  5.2 Stored Instruction Pointer Corruption
    5.2.1 Generating an Exploit for a strcpy based stack overflow
    5.2.2 Generating an Exploit for the XBox Media Center Application
  5.3 Stored Pointer Corruption
    5.3.1 Generating an Exploit in the Presence of Arithmetic Modifications
  5.4 Write Operand Corruption

6 Conclusion and Further Work
  6.1 Automatic Memory-Layout Manipulation
  6.2 Multi-Path Analysis
  6.3 Identifying Memory Corruption Side-Effects
  6.4 Write-N-Bytes-Anywhere Vulnerabilities
  6.5 Automatic Shellcode Generation
  6.6 Defeating Protection Mechanisms
  6.7 Assisted Exploit Generation
  6.8 Conclusion

Bibliography

Appendices

A Sample Vulnerabilities
  A.1 Vulnerability 1
  A.2 XBMC Vulnerability
  A.3 Function Pointer Vulnerability (No Arithmetic Modification of Input)
  A.4 Function Pointer Vulnerability (Arithmetic Modification of Input)
  A.5 Corrupted Write Vulnerability

B Sample Exploits
  B.1 Stack overflow (strcpy) Exploit
  B.2 XBMC Exploit
  B.3 Function Pointer Exploit (No Arithmetic Modification of Input)
  B.4 Function Pointer Exploit (Arithmetic Modification of Input)
  B.5 Write Operand Corruption Exploit
  B.6 AXGEN Sample Run

List of Figures

2.1 Stack convention diagram
2.2 Updating M: mov DWORD PTR [eax], 0x00000000
2.3 Updating M: mov DWORD PTR [eax], 0x01020304
2.4 Stack configuration before/after the vulnerable strcpy
2.5 Returning to shellcode via a register trampoline
3.1 High level algorithm for automatic exploit generation
3.2 A simple lattice describing taintedness
3.3 A taint lattice ordered on arithmetic complexity
3.4 A taint lattice ordered on arithmetic and conditional complexity

List of Tables

5.1 Run-time Analysis Information
5.2 Taint Buffer Analysis
5.3 Exploit Formula Statistics
5.4 Run-time Analysis Information
5.5 Taint Buffer Analysis (Top 5 Shellcode Buffers, ordered by size)
5.6 Exploit Formula Statistics
5.7 Run-time Analysis Information
5.8 Taint Buffer Analysis
5.9 Exploit Formula Statistics
5.10 Run-time Analysis Information
5.11 Taint Buffer Analysis
5.12 Exploit Formula Statistics

List of Code Listings

2.1 “Stack-based overflow vulnerability”
2.2 “Stack-based overflow vulnerability (write offset corruption)”
4.1 “Filtering x86 instructions”
4.2 “Determining the operand types for a mov instruction”
4.3 “Inserting the analysis routine callbacks for a mov instruction”
4.4 “Inserting a callback on EFLAGS modification”
4.5 “Inserting callbacks on a conditional jump”
4.6 “Simulating a mov instruction”
4.7 “The variables defined in a TaintByte object”
4.8 “A structure for storing the operands involved in modifying the EFLAGS”
4.9 “Updating the operands associated with EFLAGS indices”
4.10 “Checking if the value at ESP is tainted for a ret instruction”
4.11 “Constructing a set of shellcode buffers”
4.12 “Gaining control of the EIP for a direct exploit”
4.13 “Recursively building the path condition”
4.14 “Encoding conditional jump instructions”
5.1 “A strcpy based stack overflow”
5.2 “A function pointer overflow”
5.3 “Arithmetic modification of the input”
5.4 “A function containing a write corruption vulnerability”
6.1 “Single-path analysis leads to incomplete transition relations”
A.1 “A strcpy vulnerability”
A.2 “XBMC vulnerable function”
A.3 “A function pointer overflow”
A.4 “A function pointer overflow with linear arithmetic”
A.5 “A write-based vulnerability”
B.1 “A stack overflow exploit”
B.2 “An exploit for XBMC”
B.3 “A function pointer overflow exploit”
B.4 “A function pointer overflow exploit with linear arithmetic”
B.5 “An exploit for write operand corruption”

Acknowledgements

First and foremost I would like to thank my supervisor, Dr. Daniel Kroening, for his assistance and feedback during the past few months. I am indebted to Dr. Kroening as he initially proposed this project when I had little idea what I wanted to work on beyond ’something combining verification techniques and security vulnerabilities’. I am also grateful to the developers of the DynamoRIO and Pin binary instrumentation tools. They have developed excellent platforms for the construction of program analysis tools and, just as importantly, they have countless examples, detailed documentation and active user-groups that provide timely answers. As with all projects I’ve worked on in the security domain I have once again received advice from countless individual researchers and hackers.


I would particularly like to mention the #formal members who are among the most intelligent and motivated people I’ve had the pleasure of talking with. Their help was crucial as I got to grips with the practical problems of real-world automated program analysis. In particular I would like to thank William Whistler who is essentially a walking copy of the Intel manuals and endured many of my early thesis drafts. Finally, I would like to thank my family who have supported me during the course of this dissertation. Without their assistance this work would not have been possible.

Abstract

Software bugs that result in memory corruption are a common and dangerous feature of systems developed in certain programming languages. Such bugs are security vulnerabilities if they can be leveraged by an attacker to trigger the execution of malicious code. Determining if such a possibility exists is a time-consuming process and requires technical expertise in a number of areas. Often the only way to be sure that a bug is in fact exploitable by an attacker is to build a complete exploit. It is this process that we seek to automate.

We present a novel algorithm that integrates data-flow analysis and a decision procedure with the aim of automatically building exploits. The exploits we generate are constructed to hijack the control flow of an application and redirect it to malicious code. Our algorithm is designed to build exploits for three common classes of security vulnerability: stack-based buffer overflows that corrupt a stored instruction pointer, buffer overflows that corrupt a function pointer, and buffer overflows that corrupt the destination address used by instructions that write to memory.

For these vulnerability classes we present a system capable of generating functional exploits in the presence of complex arithmetic modification of inputs and arbitrary constraints. Exploits are generated using dynamic data-flow analysis in combination with a decision procedure. To the best of our knowledge the resulting implementation is the first to demonstrate exploit generation using such techniques. We illustrate its effectiveness on a number of benchmarks including a vulnerability in a large, real-world server application.

Chapter 1

Introduction

1.1 Introduction

In this work we will consider the problem of automatic generation of exploits for software vulnerabilities. We provide a formal definition for the term “exploit” in Chapter 2 but, informally, we can describe an exploit as a program input that results in the execution of malicious code1. We define malicious code as a sequence of bytes injected by an attacker into the program that subverts the security of the targeted system. This is typically called shellcode. Exploits of this kind often take advantage of programmer errors relating to memory management or variable typing in applications developed in C and C++.

These errors can lead to buffer overflows in which too much data is written to a memory buffer, resulting in the corruption of unintended memory locations. An exploit will leverage this corruption to manipulate sensitive memory locations with the aim of hijacking the control flow of the application. Such exploits are typically built by hand and require manual analysis of the control flow of the application and the manipulations it performs on input data. In applications that perform complex arithmetic modifications or impose extensive conditions on the input this is a very difficult task.
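The corruption described above can be made concrete with a small simulation. The following Python sketch models a stack frame as a flat byte region in which an unchecked copy into a buffer overwrites the saved instruction pointer stored above it. The layout (a 32-byte buffer, a 40-byte frame) and all addresses are invented for illustration; a real frame's layout depends on the compiler and platform.

```python
# Simulate a stack frame as a flat byte region. Layout (illustrative only):
#   [0:32]  -> char buf[32]
#   [32:36] -> saved frame pointer
#   [36:40] -> saved instruction pointer (return address)
FRAME_SIZE = 40
SAVED_EIP = slice(36, 40)

def vulnerable_copy(frame: bytearray, data: bytes) -> None:
    """Mimics an unchecked strcpy into buf: writes len(data) bytes starting
    at the buffer, with no bounds check against its 32-byte size."""
    frame[0:len(data)] = data

frame = bytearray(FRAME_SIZE)
frame[SAVED_EIP] = (0x08048494).to_bytes(4, "little")  # legitimate return address

# A 40-byte input: 36 bytes of padding followed by an attacker-chosen address.
payload = b"A" * 36 + (0xbffff010).to_bytes(4, "little")
vulnerable_copy(frame, payload)

# The saved instruction pointer now holds the attacker-chosen value.
hijacked = int.from_bytes(frame[SAVED_EIP], "little")
print(hex(hijacked))  # -> 0xbffff010
```

When the function "returns", control would transfer to whatever address now occupies the saved instruction pointer slot, which is precisely the corruption a direct exploit leverages.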

The task resembles many problems to which automated program analysis techniques have already been successfully applied [38, 27, 14, 43, 29, 9, 10, 15]. Much of this research describes systems that consist of data-flow analysis in combination with a decision procedure. Our approach extends techniques previously used in the context of other program analysis problems and also encompasses a number of new algorithms for situations unique to exploit generation.

1.2 Motivation

Due to constraints on time and programmer effort it is necessary to triage software bugs into those that are serious versus those that are relatively benign.

In many cases security vulnerabilities are of critical importance but it can be difficult to decide whether a bug is usable by an attacker for malicious purposes or not. Crafting an exploit for a bug is often the only way to reliably determine if it is a security vulnerability. This is not always feasible though as it can be a time consuming activity and requires low-level knowledge of file formats, assembly code, operating system internals and CPU architecture. Without a mechanism to create exploits developers risk misclassifying bugs.

Classifying a security-relevant bug incorrectly could result in customers being exposed to the risk for an extended period of time. On the other hand, classifying a benign bug as security-relevant could slow down the development process and cause extensive delays as it is investigated. As a result, there has been increasing interest in techniques applicable to Automatic Exploit Generation (AEG).

1 We consider exploits for vulnerabilities resulting from memory corruption. Such vulnerabilities are among the most common encountered in modern software.

They are typically exploited by injecting malicious code and then redirecting execution to that code. Other vulnerability types, such as those relating to design flaws or logic problems, are not considered here.

The challenge of AEG is to construct a program input that results in the execution of shellcode. As the starting point for our approach we have decided to use a program input that is known to cause a crash. Modern automated testing methods routinely generate many of these inputs in a testing session, each of which must be manually inspected in order to determine the severity of the underlying bug.

Previous research on automated exploit generation has addressed the problem of generating inputs that corrupt the CPU’s instruction pointer. This research is typically criticised by pointing out that crashing a program is not the same as exploiting it [1]. Therefore, we believe it is necessary to take the AEG process a step further and generate inputs that not only corrupt the instruction pointer but result in the execution of shellcode. The primary aim of this work is to clarify the problems that are encountered when automatically generating exploits that fit this description and to present the solutions we have developed.

We perform data-flow analysis over the path executed as a result of supplying a crash-causing input to the program under test. The information gathered during data-flow analysis is then used to generate propositional formulae that constrain the input to values that result in the execution of shellcode. We motivate this approach by the observation that, at a high level, we are trying to answer the question “Is it possible to change the test input in such a way that it executes attacker-specified code?”.
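This question can be phrased as a constraint-satisfaction problem. The toy Python sketch below shows the shape of the idea: predicates recorded from conditionals that the tainted input influenced form a path condition, an extra predicate encodes the attacker's goal, and a solver searches for an input satisfying both. The conditions, constants and single-byte input are invented for illustration; the system described later builds bit-vector formulae and hands them to a decision procedure rather than brute-forcing.

```python
# Toy "path condition": for each conditional the tainted input influenced,
# we record a predicate over the input byte that must hold for execution
# to follow the same (crash-causing) path. All conditions are invented.
path_condition = [
    lambda b: (b + 0x10) & 0xff != 0x00,  # e.g. 'cmp al, 0' after 'add al, 0x10'
    lambda b: b & 0xff > 0x1f,            # e.g. a printable-range check
]

# Exploit condition: after the same arithmetic, the byte must equal a value
# chosen by us (say, one byte of a jump target or of the shellcode).
exploit_condition = lambda b: (b + 0x10) & 0xff == 0x41

def solve(conds):
    """Brute-force 'decision procedure' over a single 8-bit variable."""
    for b in range(256):
        if all(c(b) for c in conds):
            return b
    return None  # formula unsatisfiable: no exploit along this path

b = solve(path_condition + [exploit_condition])
print(hex(b))  # -> 0x31: follows the crash path AND lands 0x41 in place
```

A real input consists of many such variables with constraints relating them, which is why an efficient bit-vector decision procedure replaces the brute-force search.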

At its core, this problem involves analysing how data is moved through program memory and what constraints are imposed on it by conditional statements in the code.

1.3 Related Work

Previous work can be categorised by its approach to data-flow analysis and its final result. On one side is research based on techniques from program analysis and verification. These projects typically use dynamic run-time instrumentation to perform data-flow analysis and then build formulae describing the program’s execution.

While several papers have discussed how to use such techniques to corrupt the CPU’s instruction pointer, they do not discuss how this corruption is exploited to execute shellcode. Significant challenges are encountered when one attempts to take this step from crashing the program to execution of shellcode. Alternatives to the above approach are demonstrated in tools from the security community [37, 28] that use ad-hoc pattern matching in memory to relate the test input to the memory layout of the program at the time of the crash. An exploit is then typically generated by using this information to complete a template.

This approach suffers from a number of problems as it ignores modifications and constraints applied to program input. As a result it can produce both false positives and false negatives, without any information as to why the exploit failed to work or failed to be generated. The following are papers that deal directly with the problem of generating exploits:

(i) Automatic Patch-Based Exploit Generation is Possible: Techniques and Implications – This paper [11] is the closest academic paper, in terms of subject matter, to our work.

An approach is proposed and demonstrated that takes a program P and a patched version P′, and produces a sample input for P that exercises the vulnerability patched in P′. Using the assumption that any new constraints added by the patched version relate to the vulnerability, they generate an input that violates these constraints but passes all others along a path to the vulnerability point (e.g. the first out-of-bounds write). The expected result of providing such an input to P is that it will trigger the vulnerability.

Their approach works on binary executables, using data-flow analysis to derive a path condition and then solving such conditions using the decision procedure STP to produce a new program input. As the generated program input is designed to violate the added constraints it will likely cause a crash due to some form of memory corruption. The possibility of generating an exploit that results in shellcode execution is largely ignored. In the evaluation a specific case in which the control flow was successfully hijacked is given, but how this would be achieved automatically is not described.

(ii) Convicting Exploitable Software Vulnerabilities: An Efficient Input Provenance Based Approach – This paper [35] again focuses on exploit generation but uses a “suspect input” as its starting point instead of the differences between two program binaries. Once again data-flow analysis is used to build a path condition which is then used to generate a new input using a decision procedure. User interaction is required to specify how to mutate input to meet certain path conditions. As in the previous case, the challenges and benefits involved in generating an exploit that results in shellcode execution are not discussed.

(iii) Byakugan – Byakugan [28] is an extension for the Windows debugger, WinDbg, that can search through program memory and attempt to match sequences of bytes from an input to those found in memory. It can work with the Metasploit [39] tool to assist in the generation of exploits. In terms of the desired end result, this is similar to our approach although it suffers from the limitations of pattern matching. When searching in memory the tool accounts for common modifications to data, such as converting to upper/lower case and unicode encoding, but will miss all others.

It makes no attempt at tracking path conditions and as a result can offer no guarantees on what parts of the input are safe to change and still trigger the vulnerability.

(iv) Automated Exploit Development, The future of exploitation is here – This document [37] is a whitepaper describing the techniques used in the Prototype-8 tool for automated exploit generation. The generation of control flow hijacking exploits is the focus of the tool. This is achieved by attaching a debugger to a running process and monitoring its execution for erroneous events as test cases are delivered to the program.

When such an event occurs the tool follows a static set of rules to create an exploit based on what type of vulnerability was discovered (i.e. it distinguishes between stack and heap overflows). These rules attempt to determine what parts of the input data overwrote what sensitive data and hence may be used to gain control of the program execution. Once this is determined these values are used to generate an exploit based on a template for the vulnerability type. No attempt is made to determine constraints that may exist on this input or to customise the exploit template to pass these constraints.

(v) Automatic Discovery of API-Level Exploits – In this paper [25] a framework is presented to model the details of the APIs provided by functions such as printf. Once the effects of these API features have been formalised they can be used in predicates to specify the conditions required for an exploit. These predicates can then be automatically solved to provide API call sequences that exploit a vulnerability. This approach is restricted to creating exploits where all required memory corruption can be introduced via a single API, such as printf.
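The core limitation of the pattern-matching tools above can be sketched in a few lines. The hypothetical helper below mimics a Byakugan-style search: it finds input that was copied into memory verbatim (or case-flipped), but as soon as the program applies any other transformation, even adding 1 to each byte, the match is lost, with no information as to why. The function and the memory snapshots are invented for illustration.

```python
def find_input_in_memory(memory: bytes, user_input: bytes, min_len: int = 4):
    """Naive pattern-matching search: report offsets where a run of input
    bytes appears verbatim (or case-flipped) in a memory snapshot."""
    candidates = [user_input, user_input.upper(), user_input.lower()]
    hits = []
    for variant in dict.fromkeys(candidates):  # dedupe, preserve order
        off = memory.find(variant[:min_len])
        if off != -1:
            hits.append(off)
    return hits

user_input = b"pass"
# Memory snapshot where the program stored the input verbatim...
mem_copied = b"\x00" * 8 + b"pass" + b"\x00" * 4
# ...and one where it added 1 to every byte before storing it.
mem_modified = b"\x00" * 8 + bytes((c + 1) & 0xff for c in b"pass") + b"\x00" * 4

print(find_input_in_memory(mem_copied, user_input))    # -> [8]
print(find_input_in_memory(mem_modified, user_input))  # -> []
```

Tracking the data-flow through the transforming instructions, as our approach does, recovers exactly the relationship that this byte-level search loses.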

As well as the above papers, the BitBlaze project [50] has resulted in a number of papers that do not deal explicitly with the generation of exploits but do solve related problems. Approaching the issue of automatically generating signatures for vulnerabilities [9, 10], they describe a number of useful techniques for gathering constraints up to a particular vulnerability point and using these constraints to describe data that might constitute an exploit. There is also extensive previous work on data-flow analysis, taint propagation, constraint solving and symbolic execution.

Combinations of these techniques to other ends, such as vulnerability discovery [27, 14], dynamic exploit detection [43] and general program analysis [29], are now common.

1.4 Thesis

Our thesis is as follows:

Given an executable program and an input that causes it to crash there exists a sound algorithm to determine if a control flow hijacking exploit is possible. If a control flow hijacking exploit is possible there exists an algorithm that will automatically generate this exploit.

The purpose of this work is to investigate the above thesis and attempt to discover and implement a satisfying algorithm.

Due to the sheer number of ways in which a program may crash, and a vulnerability be exploited, it is necessary to limit our research to a subset of the possible exploit types. In our investigation we impose the following practical limits2:

1. Data derived from user input corrupts a stored instruction pointer, function pointer or the destination location and source value of a write instruction.

2. Address space layout randomisation may be enabled on the system but no other exploit prevention mechanisms are in place.

3. Shellcode is not automatically generated and must be provided to the exploit generation algorithm.

1.5 Contributions of this Work

In the previous work there is a gap between the practicality of systems like Byakugan and the reliability and theoretical soundness of systems like [11]. In an attempt to close this gap we present a novel system that uses data-flow analysis and constraint solving to generate control flow hijacking exploits. We extend previous research by describing and implementing algorithms to not only crash a program but to hijack its control flow and execute malicious code. This is crucial if we are to reliably categorise a bug as exploitable or not [1]. The contributions of this dissertation are as follows:

1. We present the first formalisation of the core requirements for a program input to hijack the control flow of an application and execute malicious code. This contains a description of the conditions on the path taken by such an input for it to be an exploit, as well as the information required to generate such an input automatically. This formalisation is necessary if we are to discuss generating such exploits in the context of existing research on software verification and program analysis. The formalisation should also prove useful for future investigations in this area. (Chapter 2).

2. Building on the previous definitions we present several algorithms to extract the required information from a program at run-time.
First, we present instrumentation and taint analysis algorithms that are called as the program is executed. We then describe a number of algorithms to process the data gathered during run-time analysis and from this data build a propositional formula expressing the conditions required to generate an exploit. Finally, we illustrate how one can build an exploit from such a formula using a decision procedure. (Chapters 3 & 4). 3.

We present the results of applying the implementation of the above algorithms to a number of vulnerabilities. These results highlight some of the differences between test-case generation and exploit generation. They also provide the test of our thesis and, to the best of our knowledge, are the first demonstration of exploit generation using data-flow analysis and a decision procedure. (Chapter 5).

4. We outline a number of future research areas we believe are important to the process of automatic exploit generation. These areas may provide useful starting points for further research on the topic. (Chapter 6).

1.6 Overview

Chapter 2 consists of a description of how the exploit types we will consider function, followed by a formalisation of the components required to build such exploits. Chapter 3 contains the main description of our algorithm and the theory it is built on. In Chapter 4 we outline the implementation details related to the algorithms described in Chapter 3. Chapter 5 contains the results of running our system on both test and real-world vulnerabilities. Finally, Chapter 6 discusses suggestions for further work and our conclusions.

²The meaning and implications of these limits are explained in later chapters.

Chapter 2

Problem Definition

The aim of this chapter is to introduce the vulnerability types that we will consider and describe the main problems involved in generating exploits for these vulnerability types. We will then formalise the relevant concepts so they can be used in later chapters. We begin by describing some system concepts that are necessary for the rest of the discussion.

2.1 Operating System and Architecture Details

2.1.1 CPU Architecture

CPU architectures vary greatly in their design and instruction sets.

As a result, we will tailor our discussion and approach towards a particular standard. From this point onwards, it is assumed our targeted architecture is the 32-bit Intel x86 CPU. On this CPU a byte is 8 bits, a word is 2 bytes and a double word, which we will refer to as a dword, is 4 bytes. The x86 has a little-endian, Complex Instruction Set Computer (CISC) architecture. Each assembly level instruction on such an architecture can have multiple low-level side effects.

Registers

The 32-bit x86 processors define a number of general purpose and specialised registers.

While the purpose of most of these registers is unimportant for our discussion we must consider four in particular. These are as follows:

1. Extended Instruction Pointer (EIP) – Holds the memory location of the next instruction to be executed by the CPU.

2. Extended Base Pointer (EBP) – Holds the address of the current stack frame. This will be explained in our description of the stack memory region.

3. Extended Stack Pointer (ESP) – Holds the address of the top of the stack. Again, this will be explained in our description of the stack.

4.

Extended Flags Register (EFLAGS) – This register represents 32 different flags that may be set or unset (usually as a side effect) by an instruction. Some common flags are the zero flag, sign flag, carry flag, parity flag and overflow flag, which indicate different properties of the last instruction to be executed. For example, if the operands to the sub instruction are equal then the zero flag will be set to true (a number of other flags may also be modified). While registers are dword sized, some of their constituent bytes may be directly referenced.

For example, a reference to EAX returns the full 4 byte register value, AX returns the first 2 bytes of the EAX register, AL returns the first byte of the EAX register, and AH returns the second byte of the EAX register. A full description of all referencing modes available for the various registers can be found in the Intel documentation [19].

2.1.2 Operating system

As exploit techniques vary between operating systems (OS) we will have to focus our attention on a particular OS where implementation details are discussed.

We decided on Linux¹ due to the availability of a variety of tools for program analysis [41, 8, 36] and formula solving [23, 12] that would prove useful during our implementation. The most common executable format for Linux binaries is the Executable and Linking Format (ELF). At run-time, each ELF binary consists of a number of segments, the details of which are important for our discussion. Of particular interest to us in this section are the stack and .dtors segments.

The Stack

The stack is a region of memory used to store function local variables and function arguments.

As mentioned earlier, the top of the stack is pointed to by the ESP register. The stack grows from high to low memory addresses, as illustrated in Figure 2.1, and memory can be allocated on it by subtracting the number of bytes required from the ESP.

[Figure 2.1: Stack convention diagram — each slot holds 4 bytes; data is written towards 0xbfXXXXXX while the stack grows towards 0x00000000.]

The push and pop instructions insert/remove their operand from the stack and decrement/increment the value in ESP².

Allocation of storage space on the stack is considered static, in that the compiler can decide at compile time the size of allocations and embed the required instructions to modify the ESP. As well as storing local variables and function arguments, the stack is used during function calls to store metadata concerning the caller and the callee. When function A calls function B it is necessary to store the current value of the EIP, in order for A to continue execution once B has returned. This is done automatically by inserting the value of the EIP at the top of the stack when a call instruction is executed.

In discussing the x86 registers we mentioned that the EBP register points to the current stack frame. A function's stack frame contains its local variables and the EBP register is used as an offset base to reference these variables. As there is only one EBP register per CPU, it is necessary to store the current value when a new function is called, and to restore it when the function returns. Like the EIP, this data is stored in-band on the stack and is inserted directly after the stored EIP. Once the current EBP value has been pushed to the stack the ESP value is copied to the EBP register.

At the end of every function³ it is then necessary to restore the stored EBP and EIP. This is handled by a sequence of instructions inserted by the compiler. It typically looks like the following, although the leave instruction can be used to replace the mov and pop combination.

Example 2.1 x86 function epilogue

mov esp, ebp ; Move the address of the stored EBP into the ESP register
pop ebp      ; Pop the value of the stored EBP into the EBP register
ret          ; Pop the value of the stored EIP into the EIP register

Constructors and Destructors

The gcc compiler allows programmers to register functions that will be called before and after the main function. Such functions are referred to as constructors and destructors, the addresses of which are stored in the .ctors and .dtors segments respectively. These segments exist even if there are no programmer-registered functions⁴.

¹Ubuntu 8.04, kernel 2.6.27.
²The amount the stack pointer changes by when a push, pop, ret, or call instruction is encountered depends on the default width of the current architecture. For 32-bit x86 processors that is 4 bytes (32 bits).

The layout of both sections is the same: each begins with the marker value 0xffffffff, followed by zero or more function addresses, and is terminated by the value 0x00000000. The .dtors section in particular is relevant as it will allow us to demonstrate a particular class of exploit in later sections.

2.2 Run-time protection mechanisms

All major operating systems now include a variety of techniques to make it more difficult for exploits to succeed. These protection mechanisms need to be considered as they change the possibilities available when creating an exploit.

2.2.1 Address Space Layout Randomisation

Address Space Layout Randomisation (ASLR) aims to introduce a certain degree of randomness into the addresses used by a program at run-time [52]. It is designed to prevent an attacker from predicting the location of data within the memory regions of a running program. ASLR can be enabled for different memory regions independently and thus it is possible that the stack is randomised but the heap is not. Due to such possibilities, ASLR on its own is often easily defeated by an exploit [32, 40], especially on 32-bit architectures [48] where the number of bits available for randomisation is relatively small.

2.2.2 Non-Executable Memory Regions

This mechanism, typically called NoExec or Data Execution Prevention (DEP), is an approach to preventing exploits from succeeding by marking certain pages of a program's address space as non-executable [53]. It relies on software or hardware to prevent the execution of data within these pages. This protection mechanism is based on the observation that many exploits attempt to execute shellcode located in areas not typically required to be marked as executable, e.g. the stack and heap. As with ASLR, it is possible to bypass NoExec under certain conditions [49].

³As an optimisation, in some situations the compiler may generate code that does not update the EBP when creating a new stack frame. In such cases the old EBP will not be stored when the function is called.
⁴The most recent versions of gcc have started adding in checks at compile time to prevent this technique. http://intheknowsecurity.blogspot.com/2009/08/exploitation-via-dtors-and-mitigation.html

2.2.3 Stack Hardening

Compilers for a number of operating systems now include techniques that aim to prevent stack overflows being used to create exploits.

This can consist of a variety of run-time and compile-time checks and changes but the most common are the implementation of a stack ’canary’ [54] and the reordering of stack-based variables [24]. A stack canary is a supposedly unpredictable value placed below the stored instruction pointer/base pointer on the stack that is checked just before the function returns. If it is found to be corrupted the program aborts execution. The other common method involved in stack hardening is to rearrange local variables so that the buffers are placed above other variables on the stack.

The aim is to ensure that if a buffer overflow does occur then it corrupts the stack canary rather than other local variables. Without this rearranging an attacker may be able to gain control of the program before the stack canary is checked by corrupting local variables.

2.2.4 Heap Hardening

Heap hardening primarily consists of checking the integrity of metadata that is stored in-band between chunks of data on the heap. This metadata describes a number of features of a given chunk of heap data, such as its size, pointers to its neighbouring chunks, and other status flags.

When a buffer overflow occurs on the heap it is possible to corrupt this metadata. As a result, many major operating systems perform a number of integrity checks on the stored values before allocating/deallocating memory. The aim is, once again, to make exploits for heap overflows more difficult to build by forcing any corrupted data to satisfy the integrity checks.

2.2.5 Protection Mechanisms Considered

In our approach we will specifically consider the problems posed by ASLR. ASLR is encountered on Windows, Linux and Mac OS X and the methods for defeating it are relatively similar.

There are methods of evading the other protection mechanisms and in many cases it may be possible to use an approach similar to ours to do so. For instance, the typical approach to avoiding DEP consists of redirecting the instruction pointer into existing library code, instead of attacker-specified shellcode. In this case the attacker injects a chain of function addresses and arguments instead of shellcode but the method of automatically generating such an exploit is otherwise the same. Similarly, in cases where the stack hardening is flawed it may be possible to predict the canary value.

Our approach could also be extended to cover this situation if provided with the predicted value. Heap-based metadata overflows are an important category of vulnerability not considered here. Further research is necessary in order to determine the feasibility of automatically generating exploits for such vulnerabilities. Due to heap hardening our approach on its own is not suitable for the generation of heap exploits. Often, heap-based vulnerabilities require careful manipulation of the heap layout in order to generate an exploit.

Determining methods to do this automatically is left as further work and will be critical in extending AEG techniques to heap-based vulnerabilities.

2.3 Computational Model

In this section we will provide formal definitions for the computational model used when discussing the analysis of a program P.

Definition 1 (Definition of a Program). In general, every program P consists of a potentially infinite number of paths Π. Each path ω ∈ Π is a potentially infinite sequence of pairs [i0, i1, …, in, …], where each pair i contains an address iaddr and the associated assembly level instruction at that address iins.

Each pair can appear one or more times. The nth instruction of a path ω is denoted by ω(n). Two paths ωj and ωk are said to be different if there exists a pair ω(n) such that ωj(n) ≠ ωk(n).

Definition 2 (Definition of an Instruction). When referring to the instruction iins in a pair i we will use the identifier of the pair itself, i, and specify the address explicitly as iaddr if we require it. Instructions can have source and destination operands, where a source operand is read from and a destination operand is written to.

The set of source operands for an instruction i is referenced by isrcs, while the set of destination operands is referenced by idsts. The sets idsts and isrcs can contain memory locations and/or registers depending on the semantics of the instruction as defined in [19].

Definition 3 (Definition of a Concrete Path). A concrete path ωc is a path where the value of each operand of an instruction i is known, ∀i ∈ ωc. This contrasts with static analysis where we may have to approximate the value of certain operands. In our analysis we assume that for a given input I, there is exactly one corresponding program path.

Therefore, a run of a program in our framework can be defined by P(I) = ωc, where I is an input to the program P resulting in the path ωc being executed. In our approach we consider single, concrete paths in P for analysis. The final concepts of interest to us in the definition of P are its run-time memory M and registers R. We will use P to denote a running instance of P.

Definition 4 (Definition of Program Memory). Assuming A is the set of all valid memory addresses in P and B is the set of all byte values {0x00, …, 0xff} then M is a total function from A to B.

Every running program has such a function defined and any instruction that writes to memory modifies M(a) for all memory addresses a ∈ idsts. We divide the domain of M, the memory addresses, into two subsets. Those memory addresses a ∈ A that can legitimately be used to store data tainted⁵ by user input constitute the set Mu⁶. The complement of Mu in M is the set of addresses a ∈ A that should not be used to store data tainted by user input. We call this set Mm⁷. A subset of Mm is the set MEIP, containing the memory addresses of all values currently in memory that may eventually be used as the instruction pointer, e.g. stored instruction pointers on the stack. To illustrate Definition 4 consider the following assembly code:

Example 2.2 Modifying the memory state of a program

lea eax, 0xbfffdf1a
mov DWORD PTR [eax], 0x00000000
mov DWORD PTR [eax], 0x04030201

The lea instruction does not modify any memory addresses and so the function M is unchanged. The first mov will cause M(a) to be updated ∀a ∈ {0xbfffdf1a, 0xbfffdf1b, 0xbfffdf1c, 0xbfffdf1d}, resulting in the mapping illustrated in Figure 2.2. The second mov instruction modifies the mapping again, resulting in the mapping illustrated in Figure 2.3. Note that in Figure 2.3 the constituent bytes of 0x04030201 are stored in reverse order due to the little-endianness of the x86 architecture. As well as modifying the domain-to-range mappings of M, each instruction may also add and remove values from Mm. For example, if ESP contains the value 0xbfffabcd and a call instruction executes then the address 0xbfffabcd will be added to Mm (and more specifically, MEIP) as it now contains a stored instruction pointer. When the function called returns, 0xbfffabcd is removed from Mm as at that point in the program it does not contain a value we know may eventually be used as the instruction pointer.

Definition 5 (Definition of CPU Registers). The total function R is a mapping from the set of CPU registers to integer values. The domain of R is a set with cardinality equal to the number of available registers on the CPU under consideration. Its members are register identifiers, e.g. EAX, BX, ESP and so on.

⁵In Chapter 3 we will introduce the concept of tainting and taint analysis directly. For now, we consider data tainted if it is directly derived from user input or, recursively, derived from other tainted data.
⁶For example, the addresses of local variables on the stack, or the range of addresses spanned by a chunk of heap allocated memory.
⁷For example, addresses used to store heap metadata, stack canary values, stored instruction pointers, function pointers and locations that will be used as the destination operands in write instructions.

[Figure 2.2: Updating M after mov DWORD PTR [eax], 0x00000000 — the addresses 0xbfffdf1a–0xbfffdf1d all map to 0x00.]

[Figure 2.3: Updating M after mov DWORD PTR [eax], 0x04030201 — the addresses 0xbfffdf1a–0xbfffdf1d map to 0x01, 0x02, 0x03, 0x04 respectively.]

The range of R is the set of dword values {0x0, …, 0xffffffff}⁸.

2.4 Security Vulnerabilities

2.4.1 When is a Bug a Security Vulnerability?

For our purposes we will consider a bug in a program P to be an input B that causes the corruption of some memory address in Mm, and results in P terminating abnormally. We define abnormal termination to be the termination of P as a result of a CPU-generated exception or the failure of an operating system or library-level integrity check, such as the protection mechanisms mentioned earlier. Such a path P(B) is a sequence of instructions [i0, …, in], where one or more instructions result in a write to an address in Mm, and where the instruction in triggers an exception, or is the final instruction in a path triggered by the failure of an integrity check that terminates the program. In some cases, a bug as just described may be categorised as a security vulnerability. This is the Denial of Service (DoS) vulnerability class and it is this class of vulnerabilities that previous work focused on triggering [11, 25, 35]. In other cases, a DoS vulnerability is not a security threat, e.g., a bug that causes a media player to crash.

In cases where DoS does not equate to a security threat it would be premature to immediately triage the bug as not being indicative of a security flaw. At this point, one must analyse the path taken, ωc, and attempt to discover if it is possible for an attacker to manipulate the memory addresses corrupted in such a way as to avoid the abnormal termination of P, and instead execute malicious code. If such an input is possible then the bug is a security vulnerability and we would describe the input X that results in the execution of malicious code as an exploit for P.

2.4.2 Definition of an Exploit

As discussed, previous work on exploit generation has focused on generating exploits that can be categorised as Denial of Service (DoS) attacks. This is a satisfactory first step, but it ignores the difference in severity between a DoS exploit and one that results in malicious code execution. The latter can have a much more costly impact, but is more difficult to create. By extending the definition of an exploit to reflect this distinction we can triage vulnerabilities at a finer level of granularity.

Definition 6 (Definition of an Exploit). We consider an exploit X to be an input to P such that the path P(X) contains the following three components (B, H, S):

1. B is a sequence of instructions that directly corrupt one or more bytes in Mm, the set of memory locations that should not be tainted by user input.

2. H is a sequence of instructions that corrupt a memory location meip ∈ MEIP with the address of the injected shellcode. For example, a stored instruction pointer on the stack, an entry in the .dtors segment, a function pointer and so on.

In some cases the sequence H is the same as the sequence B, expressed H ≡ B, such as when a buffer overflow directly corrupts a stored instruction pointer. In other cases H ≢ B, such as when a buffer overflow corrupts a pointer (B) that is then later used as the destination operand in a write operation. As the destination operand is under our control we can force the instruction to corrupt a value in MEIP when this write instruction takes place.

3. S is the shellcode injected into the process by the exploit. We use the symbol |= to denote that a path contains one or more of the above sequences.

If X is an exploit then P(X) |= B, P(X) |= H and P(X) |= S, or more concisely P(X) |= (B, H, S).

⁸All registers can be legitimately used for data directly derived from user input except for the EIP, ESP and EBP when it is in use as a base pointer. We currently consider these three registers and the general purpose registers in our analysis.

Listing 2.1: “Stack-based overflow vulnerability”

1  #include <fcntl.h>
2  #include <stdlib.h>
3  #include <string.h>
4  #include <unistd.h>
5
6  void func(char *userInput)
7  {
8      char arr[32];
9
10     strcpy(arr, userInput);
11 }
12
13 int main(int argc, char *argv[])
14 {
15     int res, fd = -1;
16     char *heapArr = NULL;
17     fd = open(argv[1], O_RDONLY);
18
19     heapArr = malloc(64*sizeof(char));
20     res = read(fd, heapArr, 64);
21     func(heapArr);
22
23     return 0;
24 }

In all exploits we consider the goal is to corrupt an address in MEIP with the aim of redirecting execution to attacker-specified shellcode. We categorise the type of exploit where H ≡ B as a direct exploit, and as an indirect exploit when H ≢ B. That is, they differ in how the address in MEIP is corrupted. A direct exploit modifies an address in MEIP as part of the initial memory corruption, whereas an indirect exploit modifies an address in Mm, but not in MEIP, that later causes a modification to a value in MEIP.

The problems encountered when generating both types of exploit overlap in certain areas, but are significantly different in others. We will therefore deal with them separately from this point onwards.

2.5 Direct Exploits

Direct exploits are exploits in which H ≡ B. The equivalence of the sequences H and B indicates that in the exploit the value that will be used as the instruction pointer is corrupted during the buffer overflow. This type of exploit is typically targeted at a stack-based overflow of an instruction pointer, or an overflow of a function pointer.

In this section we will first describe how a direct exploit typically functions, using a standard stack-based buffer overflow, and then formalise the problem of automatically generating such an exploit.

2.5.1 Manually Building Direct Exploits

The purpose of this section is to illustrate how direct exploits are manually constructed. We do this by providing a concrete example that demonstrates some of the core issues. We will describe the process for two vulnerability types that can be exploited by direct exploits. These are stored instruction pointer corruption and function pointer corruption.

Stored Instruction Pointer Corruption

The C code in Listing 2.1 shows a program that is vulnerable to a stack-based buffer overflow, and will be used as a working example in this section. The vulnerability is in the function func, which neglects to check the size of userInput before strcpy is used to move it into the local array arr. If more than 32 bytes of input are read in by the program then the call to strcpy will exceed the bounds of arr. We can illustrate the problem by demonstrating the effect on the stack of running the program with the following string as input⁹:

[CCCC*8] + [BBBB] + [AAAA]

Before the strcpy at line 10, the stack is arranged like the left hand side of Figure 2.4, whereas after the strcpy it is arranged like the right hand side.

[Figure 2.4: Stack configuration before/after the vulnerable strcpy — before: arr (32 byte array), stored EBP 0xbfc84908, stored EIP 0x0804849f; after: arr filled with 0x43434343, stored EBP 0x42424242, stored EIP 0x41414141.]

We can see that 32 'C' characters (0x43) have filled arr, and the extra 8 bytes have corrupted both the stored EBP and the stored EIP, with four Bs (0x42) and four As (0x41) respectively.

Exploiting Stored Instruction Pointer Corruption

For an input X to be considered an exploit for P we must ensure the path P(X) contains the three components of the tuple (B, H, S), introduced in Definition 6. For a direct exploit this means it must overflow a buffer and corrupt an address in MEIP, in such a way as to result in the execution of injected shellcode. Approaches to this problem have been described in a variety of sources, but the first complete discussion is usually attributed to Aleph1 in the article Smashing the Stack for Fun and Profit [2]. To explain the technique let us use the example program in Listing 2.1 as P.

As discussed, it is possible to overflow the stored EBP, and then the stored EIP, by supplying more than 32 bytes of input. Let our initial candidate exploit X be the following string:

[CCCC*8] + [BBBB] + [AAAA]

As in Figure 2.4, the stored EBP is overwritten with 'BBBB' and the stored EIP with 'AAAA'. The application will then continue execution until the function returns. At this point, the ret instruction will pop 'AAAA' (0x41414141) into the EIP, which will cause an exception to be generated when the program attempts to execute code at this address, as it is usually not mapped as usable.

We will call the address of the ret instruction the pivot point, as it is the instruction where control flow pivots from the standard flow to our injected shellcode. We will denote the address of the pointer value we aim to corrupt as mEIP, where mEIP ∈ MEIP. The value at this address is called the pivot destination, as it is the location control flow pivots to at the pivot point. We observe that currently P(X) |= (B, H). For P(X) |= (B, H, S) to hold we must change our input to include shellcode, and ensure that at the pivot point the pivot destination is the address of this code. Our approach to achieve this will depend on whether ASLR is enabled or not. If ASLR is not enabled, then the addresses of variables on the stack are constant between runs of the program. In this case, the array arr could be used to contain our shellcode and its address used to overwrite mEIP¹⁰. It is possible to hardcode this address into the exploit as there is no randomisation in the address space.

⁹A*B represents the repetition of the string of characters A, B times. The + operator is used to represent concatenation of two elements within square brackets.

Once the value at mEIP is moved to the EIP register by the ret instruction the code in arr will be executed. Our input string could appear as follows:

[32 bytes of shellcode] + [4 bytes¹¹ (stored EBP)] + [Address of arr]

If ASLR of the stack is enabled then we cannot overwrite the stored EIP with a hardcoded address, as the memory addresses will change between runs of the program. In this case, we need to use what is called a trampoline register [32] to ensure a reliable exploit. A trampoline is an instruction, found at a non-randomised address in P, that transfers execution to the address held in a register, e.g. jmp ECX or call ECX will both put the value of the ECX register into the EIP register. We can use a register trampoline if at the pivot point there exists a register r that contains the address of an attacker-controllable memory buffer b. The purpose of the trampoline is to indirectly transfer control flow, using the address stored in r, to b. Instead of overwriting mEIP with the address of b we instead overwrite it with the address of a register trampoline that uses r¹². For example, if the register ECX contains the address of b, then an instruction like jmp ECX is a suitable trampoline.

We can search for such an instruction in any part of the address space of P that is marked as executable and non-randomised. For the code in Listing 2.1, it turns out that at the ret instruction, the register EAX points to the start of arr. This means that if we can find a trampoline that uses EAX as its destination, we can then modify our input to defeat ASLR. Our exploit would then look as follows:

[32 bytes of shellcode] + [4 bytes (stored EBP)] + [Trampoline address]

Providing an input string matching the above template would cause the following events to occur:

i.

On line 10 the strcpy function call will fill the 32 byte buffer arr and then copy 4 bytes over the stored EBP and the address of our trampoline over the stored EIP.

ii. When the function returns the address of our trampoline will be put in the instruction pointer and execution will continue.

iii. The trampoline will be executed, which will jump to the address stored in EAX, the start of arr, at which point our shellcode will be executed.

This is illustrated in Figure 2.5, where we have found the instruction call EAX at 0x0804846f.

Function Pointer Corruption

Function pointers are a feature of C/C++ that allow the address of a function to be stored in a variable and the function to be called later through that variable. Function pointers can be stored in almost any data segment and thus can be corrupted by buffer overflows that occur within these segments. Vulnerabilities resulting from function pointer corruption are conceptually similar to those resulting from stored instruction pointer corruption.

¹⁰Finding this address can be done manually using a debugger [2].
¹¹These 4 bytes overwrite the EBP and can be any value except for 0x0. 0x0 would be interpreted as the end of string by strcpy, and would break the exploit.
¹²It should be noted this is a description of one type of trampoline and approach for defeating ASLR. There are many others as described in [32] and [40], but the essential idea remains the same. We find an instruction at a static address and use it to redirect execution into attacker-controllable shellcode.

[Figure 2.5: Returning to shellcode via a register trampoline — the stored EIP is overwritten with 0x0804846f (call EAX), and EAX (0xbfd4901c) points to the shellcode in arr; the stored EBP holds 0x42424242.]

The primary difference is in how control flow is transferred; the instruction at the pivot point is a call instruction instead of a ret. The steps in exploiting function pointer corruption are almost identical to those involved in exploiting corruption of a stored instruction pointer. Instead of overwriting the stored EIP, we need to overwrite the function pointer variable. Once again, the value we choose to overwrite it with depends on whether ASLR is in use or not. If it is disabled we can use the address of an attacker-controllable memory buffer whereas if it is enabled then we need to use a trampoline.

When the corrupted variable is used as an argument to a call instruction, shellcode can be executed, directly or through a register trampoline. As the details are so similar an example of this type of vulnerability is not presented.

2.5.2 Components Required to Generate a Direct Exploit

Incorporated in the details of the above exploit are features common to all direct exploits we will consider. In the case of a direct exploit mEIP is implicit and will be the address of the overwritten stored instruction pointer or function pointer.

In order to generate a direct exploit Xd we require the tuple (P, Λ, C, Δ, Γ), where:

1. P is a running process of the program-under-test P.

2. Λ is a valid address in the address space of P that will be used as the pivot destination. This could be the address of a buffer, or of a register trampoline.

3. C is our shellcode, a valid sequence of assembly instructions. The overall goal in creating Xd is the execution of these instructions.

4. Δ is a program input that triggers a bug. Our analysis will be done over the path P(Δ).

5. Γ is our exploit generation function. It takes Λ, Δ and C, and analyses P(Δ).

The goal of this function is to produce a formula where a satisfying solution to its constraints is an input Xd such that P(Xd) |= (B, H, S).

