Program Analysis for Cybersecurity (2020)
This post contains the materials for the Program Analysis for Cybersecurity (PAC) training I’ll be teaching at the 2020 US Cyber Challenge security boot camps. The training is aimed at audiences with little or no experience in program analysis.
Please use the links below to download the training materials.
| Download | Credentials (username:password) | Notes |
|---|---|---|
| Hacking Live Virtual Machine | `pac:badpass` | This virtual machine covers labs 1 and 2. It is known to work only in VirtualBox. |
| PAC2020 Virtual Machine | `pac:badpass` | This virtual machine covers labs 3 through 9. To complete lab 4, be sure that this virtual machine is on the same network as the Windows XP SP3 x86 machine and that the two virtual machines can ping each other. This virtual machine will import into VMware, but VirtualBox is recommended. Note that the malware samples are zip-encrypted with the password `infected`. |
| Windows XP SP3 x86 | N/A | The provided file contains an ISO that can be used as installation media in VirtualBox. You are not provided with a license key (the activation servers have been disabled anyway); when prompted for one, skip the prompt to continue installation and use the 30-day evaluation mode to complete the lab. This machine serves as the victim for the attack performed in lab 4 from the PAC2020 virtual machine. |
Note: When using the VirtualBox player, the mouse integration feature is turned on by default. If the feature is not working properly (particularly in the Hacking Live VM), you can toggle it off and back on by navigating to Input > Mouse Integration.
Note: These materials are a major revision of past course materials, which can be found here.
By the end of this course you should be able to:
- Demonstrate basic exploitation, bug hunting, and evasion skills
- Describe commonalities between vulnerability analysis and malware detection
- Describe fundamental limits in program analysis
- Challenge conventional viewpoints of security
- Confidently approach large third-party software
- Critically evaluate software security products
- Locate additional relevant resources
The course material is broken into several modules covering both defensive and offensive topics.
First we will become intimately familiar with one particular type of bug, a buffer overflow. We will iteratively develop exploits for a simple Linux program with a buffer overflow and examine various mitigations and potential mitigation bypasses before we move on to developing an exploit for a Windows web server called MiniShare.
- Lab 0: Turing machine to buffer overflow simulators
- Lab 1: 32-bit buffer overflow (executable stack)
- Lab 2: 32-bit buffer overflow (ret2libc)
- Lab 3: 64-bit buffer overflow
- Lab 4: Development of a remote exploit for the MiniShare Windows web server (CVE-2004-2271)
Fundamentals of Program Analysis
Next we will discuss program analysis and how it can be used to detect bugs and malware. We will also consider fundamental challenges, and even limits on what is possible, in program analysis. Through this module we will explore analysis techniques ranging from graph-based static analysis, to fuzzing with dynamic analysis, to symbolic execution with SAT/SMT solvers (and more in between!).
- Lab 5: Source code static analysis of CVE-2004-2271 with Atlas
- Lab 6: Fuzzing with AFL
- Lab 7: Symbolic execution with Angr
Since antivirus is used to actively thwart exploitation attempts, we will take a detour to examine techniques to bypass and evade it. Specifically, we will examine what is necessary to manually modify a 2012 browser drive-by attack until it is undetectable by all modern antivirus engines. We will also build a tool to automatically obfuscate and pack our exploit.
This module discusses relationships between bugs and malware, as well as strategies for integrating human intelligence into automatic program analysis. You will be presented with the enormous task of quickly locating malware in a large Android application (several thousand lines of code). Through this activity you will be challenged to develop strategies for auditing something too big to personally comprehend in the time allocated. As a group we will collectively develop strategies to audit the application, and then use those strategies to build automated techniques for detecting malware.
- Lab 9: Human-in-the-loop analysis of large Android applications (with and without source code)
In this final module, we explore future directions in the field and examine some open problems in the context of what we learned in the previous modules.
The labs in this course are designed to push everyone. There will likely be some subject you feel ill-equipped to try, but don't let that be a barrier: attempt each lab to the best of your ability, focus on learning the core ideas behind each activity, and revisit the lab when you have more time. Please send questions, thoughts, and comments to [email protected] and I will be happy to help you find your way to success on any of the labs. There are multiple solutions to each lab, and in some cases there are no right answers!