Saturday, December 18, 2010

Finally some GUI progress

As mentioned in the last post, I think I'm finally getting the hang of Qt. Although my code has many flaws, I have something resembling a scrolling hex view:
Now that I'm starting to get the hang of things, hopefully I'll start to figure out the various issues and get a basic usable GUI rolled out.

Made a more generic dynamically generated text scrolling widget:
A QtDesigner plugin for both a generic implementation and a more specialized hexdump version. It handles window resize events. Horizontal scrolling and some other issues are not handled well yet though. This may be hard to implement since the width will be dynamically generated. Is it annoying to the user for the horizontal scroll range to keep changing? I'll need to solve this problem in more detail as I try rendering the disassembly output, which is variable width. Maybe I'll fix the scroll range at some largish value and allow it to occasionally scroll further if a really long line is present?
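That last idea can be sketched roughly like this (a hypothetical helper, not actual uvudec code): keep a largish fixed minimum range and only grow it when an even wider line has actually been rendered:

```cpp
#include <algorithm>

// Hypothetical policy (not actual uvudec code) for the horizontal scrollbar
// range of a variable width view: a largish fixed minimum that only grows
// when a wider line has actually been rendered.
int horizontalScrollRange(int widestLineSeenPx) {
    const int kDefaultWidthPx = 1024; // the "largish value"
    return std::max(kDefaultWidthPx, widestLineSeenPx);
}
```

This keeps the scrollbar stable for typical lines while still allowing access to the occasional very long one.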
One of the main issues still present is that there is no way for the user to select text, along with other interaction issues. I should be able to detect which line was clicked on with little effort though, which should allow for basic navigation. I personally like to copy and paste from IDA/Olly, so I'd sorely miss that feature. A full export or screen copy is easy enough to implement, so I can do that for now.
Now that I'm more comfortable with Qt, I looked more into what it would take to implement this by replacing the document instance in QTextEdit/QPlainTextEdit. Unfortunately, the key functions needed are not virtual, so without some binjitsu magic (which I'd really prefer not to do...), it is not possible to replace these with dynamic text generation.
The widgets are also starting to be implemented in the main GUI and it is going to go through a major revision. Maybe not usable yet, but getting closer.

Thursday, December 16, 2010

Conquering my Qt fears and semester results

The three main goals I was hoping to accomplish this semester:
-Architecture improvements, especially regarding a plugin system
-License scanner
-Get a basic fully functional GUI

Most of the semester focused on objective 1 and I think I did fairly well in that regard. I feel there is at least a solid foundation for a plugin system and additional interfaces will be added as needed. Regarding objective 2, I didn't get the full application I was hoping for due to various issues, but I did improve the FLIRT support and wrote a research paper on the limitations of FLIRT, demonstrated using my toolkit and verified against IDA. I'll hopefully be releasing it soon; ask me for a draft if you'd like to see it. Right now it's titled "Issues with FLIRT Aware Malware." Back to the objective: I honestly just didn't put the effort into accomplishing objective 2. After seeing a lot of the issues with FLIRT, I also wasn't sure if it really was a good function recognition algorithm to spend time creating signatures for. Presumably though, I can automatically create signatures once I gather up the libraries, so it might not be such a big deal.

I did get a basic GUI going, but not to the level I was hoping for. Several things got in the way of this. First, I knew it was a risk that I didn't know Qt very well. I wouldn't say that I know it well yet, but I'm beginning to become competent. Second, someone offered some help but didn't follow through. This made me focus on other things while hoping they would come through with a code example of the widget I needed. I did get some help here and there, and one of the main things that became clear was that I needed to subclass QAbstractScrollArea for proper support.

So what was the widget I needed? Basically, rendering the entire disassembly area ahead of time had numerous issues: it took too much time, it was a pain to keep track of position, and more. The solution: a custom widget that intelligently renders on demand as the window is scrolled. The problem: the limited Qt work I had done was through Qt Designer using stock widgets.

Maybe this wouldn't have been so bad if I had at least done work with other GUI frameworks and knew what phrases like model view controller (MVC) meant. As an example, I was referred to the Okteta KDE project, which has a widget very similar to what I needed. However, while the code seems to be designed well, there were several issues. First, it was designed to work as a library, but I wasn't really sure how to build it as it was in some KDE/CMake build system hybrid or something. I don't know CMake and get confused easily when "cmake ." or whatever doesn't work due to some error message. This isn't a huge deal because, all things considered, there weren't that many files and I could just use my own build script. It is somewhat annoying though that I have this library installed on my computer, but there seems to be no -dev package for it. Second, it used a very flexible enterprise-style model view controller design. Normally this would be a solution and not a problem, but I don't really know MVC, so it didn't work out well. This might have been solved when I realized there was a Qt Designer plugin for their widgets. Unfortunately, the dependencies seemed to explode and I didn't get a chance to finish trying it. Before, I was importing source files as needed; I may instead just import the entire project (a couple hundred source files, I think).

One thing I realized as I was doing more and more of this was that there were actually two distinct problems in implementing my widget. The first was that I needed to understand QAbstractScrollArea itself: that is, how the viewport object worked and such. Second, I needed to learn how to make a widget that displays hex to set as the viewport. Part of my confusion came from what I'd consider a poor example of QAbstractScrollArea: a widget that scrolls another widget. That example was too simplistic and didn't really show someone unfamiliar with Qt what was going on.

To fix some of these problems, I've been trying to read up on more Qt material such as the MVC architecture. As I was reading through one of the examples, codeeditor, I quickly realized it may strangely show what I need to solve my problems despite looking quite different from what I was looking for. The key has to do with the line numbering on the side. I think I ran into this example before, but maybe didn't realize its significance. This widget demonstrates two things: how to render a text based widget and how to use the viewport. This is essentially exactly what I need to get my application rolling. I may consider at some point in the future using something like the Okteta library, but for now I think I finally have the starting point I need to develop my widgets and at least get something working. I'd like to write a small tutorial on harnessing QAbstractScrollArea for beginners, as I really think there could have been a better example for it.
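Stripped of the Qt specifics, the on-demand idea looks something like this (names are illustrative, not actual uvudec or Qt API): from the scrollbar value and the viewport geometry, compute the visible slice and generate only those lines:

```cpp
#include <algorithm>
#include <cstdio>
#include <string>
#include <vector>

// Illustrative sketch of viewport-only rendering: generate just the lines
// that fit in the viewport instead of pre-rendering the whole document.
std::vector<std::string> renderVisibleLines(size_t totalLines,
                                            size_t firstVisibleLine, // scrollbar value
                                            int viewportHeightPx,
                                            int lineHeightPx) {
    std::vector<std::string> lines;
    // One extra line for the partially visible row at the bottom
    size_t count = viewportHeightPx / lineHeightPx + 1;
    size_t end = std::min(totalLines, firstVisibleLine + count);
    for (size_t i = firstVisibleLine; i < end; ++i) {
        char buf[32];
        // Stand-in for a real hexdump/disassembly line at 16 bytes per row
        snprintf(buf, sizeof(buf), "%08zX:", i * 16);
        lines.push_back(buf);
    }
    return lines;
}
```

In a QAbstractScrollArea subclass this would be driven from paintEvent(), with the scrollbar range set to the total line count rather than a pixel height.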

Sunday, November 14, 2010

Python API alpha

I've fixed some of the issues I was having with Python. A simple example:
[mcmaster@gespenst bin]$ ipython

In [1]: import uvudec

In [2]: uvd = uvudec.uvd.getUVDFromFileName('candela.bin')

In [3]: disassembly = uvd.disassemble()
In [4]: print disassembly[0:200]
LJMP #0x0026
LJMP #0x0DA9
Basically, this is the sort of construct I needed:
%typemap(in, numinputs=0) UVD ** (UVD *temp) {
    $1 = &temp;
}

%typemap(argout) (UVD **) {
    PyObject *to_add = SWIG_NewPointerObj(*$1, $descriptor(UVD *), SWIG_POINTER_OWN);
    $result = SWIG_AppendOutput($result, to_add);
}
I initially had some issues with appending objects to the None generated in the default exception handler (I actually need to look more into why this was required in the first place), but they seem to have gone away now. The issue was that if you appended an object to None, it would return a list with the object being the only member of that list.
Things seem to work now at least at a basic level, but there are a bunch of things in both C++ and Python/SWIG that will need to be cleaned up for this to be convenient to use. I guess the next big thing will be to figure out how to make my iterators translate cleanly. In particular, it doesn't look like they are being compared correctly. Maybe I need to add some sort of generator translation functionality as well?
Example of current iterator code:
itr = uvd.begin()
while itr is not uvd.end() and itr.getPosition() < 0x10:
    print '0x%04X: %s' % (itr.getPosition(), itr.getCurrent())

0x0000: LJMP #0x0026
0x0003: MOV R7, A
0x0004: MOV R7, A
0x0005: MOV R7, A
0x0006: MOV R7, A
0x0007: MOV R7, A
0x0008: MOV R7, A
0x0009: MOV R7, A
0x000A: MOV R7, A
0x000B: LJMP #0x0DA9
0x000E: MOV R7, A
0x000F: MOV R7, A

Wednesday, November 3, 2010

Python API

One of the things I've been playing around with recently is using SWIG to generate a Python API. The following issues have had to be solved:
-Translate my error code return types to exceptions (DONE)
-Call UVDInit() on module load, UVDDeinit() on module unload (DONE)
-Fix some argument parsing related issues (DONE...sorta, my argument parsing code needs some redesign)
-Translate Object ** in stuff to returned instances (in progress)
The first item was done with this code (still some technicalities, but the general idea anyway):
%include "typemaps.i"
%typemap(out) uv_err_t {
    if( UV_FAILED($1) ) {
        SWIG_exception(SWIG_RuntimeError, uv_err_str($1));
    }
}
The next issue was solved with this:
%pythoncode %{
# Seems to work
class InitDeinit:
    def __init__(self):
        # print 'Calling UVDInit()'
        UVDInit()

    def __del__(self):
        # print 'Calling UVDDeinit()'
        UVDDeinit()

# Dummy instance to get created and destroyed
# We could get init to be executed globally...but I don't know about deinit
obj = InitDeinit()
%}
The last issue is only partially solved:
%typemap(in, numinputs=0) UVD **out (UVD *temp) {
    $1 = &temp;
}
This removes UVD **out from the function input arguments and generates a temporary variable, UVD *temp, to pass into the C++ function. The types that need to be translated can be found with:
find -mindepth 3 -name '*.h' -exec fgrep '**' {} ';' |sed 's/^.*[(]//g' |sed 's/[)].*$//g' |awk -F ',' '{ for(i=1;i<=NF;++i) print $i }' |fgrep '**' |fgrep -v '***' |tr -d '[:blank:]' |grep -v '^$' |fgrep UVD |awk -F '**' '{ print $1 }' |sort -u
The problem is I need something to return the value in the temp variable. It's something related to "%typemap(argout) UVD **out", but I can't figure out the exact syntax for the correct result. On a last note, IDAPython manually translates all of their functions. It results in about 8,000 lines of C/C++ code. Although my approach will have some automatically generated code that is much longer, it seems I can get this done in under 500 lines. The current code is about 300 lines, including the SWIG .i file, a Makefile, and a utility .h and .cpp file. With the functions I currently included for wrapping, it's generating a 19,125 line C++ interface file. On that note, the generated code is also much more verbose than if written by hand, but all things considered, if it works I don't care if the automatically generated source file is a bit long. In any case, the effort to support interfaces will (in theory) be as simple as %include "uvd/core/uvd.h", as I've done for the first few test files. Granted, there will likely have to be some special cases, but overall SWIG seems to be pretty powerful at automating this. Part of this will be simply naming the input arguments appropriately, as SWIG can match rules based on argument names.

Also, I started talking to Silvio Cesare about library recognition since he seems to be doing some related research. I mostly focused on implementing the existing FLIRT algorithm, while he's working on trying to improve on some of its failures. For example, someone posted on his blog about malware using FLIRT's simplistic library recognition algorithm to hide itself. Basically, all a virus has to do is match the prefix and write some relocation free code padded with some bytes to create a CRC16 collision, which is relatively easy. These are good reminders about the issues with FLIRT, but it's still a good starting place. A lot of my current interest is in API reverse engineering, which doesn't typically see such attacks.
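For concreteness, here is the CRC16 variant described in the FLAIR .pat documentation, as I understand it (bit-reversed CCITT polynomial 0x8408, initial value 0xFFFF, complemented and byte-swapped at the end; treat the details as unverified against IDA itself). With only 16 bits, brute-forcing a colliding pad is cheap:

```cpp
#include <cstddef>
#include <cstdint>

// CRC16 as described in the FLAIR .pat documentation (my reading; not
// verified against IDA): reversed CCITT polynomial, init 0xFFFF, final
// complement and byte swap. Zero-length input is defined as CRC 0.
uint16_t flirtCrc16(const uint8_t *data, size_t length) {
    if (length == 0)
        return 0;
    uint32_t crc = 0xFFFF;
    for (size_t i = 0; i < length; ++i) {
        uint32_t byte = data[i];
        for (int bit = 0; bit < 8; ++bit, byte >>= 1) {
            if ((crc ^ byte) & 1)
                crc = (crc >> 1) ^ 0x8408; // bit-reversed 0x1021
            else
                crc >>= 1;
        }
    }
    crc = ~crc & 0xFFFF;
    return (uint16_t)(((crc << 8) | (crc >> 8)) & 0xFFFF);
}
```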

On another note, I was reading some details about Stuxnet, which is quite an impressive virus. My friend was shocked that I had missed this given my interests and being employed as a malware analyst when it came out. Basically, I don't read/watch the news or anything. Anyway, if someone had told me about it, I would have said "yes it's possible, but the sheer amount of effort makes it highly unlikely to happen in the near future." Well, someone cared enough and lo and behold, we have a worm using multiple 0-day Windows vulnerabilities and multiple stolen certificates, and to top it off its payload installs rootkits onto PLCs to attack critical industrial processes. Yummy. Maybe I heard about it, thought it was "just another virus," and dismissed it.

I've also decided I'm going to write console object loader plugins. Video games provide an interesting scenario for library recognition. Many of the ROMs and toolchains are kept very proprietary. That is, you can't easily get even the development toolchain, let alone any source code for it. So, what I was thinking to try was to run some clustering algorithms on the ROMs to see if I could identify the stock libraries / assembly routines used within a vendor or given by the manufacturer. Since this would be much easier to do in Python than C++, this was the excuse to write the Python bindings. The alternative would have been to write the data to intermediate files, without the ability to directly interface with the engine.

Finally, I wrote some basic autoconf support. I don't think I'm using the correct macros for everything. I might migrate to CMake at some point, but for now I'd rather have it work for dev than spend a lot of time reworking the build system.

Friday, October 29, 2010

Revenge of the unit test

I wrote some unit tests a while ago, but I was too lazy to run them. I looked into FOSS Continuous Integration (CI) testing solutions, but couldn't find anything that really caught my eye.
I played around some more with CDash, which was at the top of my list. Unfortunately, it currently still seems pretty SVN/CVS oriented, and without previous experience with CDash, the entry barrier seems relatively high. Its natural zone of comfort seems to be SVN/CMake/Doxygen. There is a test server on the Kitware website. I also tried to set up a local server, which I may have had more luck with, but I couldn't get one of the dependencies installed. I'm told that Kitware is moving to git for one of their projects, so support for git might be cleaner in the near future.
I was also recommended to look into CIJoe. Unfortunately, it seems to crash for me. This may be because Fedora runs an olllld version of Ruby. Since this will be a dedicated virtual server anyway, I'll try to set up a VM to give it a better shot. They have a cool logo though and even sell merchandise with it:

In the end, I decided my current needs are very modest and it would be better to get some crude hacked-together server running than nothing at all. So, enter UVBuilder. It can be found at util/uvbuilder. Basically, it uses a JSON config file to checkout, update, build, and run the code. It then e-mails results if there seems to be some change in status. It's very simplistic and has some dumb features, like needing to check out two copies of the code, that can be solved with minor effort if I care.
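For illustration, the sort of JSON config UVBuilder consumes looks roughly like this (the field names here are invented for the example; see util/uvbuilder for the real schema):

```json
{
    "repository": "git://...",
    "checkout_dir": "build/uvudec",
    "build_command": "make",
    "test_command": "make test",
    "email_to": "dev@example.com"
}
```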
I also played some more with using Valgrind to extensively test the code. I had noticed that sometimes I would hexdump a data object and Valgrind would trigger on that object if there was an error. So, I added a function I called "poke" that iterates over a block of memory while effectively doing nothing. However, it makes the control flow appear to depend on the values by executing a statement like if( *ptr ) doNothing(); on each value. I found that std::map might leave uninitialized values though, so I'll have to make more custom Valgrind suppression files if I want this testing to be truly effective.
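The poke trick is roughly this (a sketch, not the actual uvudec code; the counter exists only to make the example observable, and the real point is that the branch forces Valgrind to treat every byte as used):

```cpp
#include <cstddef>
#include <cstdint>

static size_t g_pokeHits = 0;

static void doNothing() {
    ++g_pokeHits; // side effect only for demonstration; could be a true no-op
}

// Make control flow depend on every byte so Valgrind reports uninitialized
// values at poke() time rather than at some later, harder-to-trace use.
void poke(const uint8_t *data, size_t size) {
    for (size_t i = 0; i < size; ++i) {
        if (data[i])
            doNothing();
    }
}
```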
So, next steps are to get all of the existing unit tests to pass (3 regressions, they all seem related to the same issue), and then beef up the unit tests now that I don't have to keep manually running them.

EDIT: all original unit tests pass after various fixes. Now to write a bunch of FLIRT tests.
EDIT: obj2pat unit tests created. And I'll try to get back to the comment below in the near future

Wednesday, October 27, 2010

FLIRT nearing completion

bfd based .pat generation is probably at an acceptable level. The behavior for handling short length names is kind of ill defined, so I'm not sure if there's much I can do about that. Additionally, FLAIR implements some x86 specific linker relocation fixups that I currently don't support, since my code is currently all architecture independent. Also, while the basic architecture is there, much of the code should be moved out of the uvdbfd plugin and into the main engine. If I write a .pat generator for the uvdasm plugin (the configuration file based architecture), this should be accomplished at the same time. I also figured out what one of the bits means in the .sig format, which had been annoying me for a while. The reference .sig dumper had shown some of the function offsets as negative, which didn't make sense to me. However, I finally figured out that this refers to local symbols (i.e. a static function in C/C++).
Generally, I'd consider .pat generation much harder than .sig generation, since the .pat side is very platform specific and I'm guessing the .sig side more or less isn't. I'm hoping that by Monday I should have uvpat2sig working smoothly. For starters, the old signature file dumping code was not integrated into the engine. Now, the signature file is actually loaded and then printed by dumping the loaded database. This is critical since, in order to actually do signature matching, I'll need to load these up.
There are several issues with the current FLIRT engine. First, I haven't nailed down the overall file checksum computation. I'm guessing it's just a CRC16 over the tree section (i.e. excluding the header), but I haven't confirmed this. Second, compression/decompression isn't implemented. This isn't a high priority item and can be done later with presumably little impact on the loading mechanism. Next, the .sig file seems to leave out a lot of items from the .pat file. I need to figure out more accurately what items it leaves out and why. In particular, it looks like it only keeps one (the first?) external reference in a function. Finally, there is some attribute in the .sig file I don't understand. It seems to be some sort of referenced data with an offset and value, but I haven't yet devoted time to figuring out what it refers to.
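For reference, my understanding of the fields in a .pat line, based on the FLAIR documentation (this breakdown is from memory, so treat it as approximate):

```
<64 hex chars: first 32 bytes of the function, '..' per variable/relocated byte>
<ALEN: 2 hex digits, number of bytes CRCed after the pattern>
<CRC16: 4 hex digits> <LENGTH: 4 hex digits, total function length>
:<offset> <name>   (repeated, public names)
^<offset> <name>   (repeated, referenced names)
<tail bytes as hex>
```

(all on one line in an actual .pat file)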

Sunday, October 17, 2010

Misc progress

There were some irregularities between the FLAIR output and my output. The first is that I realized I had misunderstood what FLAIR calls modules. When I had seen FLAIR output before, it had been all Windows junk and every line corresponded to one function. However, the test file I worked with was a .a file. Since a .a file contains .o files, each library module was actually a collection of functions from one .o file. I think this was done because of the way linking groups object files on desktop operating systems. However, this assumption is not valid for embedded systems due to the heavy static analysis done. I'll add a policy option to control how these signatures are generated. Additionally, there was a bug resulting in skipping functions that had multiple defined names. The code did something like size = next position - current position; if size is 0, assume an undefined symbol and skip. However, this 0 size turned out to be because of multiple names at the same file position, and I was mentally mixing up the tables for all symbols vs defined symbols.
Regarding GUI progress, I've been trying to learn how to write a custom QAbstractScrollArea. This is the long term solution to disassembly, hex, and other complex scroll area issues. Instead of generating just the current page using hacks, a proper widget will be implemented that can fix things up as needed. It's not going so well so far, but at least I know roughly what I need to do. It looks like the options are basically to subclass QAbstractScrollArea directly or to subclass QTextEdit or similar. I tried messing with subclassing QTextEdit, and it looks like it would become a hackish mess quickly. Signals were changing some of the values I tried to override, and who knows how it would stand up to future or past Qt revisions. The other solution seems more manageable and the proper way to do things. I'll probably implement a hex viewer first since it's simpler to toy with than the full uvudec engine.

Saturday, October 9, 2010

Object architecture plugin-able

The object engine is now pluginable. This was the last major planned core architecture enhancement for the semester. Object plugins allow, say, creating a plugin for Microchip's C18 compiler if it used some custom format (I think it did, but I don't remember for sure). However, this did result in one hack that I haven't figured out whether it is actually a good idea or really is a hack. There is now a function getPrimaryExecutableAddressSpace() that returns the area where the main program is expected to reside. It seems this will work in most cases for the general "just disassemble the binary" case, so maybe this isn't an issue. Or maybe I should iterate over all executable segments? The issue with the latter is that these often aren't desirable to print; for example, on Linux x86 with ELF files. However, one of the engine enhancements was grouping object file format, architecture, operating system (future placeholder), and debugger (future placeholder) into a UVDRuntime class. This makes it so that, if enough information is given, it is possible to combine attributes to resolve these special cases.
There are still several parts of the code that need to be improved and will be hammered out more as I need to support a more advanced architecture like x86. The specific semantics of how to read data from within a memory section need to be refined. For example, should the encapsulated data object actually do the mappings for the virtual address space? For example, say the address space actually starts at 0x08400000. Should the first valid address be 0 or 0x08400000 on that data object? One issue with the latter is that if an architecture does not provide byte level addressing, a lot of functions may not make sense. It seems most if not all architectures base their addressing on bytes, not words, so this may not be an issue. Since the number of memory spaces should be fairly small with perhaps up to a thousand on a large Windows program with many DLL dependencies, it may not hurt to provide objects that can map the addresses in both ways as a convenience since it looks like each mapper object only consumes about 24 bytes (virtual func table, target data, start target address, end target address, start read address, end read address) which is relatively small.
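A minimal sketch of such a mapper (a hypothetical layout based on the fields listed above, not actual uvudec code): it translates between the raw data offset ("read address") and the virtual target address, in both directions:

```cpp
#include <cstdint>

// Hypothetical sketch of the ~24 byte mapper: start/end target (virtual)
// addresses and start/end read (data) addresses, mapping both ways.
class AddressMapperSketch {
public:
    AddressMapperSketch(uint32_t startTarget, uint32_t endTarget,
                        uint32_t startRead, uint32_t endRead)
        : m_startTarget(startTarget), m_endTarget(endTarget),
          m_startRead(startRead), m_endRead(endRead) {}

    // Data offset -> virtual address (e.g. 0 -> 0x08400000)
    uint32_t readToTarget(uint32_t read) const {
        return m_startTarget + (read - m_startRead);
    }

    // Virtual address -> data offset (e.g. 0x08400000 -> 0)
    uint32_t targetToRead(uint32_t target) const {
        return m_startRead + (target - m_startTarget);
    }

    bool containsTarget(uint32_t target) const {
        return target >= m_startTarget && target <= m_endTarget;
    }

    bool containsRead(uint32_t read) const {
        return read >= m_startRead && read <= m_endRead;
    }

private:
    uint32_t m_startTarget, m_endTarget, m_startRead, m_endRead;
};
```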
Now that the core engine enhancements are done, I will work on the library recognition enhancements. There are several large action items. First, make the BFD output match FLAIR output exactly. Some advanced reference features and my incomplete understanding of the symbol references are getting in the way of that. Second, decouple the FLIRT code from the bfd code. This will mean that if someone can write an object module that finds relocations, it can generate .pat files. Finally, implement library recognition in the analysis engine. This is a lower priority since the generated .pat and .sig files can be used in other programs, so they are not useless by themselves.
I'm also still trying to figure out GUI issues. The short term solution seems to be to base the scroll bar on address only and render only the area currently shown. An example of software that does this is the Metasm GUI.
  • Much faster render time
  • It will be easier to figure out scroll bar position. Changing data analysis does not mean we'd have to recalculate our position. For a large program, and most are probably large enough, the user probably wouldn't notice this difference anyway due to the huge number of elements at each position
  • We must perform our own logic when an address is clicked on instead of using tags. This was probably going to happen eventually anyway in case we jumped address spaces
  • To make scrolling line-based (which seems more intuitive to me), we will have to buffer some of the nearby rendering and feed it line by line
  • Some advanced level text editor/viewer features may be lost (copy text over multiple pages, etc)

Sunday, October 3, 2010

Plugin away at architecture enhancements

Okay, so I wrote this at 1:30 AM or something and I'm really tired, but it's hopefully an at least somewhat coherent brain dump.
With the new semester comes many new goals. One of the major goals was to get a plugin system up and running.
The first part of this was to modularize the architecture instead of assuming the UVD Disassembler config file based architecture, which is obviously not going to work in every case. I spent most of a day or two a few weekends ago on this and it is now done. First, I separated the code out so that it worked as a dynamically assigned UVDArchitecture object. The second part was to make it a plugin. Although I don't like the way load paths and such are treated (security issues, for starters), I have at least a basic system running. Example output:
[mcmaster@gespenst bin]$ ./uvudec.dynamic --help
uvudec version 0.3.0
libuvudec version 0.3.0
Copyright 2009-2010 John McMaster
Portions copyright GNU (MD5 implementation)

Usage: ./uvudec.dynamic
--debug-general (debug.flag.general): set given debug flag
--debug-flirt (debug.flag.flirt): set given debug flag
--help ( print this message and exit
--version (action.version): print version and exit
--plugin ( load given library name as plugin
--plugin-path (plugin.path.append): append dir to plugin search path
--plugin-path-prepend (plugin.path.prepend): prepend dir to plugin search path
--input (target.file): source file for data
--output (output.file): output program (default: stdout)

Loaded plugins (1 / 1):

Plugin uvdasm:
--config-language (config.language): default config interpreter language (plugins may require specific)
python: use Python (default)
--config-language-interface (config.language_interface): how to access a specifc interpreter (options as availible)
exec: execute interpreter, parse results
API: use binary API to interpreter (default)
--arch-file (arch.file): architecture/CPU module file
Which reminds me, the GNU MD5 implementation doesn't need to be in there and should be removed since it's GPL contamination. Plugins can also be added easily by third parties, as the recursive make simply tries to execute all makefiles one level down from the $ROOT/plugin dir. So, all you have to do is copy an example plugin and you can try it out without editing any of the core makefiles. Maybe I'll do something similar to Linux style kernel modules, where you latch onto an installed build system. There is an issue with the current code installing nicely and having modules compile the same, because installed headers will all be prefixed with uvd/, but for dev they currently aren't. I could put all headers in a uvd dir. I was thinking of simply symlinking the uvd dir to the main code dir. This wouldn't work on native Windows builds, but I don't know if I will ever support that anyway; probably Cygwin or MinGW at best. I should look at some projects like libxml2 that I know do this and see how their build system handles it.
The other major area that needs improvement, even more so than architecture, is object file abstraction. Assuming input files were only raw binaries led to lazy addressing throughout the code, where it's assumed all addresses are absolute. This isn't quite true, as RAM is in a different address space than code (on some architectures), but since I'm not doing heavy analysis yet, these haven't been well separated. These improvements will be primarily driven by the need to abstract the object file formats and provide the ability to write plugins.
I've never written a plugin architecture before, so I've played around with several design patterns to try to make it work smoothly. Ultimately, it seems different components need different interfaces to work well. What it seems to boil down to, though, is that analysis events are best broadcast as events: if people need to listen to them, they register a single callback and filter out the ones they don't want. However, with heavier duty objects such as architecture instantiation, I haven't quite decided on the exact mechanism. There are several constraints influencing this decision. First, the ability to register loaders should be accessible to non-plugins, as I haven't strictly required binary loaders and such to be dynamic libraries yet. This requirement may change in the future, as it would make the code more regular. Second, it is probably a good idea for factory methods to automatically unregister themselves if a plugin is unloaded. Third, for interactive purposes, it is desirable to be able to probe how good a match a loader is for an input format without actually instantiating the engine, if possible.
To deal with the first issue, there needs to be a non-plugin way to register creation callbacks. So, simply iterating over the loaded plugins and calling a loadObjectFile or similar function is not enough; at the very least, there would also need to be an additional list of registered callbacks. This seems to favor the latter method exclusively, as it simplifies the code. The main downside is that plugins may have to be more aware of architectural changes. I might make it so that the plugin engine registers loader handlers to give an additional option, if it's not too much effort.
For the second, using the hybrid plugin and registration method would solve this and I might do that. The last issue creates the problem of needing to maintain correlated function sets. It's certainly easier to do this with plugins since they provide coordinated data structures. Otherwise, I would have to create some hash maps during registration to correlate them. And, since no identifier was returned to indicate which engine (plugin) a callback came from, it would be difficult to track which engine provided the best loader. These factors seem to indicate that the best long term solution is to make every architecture and binary format loader its own plugin.
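The registration scheme being debated could look something like this (a sketch of the design with invented names, not the actual uvudec interfaces): factories register a probe callback returning a match confidence, so loaders can be ranked without instantiating engines, and a plugin can unregister its factories on unload:

```cpp
#include <cstdint>
#include <functional>
#include <map>
#include <string>
#include <vector>

// Invented names throughout; a sketch of the design discussed, not uvudec API.
// Each loader registers a probe that scores how well it matches the input,
// keyed by an owner string so a plugin can unregister everything on unload.
class LoaderRegistry {
public:
    using Probe = std::function<int(const std::vector<uint8_t> &)>; // 0-100 confidence

    void registerLoader(const std::string &owner, const std::string &name, Probe probe) {
        m_loaders[name] = {owner, probe};
    }

    // Called when a plugin is unloaded
    void unregisterOwner(const std::string &owner) {
        for (auto it = m_loaders.begin(); it != m_loaders.end();) {
            if (it->second.owner == owner)
                it = m_loaders.erase(it);
            else
                ++it;
        }
    }

    // Rank loaders without instantiating any engine
    std::string bestMatch(const std::vector<uint8_t> &data) const {
        std::string best;
        int bestScore = -1;
        for (const auto &kv : m_loaders) {
            int score = kv.second.probe(data);
            if (score > bestScore) {
                bestScore = score;
                best = kv.first;
            }
        }
        return best;
    }

private:
    struct Entry {
        std::string owner;
        Probe probe;
    };
    std::map<std::string, Entry> m_loaders;
};
```

Keeping the registry keyed by owner addresses the second constraint (automatic unregistration) while the name returned by bestMatch addresses the tracking problem.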
As far as the actual design of the object abstraction itself, I've been looking into several existing object abstraction frameworks. Mostly though I'm going off of binutils since its what I'm most familiar with. They seem to have a bias towards ELF, but do in fact support a lot of architectures and object formats.
On the GUI side, I tried adding clickable address links, but it led to performance issues with large documents. I'll have to look into this some more. I'm thinking of making a custom "scroll" bar that shows the address locations and generates the screen only as needed. I know from time to time I like to copy and paste stuff though, and this might make doing that over areas larger than one screen difficult, though still possible. Also, IDA 6 has been released with a spiffy new Qt driven GUI. I probably won't get to try it out until at least February, if I go work for MIT Lincoln Labs full time. But I did get to look at some screenshots and it looks nice.

Friday, September 17, 2010

On disassembler graphical user interfaces

There is now a minimal GUI up and running:

The demonstration here is not so much that I've yet made a usable program, but that I learned the basics of Qt and can start moving forward with what it will take to make a usable GUI.
As part of this process, I wanted to review some of the leading disassembler environments out there and what I like and don't like about their interfaces. I might interject a few things that are more feature related, but try to stick to usability.
The biggest open source player is probably Metasm. I had forgotten about their GUI component until someone reminded me of it recently. I had mostly looked at them before because, during the course of this project, they released a decompiler module. I only played with it briefly, but porting it to a plugin might serve as a good starting point for me. In any case, this is what I got when I pointed their GUI at objdump:
Hmm it didn't disassemble it. Not a huge deal, it wasn't hard to figure out how to start it disassembling:
And the decompile feature was pretty easy to use as well:
To be fair, they do label it as a "sample application" of the metasm library rather than a full application bundled with metasm. That said, here are my comments:
-I would expect it to disassemble by default. If the user had a truly good reason why it shouldn't, maybe they should use a "open wizard" menu option that gives advanced open options.
-The graph view is decent. I'd like to see thicker lines
-It doesn't seem like the GUI is a main focus. If it wants to be a serious competitor to IDA, a full GUI should be made. But, RE's typically don't like GUI stuff, so I can see why it might not happen
-Needs more keyboard shortcuts
-Couldn't get the comment function to work. For some reason it seems to decompile
-Crashed on me after using it for only a few minutes
-I'd probably prefer tabs to floating UI elements across the desktop. At the very least, I'd prefer them bound to a main window. Some software, such as Xilinx ISE, allows either if people really care.
-The underlying engine seems pretty powerful and with work, a nicer GUI could be made for it. As such, I might consider writing a metasm plugin if it seems I can really leverage some of their features, such as the decompiling. I'd need to see if I can get uvsync and metasmsync plugins though so they can cleanly exchange analysis data and maybe avoid the issue of linking C++ against Ruby.
-I shouldn't need to know Ruby to run it. Since this is intended as a Ruby sample, seems acceptable given their target audience

Here's what Hex-Rays shows for IDA:
Overall, the GUI is decent since it has so many features. I'm not a heavy IDA user as I don't have a license for it, though there is the older version available. I've just used it on occasion as some of the companies I've done work for have had a license I could use. So, some of my complaints may be inaccurate and I might just not know how to customize it / where the correct command is. Some of the areas I think I could improve on:
-Price tag: $$$
-Structure definitions. Supposedly you can import C header files instead of using their janky structure defining tool. I couldn't figure it out. My other complaint about structures is that it doesn't seem you can define a structure on the stack
-Debugging: Far behind OllyDbg. WinDbg is okay, but seriously it should support GDB server out of the box. And WTF is the Bochs debugger?
-The IDA Pro book is pretty clear that the closest thing to undo in IDA is to close the workspace and open it again
-IDC script: fortunately, IDAPython exists. Admittedly though, IDC was made at a time when such tools as Python weren't (well?) developed. Plus, IDC is still a million times better than OllyScript
-Graph view is overall decent. I see it will sometimes go out of its way to interleave unrelated loop basic blocks, I think to make consistent loop entry point shapes.

I use OllyDbg quite a bit for Windows RE. Since I'm too lazy to fire it and VMWare up, I grabbed a screenshot from:
For a free product, it's nice, but I don't really understand why they don't release the source code. Speaking of that, don't they admit to using a GPL'd assembler?!?!? Seems like anyone could clearly request the source code from them. Anyway, what I think:
-Overall, debugging interface is way better than IDA
-I like the CPU view especially
-Stack view is good. I find it has a tendency to force ESP view upon you, and for some programs it's nicer to use EBP view. Showing function args is very nice.
-User interface could be more flexible. I go long stretches without using the binary view below, making it wasted space

I'd use Immunity Debugger more since I like Python, but the phone home bit creeps me out. Granted, I do typically operate on the assumption the machines I'm working on are owned, but it's the principle, I guess. Anyway, here's their screenshot of it:
I don't know enough about this product to really write about it. I do know they tried to conform to OllyDbg conventions, so their GUI is similar. I think theirs is more powerful though, with such additions as the graph view shown above. The real kicker for me not using it is that it seems to claim some compatibility with OllyDbg plugins, but I found they mostly just crash.

The last big one I know of is PyDbg. If I recall though, it depends on PaiMei, which is a tutorial in itself to get running. Or maybe the dependency is the other way around. I think PaiMei in turn requires IDA, another discouragement to me. In any case, when someone showed it to me before, I wasn't very experienced in RE and it was over my head. So, I can't make any real observations about it other than I remember it being difficult to set up due to a large number of dependencies. Here's a token screenshot of PaiMei:
I must say, it has a somewhat appealing UI.

So, in summary, here are the features most lacking in other environments I want:
-Undo. Sure, it might be hard to undo a function call during debugging. But, is it really so hard to undo converting between code and data or undo a function rename?
-Good keyboard shortcuts. I'd like to think of what would be my "ideal" shortcut layout given no history, and then think about what I'd like to do to preserve compatibility. At the very least, I'd like a config menu where you can select IDA or such compatibility mode.
-Efficient use of screen area. While I tend to write source code generously spaced, I like data tightly packed. If someone wants to have a sparser UI, they should be able to drag widgets around to make it fit their taste. IDA and Metasm both have some elements of this, but there is still room for improvement.
-Easy to set up/run. I should be able to run some shell script or executable and it should just run. I shouldn't need to know anything about what language it's implemented in.
-Multi platform. Many of these are Windows only
-Free base product. I don't mind paying for plugins if I want some extra kick, but the base product should be free.

Those are the main points for now. I'll probably update this more as I figure out what's important.

Tuesday, August 17, 2010

FLIRT progress

Last semester, my goal was to get a working FLIRT implementation up and running. However, with it being my most intense semester to date, I didn't get as far as I wanted. Most of the work involved code fixes, including some crude fixes to the Red Plait (rpat) pattern generator.
Really though, rpat needed a full rewrite to be truly useful in my project. While I could have maintained it in its original C form, I really would rather have a highly functionalized version using STL data structures. As of commit e719d6ddba27bf1fa4d1ace18d3426a356d19fa5, this is done. Example output:
[mcmaster@gespenst bin]$ ./uvobj2pat.dynamic --input=../obj2pat/main_d.o
5589E583EC18E8........C1E81F84C07426C744240C........C74424085800 2B E01B 004B :011F _Z14initProgConfigv ^0007 _Z21initFLIRTSharedConfigv ^0016 ^0026 ^0032 uv_err_ret_handler ^0039 g_config ^0040 5589E583EC18........FFC1E81F84C07426........5E0100........085800........240406000000C7
5589E55383EC24C745E8FFFFFFFFC745EC00000000C745F000000000C745F4FF FF F3C5 0349 :01A4 _Z6uvmainiPPc ^0024 _Z13UVDGetVersionv ^0037 strcmp ^0045 _Z13UVDGetVersionv ^005B ^0060 printf ^0065 stdout ^006D fflush ^0072 _Z7UVDInitv ^0081 ^0091 ^009D uv_err_ret_handler ^00A7 g_config ^00B8 ^00C8 ^00D4 uv_err_ret_handler ^00DE _Z14initProgConfigv ^00ED ^00FD ^0109 uv_err_ret_handler ^0127 _ZN9UVDConfig9parseMainEiPKPc ^0138 ^0148 ^0154 uv_err_ret_handler ^0173 ^017F printf_debug_level ^0186 g_flirt ^018B _ZN8UVDFLIRT8getFLIRTEPPS_ ^0199 ^019E puts ^01AF g_flirt ^01BB ^01CB ^01D7 uv_err_ret_handler ^01E1 g_uvd ^01ED ^01FD ^0209 uv_err_ret_handler ^0213 g_uvd ^0222 ^0232 ^023E uv_err_ret_handler ^0248 g_uvd ^0261 ^0271 ^027D uv_err_ret_handler ^0290 _ZNKSt6vectorISsSaISsEE5emptyEv ^029B ^02A0 puts ^02A5 _Z7UVDHelpv ^02AD ^02BD ^02C8 uv_err_ret_handler ^02DD ^02E2 puts ^02EF _Z9UVDDeinitv ^02FE ^030E ^031A uv_err_ret_handler ^0324 ^0334 ^033F uv_err_ret_handler 5589E553........45E8FFFFFFFFC745EC00000000C745........00C745F4FFFFFFFFE8FC........C3E82DFEFFFF895C2404890424E8FCFFFFFF........C0........E8........89C3E80C........5C........2404C7042460000000E8FC........00000000890424E8FCFFFFFF........FFC1E81F84C07429........57010000C744........0000C744240406000000C70424........E8FCFFFFFFE99D020000A100........45EC837DEC007529........57010000C744........0000C744240406000000C7........FFFFE8FCFFFFFFE966020000........FFC1E81F84C07429........57010000C744240881000000C744240406000
5589E583E4F083EC20C7442404........C7042405000000E8........8B450C 3C F345 005C :04ED main ^000D ^0019 printf_debug_level ^002B _Z6uvmainiPPc ^0037 ^0043 printf_debug_level 5589E583E4F083EC20C744........0000C70424050000........FFFF8B450C894424........890424E8FCFFFFFF8944241CC74424044B010000C7
Which seems consistent with the assembly:
4ed: 55 push %ebp
4ee: 89 e5 mov %esp,%ebp
4f0: 83 e4 f0 and $0xfffffff0,%esp
4f3: 83 ec 20 sub $0x20,%esp
4f6: c7 44 24 04 3e 01 00 movl $0x13e,0x4(%esp)
4fd: 00
4fe: c7 04 24 05 00 00 00 movl $0x5,(%esp)
505: e8 fc ff ff ff call 506
50a: 8b 45 0c mov 0xc(%ebp),%eax
50d: 89 44 24 04 mov %eax,0x4(%esp)
511: 8b 45 08 mov 0x8(%ebp),%eax
514: 89 04 24 mov %eax,(%esp)
517: e8 fc ff ff ff call 518
51c: 89 44 24 1c mov %eax,0x1c(%esp)
520: c7 44 24 04 4b 01 00 movl $0x14b,0x4(%esp)
527: 00
528: c7 04 24 05 00 00 00 movl $0x5,(%esp)
52f: e8 fc ff ff ff call 530
534: 83 7c 24 1c 00 cmpl $0x0,0x1c(%esp)
539: 79 07 jns 542
53b: b8 01 00 00 00 mov $0x1,%eax
540: eb 05 jmp 547
542: b8 00 00 00 00 mov $0x0,%eax
547: c9 leave
548: c3 ret

However, the correctness of this won't be known until I actually try processing it. This version should be much easier to maintain and be more stable. There are some assorted points to fix, but it was definitely a step in the right direction. Next steps will be to implement pattern generation for libuvudec based disassembling and verify FLAIR's makesig produces the same output and/or is valid input to pat2sig. One of the challenges I will of course face is how to support all of the architectures. I'm thinking the best way will be, for any arch IDA does not support, to put a different magic number at the start of the file and add a new architecture field.
Part of the reason I got working on this again is someone pointed out to me that the .sig binary format was posted here. I was hoping to do this last semester, but mostly due to time constraints, didn't get to it. With the format out there now, it will be much easier to implement a pat2sig equivalent and the matching loader.
Finally, I made a quick stab at a Qt GUI:
All it does is display the function list. I was trying to figure out how to get the window to resize to fit the whole screen, but it needs more work. I don't have a lot of GUI experience, so there will be a learning curve to get things decent. Hopefully though, if this project becomes popular, it shouldn't be too hard for someone to design something better and swap it out. The major issue that became readily apparent is that the engine is currently designed for one-shot analysis. It cannot respond to annotations. Thus, some sort of analysis event system will have to be implemented. This will be important for plugins as well, as they may need to respond to various engine events if they want to do more than just respond to keystrokes. In the near future for the GUI, I'd like to:
-Get the window to resize correctly
-Display a dead listing
-Ability to double click on functions and have that location appear in the dead listing
-Write comments
-Load and save a workspace (I briefly toyed with this)
-Load FLIRT signatures. This may be sooner than later as I'm currently writing uvpat2sig, which will mean I'll have at least most of a first-revision signature engine up.
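The analysis event system mentioned above could start as a simple observer registry. A minimal sketch (the event name below is made up for illustration; the real engine is C++):

```python
class AnalysisEvents:
    """Sketch of an analysis event system: the GUI and plugins subscribe
    to named events, and the engine fires them when annotations change."""

    def __init__(self):
        self.handlers = {}  # event name -> list of callbacks

    def subscribe(self, event, callback):
        self.handlers.setdefault(event, []).append(callback)

    def fire(self, event, *args):
        # Notify every subscriber; unknown events simply have no handlers
        for callback in self.handlers.get(event, []):
            callback(*args)
```

With something like this, a rename in the GUI fires an event and both the listing view and any interested plugins get a chance to react, instead of the current one-shot analysis.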

Sunday, July 25, 2010

New utility program: uvstructoffset

Under util/uvstructoffset/
Here and there I find the need to verify the APIs I create have the correct structure alignment. I would do this manually with something like:
printf("some_member: 0x%.4X\n", (unsigned)offsetof(struct my_struct_t, some_member));
But this can be automated if there was a program to parse the C code. pycparser seemed like the most convenient C parser to use. After some coding, I could transform structures:

struct sig_header_t {
uint8_t dell1800FP;
short atomicForceMicroscope;
uint16_t fearAndLoathingInLasVegas;
uint8_t oatmeal;
char paperTowels;
uint16_t jointedGlasswares;
uint8_t micrometer;
int orangina;
char accidentWaitingToHappen[16];
uint8_t SATADrive;
uint16_t fastSteeringMirror;
uint32_t hamburgers;
} __attribute__((__packed__));
Into this:
struct sig_header_t
dell1800FP @ 0x0000
atomicForceMicroscope @ 0x0001
fearAndLoathingInLasVegas @ 0x0003
oatmeal @ 0x0005
paperTowels @ 0x0006
jointedGlasswares @ 0x0007
micrometer @ 0x0009
orangina @ 0x000A
accidentWaitingToHappen @ 0x000E
SATADrive @ 0x001E
fastSteeringMirror @ 0x001F
hamburgers @ 0x0021

Formatting doesn't line up on the web, but you get the idea. The C parser only parses the structure and doesn't try to guess the actual offsets. That part is done with gcc, since that is really the only reliable way to account for all of the different data types and alignment rules. For decompiler use, config files will have to be referenced for data sizes and alignment.
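One way to let gcc do the offset math, as described above, is to emit a tiny C program from the parsed member list and compile/run it. A sketch of the generator half (the struct name, member names, and header filename here are hypothetical stand-ins for what the pycparser AST walk would produce):

```python
def gen_offset_printer(struct_name, members, header="uvd_structs.h"):
    """Generate C source that prints each member's offset via offsetof.

    struct_name/members would come from the pycparser AST; "uvd_structs.h"
    is a made-up header name for illustration.
    """
    lines = [
        "#include <stdio.h>",
        "#include <stddef.h>",
        '#include "%s"' % header,
        "int main(void) {",
    ]
    for m in members:
        # e.g. printf("oatmeal @ 0x%.4X\n", ...offsetof(struct sig_header_t, oatmeal));
        lines.append('    printf("%s @ 0x%%.4X\\n", '
                     '(unsigned)offsetof(struct %s, %s));' % (m, struct_name, m))
    lines += ["    return 0;", "}"]
    return "\n".join(lines)
```

Compiling and running the generated program with the real gcc then yields exactly the `member @ offset` listing shown above, with padding and `__attribute__((__packed__))` handled by the compiler itself.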
After working with JSON, I've decided it is the sort of data exchange format I've been looking for for a while. I've never been a fan of XML because I find it overcomplicated to parse and work with for general use. If I need any sort of convenient data exchange between programs where performance isn't an issue but convenience is, I'll probably use JSON as the format. As required, these can be migrated to higher performance formats. This will include configuration files and structure definitions.
Eventually I will be needing to parse out structures in the decompiled/disassembled files. Since these files don't need to be parsed often and for other data exchange reasons, I will be using the aforementioned Python parser to output a JSON structure definition.

Saturday, July 10, 2010

ELF object dump repaired

As commented in the previous post, the old ELF code was very ugly and a thorn in my side. It is now fixed, running better than ever, and much easier to maintain. Example object file:
[mcmaster@gespenst bin]$ objdump --syms --reloc analysis/sub_0EC3.elf

analysis/sub_0EC3.elf: file format elf32-i386

00000000 l df *ABS* 00000000 candela_pltl1_rev_3.bin
00000000 l d .text 00000000 .text
00000000 *UND* 00000000 sub_0FCB
00000000 *UND* 00000000 sub_0FE5
00000000 *UND* 00000000 sub_101C
00000000 *UND* 00000000 sub_75E3
00000000 g F .text 00000107 sub_0EC3

0000009e R_386_16 sub_0FCB
000000c0 R_386_16 sub_0FCB
00000103 R_386_16 sub_0FE5
000000a1 R_386_16 sub_101C
000000c3 R_386_16 sub_75E3
I'm still using the 386 object file format and still unsure how I want to deal with that in the future. For now it seems like the logical thing to do so that tools like objdump will play with it. I could in theory generate a patch for binutils, but that would be annoying for decompiler users and a pain to maintain. Maybe I can make a plugin patch for binutils and try to get it into the mainline? It has some very limited (read: not useful enough) plugin capability. I didn't look into it too much, but from a quick grep only nm and ar support binutils plugins. At first it didn't look like there is any notion of full architecture plugins, but now I'm thinking it's just that, for some odd reason, only certain tools allow use of uninstalled plugins. Maybe I'll send an e-mail to their mailing list for advice. clang/llvm is a more modern project and might work better with this stuff. Unfortunately, I haven't yet spent any time learning it. From a quick look, they expose a lot of API stuff that might be good if I wanted to let my arch files compile executables, but they seem to still depend on binutils for day to day object inspection. clang can do some linking, but I don't know where this functionality comes from. For the short term, I might consider making a very basic binutils skeleton to work with these files, probably in Python. I'm afraid this will feature creep and never get replaced though, and is ultimately not the right way to do things.

After some more thinking and research, I've decided by far the easiest and cleanest way to do this is to simply provide wrapper programs over binutils. The basic idea will be to parse the files out of the binutils command line and redirect them to copies temporarily fixed up to be compatible with binutils. Basically, this will involve temporarily changing the object type to EM_386, since that's what all constants will be based off of. While this will obviously run into a number of issues, it unfortunately seems by far the cleanest solution.
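The fixup described above amounts to rewriting the ELF header's e_machine field to EM_386 in a temporary copy before handing it to binutils. A minimal sketch of the patching step (assumes a little-endian ELF; e_machine sits at byte offset 18 in both ELF32 and ELF64 headers):

```python
import struct

EM_386 = 3          # standard ELF machine constant for Intel 80386
E_MACHINE_OFF = 18  # e_machine offset in both Elf32_Ehdr and Elf64_Ehdr

def patch_e_machine(data, machine=EM_386):
    """Return a copy of an ELF image with e_machine rewritten.

    Sketch only: assumes a little-endian ELF and touches nothing else,
    so stock binutils tools will accept the custom-architecture file.
    """
    out = bytearray(data)
    out[E_MACHINE_OFF:E_MACHINE_OFF + 2] = struct.pack("<H", machine)
    return bytes(out)
```

A wrapper program would copy each input file, apply this patch to the temporary copy, invoke the real objdump/nm/etc. on it, and clean up afterwards.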

Sunday, July 4, 2010

Summer progress: installer, licensing, fixing bugs

This summer I'm doing malware analysis. It's the first time I've held a full time reverse engineering job, and I'm getting time to try out a lot more tools and develop skills. These will undoubtedly be valuable to this project.
The following is written with a huge patch queue sitting on my laptop (probably over 4k lines of git diff, maybe more once I'm done). I was hoping to stabilize the ELF code before committing; hopefully my laptop won't blow up in between.
I played around briefly with Installjammer (website, github) and at first glance it seems pretty nice. It does seem more targeted towards Windows, but I was able to make an InstallShield type installer within minutes for Linux and was impressed. The biggest thing lacking I saw at first glance was no support for .rpm or .deb files, so I might consider asking what it would take to get those supported if I am still interested. Screenshot of the quick test installer running on Fedora 13:
I will probably be dropping support for SpiderApe in the near future. My previous system had unstable Python code which, towards the end of its life, I was working on fixing, but this was unbeknownst to me when I first tried using the Python APIs, and I was quite disappointed with them. I'm still disappointed in their lack of good error handling (Python_Init() or w/e it's called returns void), but when it works it seems to work decently. Fedora 13 doesn't seem to ship a static lib, so I might try to do a prefixed installation to grab that.
On that note, I fixed the build process to be a lot cleaner with regards to using PREFIX variables on dependencies. The original reason I started playing with using packages uninstalled was mostly that binutils doesn't export everything I'd like it to, which was needed for the binutils .sig generation when using rpat as a reference.
For a number of legal reasons, I'm going to dual license the project under BSD/GPL. I still like the freedom that BSD provides, but for a couple of reasons, including the otherwise inability to distribute binaries, I'm going to tack on a GPL licensing option.
The ELF/object generation system is being rewritten. It used a fixup based linking approach that just didn't work well. The code is much larger than it would have been using a several pass approach. This is likely responsible for the errors that have been occurring; the rewrite seems nearly done and is resulting in much cleaner code.
Tonight I will hopefully finish the ELF rewrite or be very close. But, tonight is the night of fire, and I like fire. So I might be busy. Happy 4th!

Monday, May 31, 2010

Bug Trac-ing

I've been meaning to try out Trac for some time and decided this project would be a good excuse. After looking around for a bit, I found free Trac hosting at Assembla, which seems pretty decent. While I would have preferred it through a non-profit organization, I decided it's not a big deal since I can just export the Trac database and move hosting later if I end up liking Trac. The uvudec bug tracking system can be found here. The system just has a few issues in it now and I'll be building up the database over time each time I see one of my XXX or FIXMEs in code.

Friday, May 14, 2010

uvromstat: statistical ROM image integrity testing

I've become aware of several different issues that can arise during ROM imaging. There are three main classes of errors I've come across:
-Blank or nearly blank ROM
-Missing address pin. Results in repeat blocks
-Bad selection switches, improper placement, or other read routing issues. With my Willem programmer anyway, this results in bad rips with several interesting, human recognizable properties. More on this later.
Blank ROMs are easy to detect: I do a simple frequency analysis. If only one byte value shows up, it's definitely blank. If the number of distinct values is below a certain threshold, defaulted to 16, I consider it a mostly blank ROM that is either severely damaged or otherwise poorly used. Frequency distribution from a nearly blank ROM:
Raw data looks something like this:
00000000 04 7f 25 7f 45 7f 7f 7f 04 7f 7f 7f 7f 7f 7f ff |..%.E...........|
00000010 04 7f 7f 7f 67 7f 7f ff 65 7f 7f ff 7f 7f 7f ff |....g...e.......|
00000020 04 7f 7f 7f 7f 7f 7f 7f 65 7f 7f 7f 7f 7f 7f ff |........e.......|
00000030 45 7f 7f 7f 7f 7f 7f ff 7f 7f 7f ff 7f ff 7f ff |E...............|
00000040 04 7f 7f 7f 7f 7f 7f 7f 45 7f 7f 7f 7f 7f 7f ff |........E.......|
00000050 45 7f 7f 7f 7f 7f 7f ff 7f 7f 7f ff 7f ff 7f ff |E...............|
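The blank-ROM check described above boils down to counting distinct byte values (a sketch; the threshold default mirrors the 16 mentioned earlier, and the category names are just illustrative):

```python
from collections import Counter

def blankness_check(data, threshold=16):
    """Classify a ROM image by the number of distinct byte values.

    One value -> definitely blank; fewer than `threshold` distinct
    values -> mostly blank (damaged or poorly used); otherwise ok.
    """
    distinct = len(Counter(data))
    if distinct == 1:
        return "blank"
    if distinct < threshold:
        return "mostly blank"
    return "ok"
```

The hexdump above, dominated by 0x7f with a handful of other values, would land in the "mostly blank" bucket.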
A missing address pin is also fairly easy to detect. I'll assume that a disconnected pin holds a fixed value, say 0. I do not consider the case where it floats and randomly selects data. For the assumed behavior, just look for repeated blocks of data of size 2**n. For example, say the original data was 0123 4567 89AB CDEF. If the highest address pin was disconnected, we would get 0123 4567 0123 4567. If the second highest address pin was disconnected, you'd get 0123 0123 89AB 89AB. Both broken would result in 0123 0123 0123 0123.
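The repeated-block check can be sketched as follows: if address bit n is stuck at 0, the byte at every address with bit n set duplicates the byte at the same address with bit n cleared (a sketch assuming a power-of-two image size, as ROMs are):

```python
def missing_address_pins(data):
    """Return address bits that look stuck at 0 (disconnected pins).

    If address bit n is held at 0, address a reads back data[a & ~(1<<n)],
    so within every aligned block of 2**(n+1) bytes the upper half
    duplicates the lower half.
    """
    suspects = []
    bit = 0
    while (1 << (bit + 1)) <= len(data):
        half = 1 << bit
        period = half * 2
        if all(data[base + half + i] == data[base + i]
               for base in range(0, len(data), period)
               for i in range(half)):
            suspects.append(bit)
        bit += 1
    return suspects
```

On the 0123 4567 0123 4567 example above, this flags the highest address bit; a perfectly repeated image can flag several bits at once, matching the multiple "address pin is probably missing" lines in the uvromstat output below.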
For the last case, this can occur several ways. The first observed was when a 27C EPROM was put into the flash socket by mistake. A separate adapter board is actually required to rip 27C PLCC32 chips. Additionally, similar output appears, for some reason I have yet to look into, when no chip is inserted into the Willem and it is told to rip anyway. An excerpt from such a rip of a 27C put into the flash socket on a Willem:
000003d0 d5 d5 00 00 d5 d5 00 00 d6 d6 00 00 d7 d7 00 00 |................|
000003e0 e0 e0 00 00 e1 e1 00 00 e2 e2 00 00 e2 e2 00 00 |................|
000003f0 e3 e3 00 00 e4 e4 00 00 e5 e5 00 00 e6 e6 00 00 |................|
00000400 04 06 04 06 04 06 04 06 04 06 04 06 04 06 04 06 |................|
00000500 14 16 14 16 14 16 14 16 14 16 14 16 14 16 14 16 |................|
00000600 04 06 04 06 04 06 04 06 04 06 04 06 04 06 04 06 |................|
00000700 14 16 14 16 14 16 14 16 14 16 14 16 14 16 14 16 |................|
Graphical frequency analysis (optional command line option that uses gnuplot-python) showed some obvious trends on such chips:

These chips skip frequencies in blocks. In the first one, the first 8 bytes are represented, the second 8 aren't. This continues for the rest of the set. The same trend is seen in the second one, but with a block size of 128 instead of 8.
These algorithms are being developed in util/uvromstat/ (UVet ROM STATistics). Here are some of the ones it detected:
[mcmaster@gespenst ROM]$ uvromstat $(find . -name '*.bin' |fgrep -v .svn) |fgrep ERROR
./buffer/quantum_first_prob_bad.bin: ERROR: expect did not position for rip correctly! (27C check)
./buffer/quantum_first_prob_bad.bin: ERROR: block size: 16
./buffer/prompro7_junk/27C512_IBM_sticker.bin: ERROR: expect did not position for rip correctly! (27C check)
./buffer/prompro7_junk/27C512_IBM_sticker.bin: ERROR: block size: 256
./buffer/prompro7_junk/27C512_IBM_sticker.bin: ERROR: an address pin is probably missing
./buffer/prompro7_junk/27C512_IBM_sticker.bin: ERROR: repeated block size: 0x2000 (8192), address bit: 0x000D (13)
./buffer/prompro7_junk/2732_1.bin: ERROR: few distinct numbers, expect nearly blank
./buffer/prompro7_junk/2732_1.bin: ERROR: expect did not position for rip correctly! (27C check)
./buffer/prompro7_junk/2732_1.bin: ERROR: block size: 8
./buffer/prompro7_junk/27C256_0.bin: ERROR: an address pin is probably missing
./buffer/prompro7_junk/27C256_0.bin: ERROR: repeated block size: 0x2000 (8192), address bit: 0x000D (13)
./buffer/prompro7_junk/27C256_0.bin: ERROR: an address pin is probably missing
./buffer/prompro7_junk/27C256_0.bin: ERROR: repeated block size: 0x4000 (16384), address bit: 0x000E (14)
./arch/8051/80C32/cisco_catalyst_5000__supervisor_I/80C32.bin: ERROR: an address pin is probably missing
./arch/8051/80C32/cisco_catalyst_5000__supervisor_I/80C32.bin: ERROR: repeated block size: 0x20000 (131072), address bit: 0x0011 (17)
Another thing to note is that if one check fails, others might as well. This did in fact discover several images that I didn't realize had issues. I had recognized during my initial rip that the Catalyst 5000 image was having issues. I was using a breadboarded ripper at the time and could not figure out why the image appeared to repeat twice. It is possible that I messed something up and would double check with the Willem now, but it appears that is the only chip out of the set from that board I didn't keep. In any case, it does appear to be unused at the end, so the image is not likely truncated.
In conclusion, even a simple utility has proven useful for discovering bad rips. When I rip ROMs in the future, I'll try to run them through it to discover common issues before I throw the ROM away.

Thursday, May 13, 2010

ROM ripping rampage

When I bought the PromPro-7, it came with a lot of EPROM and related MCU chips. Unfortunately, the picture of it didn't migrate over when I switched accounts with this blog and I can't find the original picture. Since the PromPro-7 didn't work very well, they never got ripped. Although I really wasn't looking to get a parallel port unit, I finally got the tried and true Willem programmer since other things weren't working out. My Linux file server hasn't been up for a while since the software only works in Windows and I didn't feel like reconfiguring all of my shares and drives.
In addition to the chips the Prompro 7 came with, I've had a bunch of other boards and chips laying about my room. The largest was probably a stack of boards from a Siemens PBX that was thrown out and I got to torture. One of the first stumbling blocks I encountered from this was that there was a 1Mb EPROM on it. This is an issue since the stock Willem programmer can only handle 512Kb EPROMs. So, I ordered an adapter board and ripped it. I had a few other large EPROMs, so this turned out to be a good idea. One notable was from a Rockwell Collins Pro 2000-SL transceiver/GPS unit that I found cheap ($20 shipped) on eBay a bit back. This unit should prove interesting for a variety of RE purposes since it seems to be military grade. Here is a picture of it:
The unit is composed of 4 main board assemblies: power supply (+module), CPU board, GPS translator, and GPS. The GPS translator board is the only board that contained an easily accessible (socketed) ROM:
ROM data like GPS_SOLUTION_LOCK gives it a military feel. A 28F flash chip is also visible. Especially since I broke some of the flimsy cables on the PSU module, I might just rip the unit to shreds and rip all of the chips, since it's probably a better use of time than getting the unit running anyway. With some luck, there will be assorted ASICs using rare standard cell libraries that I can submit to Silicon Zoo.
There are several types of chips that are still in my rip queues though. The first are 24C SOIC EPROMs. This is what a small group of us has been doing to experiment with some radio units we've been messing with. Hopefully, more on that later as it develops. Although I have some adapter boards, I don't feel like soldering to them each time I want a clean rip. I might be able to get away with just gently placing them on the pads, but really I need a ZIF socket. I'll hopefully be ordering 150 mil and 200 mil SOIC8 compatible (probably SOIC16 and I'll just slide it into position as needed) ZIF sockets.
The second type are 27C PLCC32 type EPROMs. I didn't realize originally that I needed a separate adapter for those and that the flash sockets wouldn't work. Next, I have the stack of MCS-48 combo MCU/EPROM ICs that also require an adapter to work with the Willem. Finally, I have some Altera ROMs I'm trying to rip. The Bus Pirate might be a good choice, but I actually have some old Altera programmers (Altera Programming Units (APU)) that I was hoping would be a nice way to rip them. Unfortunately, it seems I can't read the ROMs unless I set up a project or something. Lame. Maybe there is a better behaved command line utility in the suite, or I can port the code to an open source Linux program.
Of all of the boards and ROMs I've ripped, my favorite so far has come from an IBM RS/6000 43P Model 260. In one of the ROMs, I found the string:
16-6, Go Horns !
Take that, Dave
Probably thought that nobody would ever read that. This ROM I think was labeled 0373.

Saturday, May 8, 2010

Install magic and RCOS goes 1337

I did a small amount of work to make a basic installable form. Although the project is prepped for automake, I haven't bothered with that yet and I'm not sure when I will. That aside, this is what I did.
Default install location is /opt/uvudec/. A symlink, /opt/uvudec/current, is also created pointing to the last installed version. This then contains a bin and an arch directory. The bin directory contains the executable binaries and library files necessary to run uvudec. The arch dir contains the architecture configuration files that are used to parse binaries into intermediate representation. I might later move the libraries into a lib dir instead, but this is a minor change. The biggest issue right now is that if a dynamic build is used, the binaries need to know how to find the libraries. I personally don't like using rpath because it's hard to change without specialized tools. I'd consider LD_LIBRARY_PATH, but that has its own set of issues. The main one is that I'd have to have helper scripts or such to set it, which is also somewhat annoying. However, this is the approach many enterprise grade installable Linux software packages such as Firefox take, and I will probably do that eventually.
Using makefile configuration variables, and later through autoconf, you can do the standard PREFIX and such that allows you to install applications to the directory of your choice. However, I personally like programs that I can shove in any folder on the hard drive and have still work without hard coded paths. So, I added some detection of where the executable is and derive the install directory from that.
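The executable-relative lookup described above can be sketched like this (Python for brevity; the real implementation is C++, and the example path just mirrors the /opt/uvudec layout described earlier):

```python
import os

def install_root(exe_path):
    """Derive the install root from the executable's own location,
    e.g. /opt/uvudec/current/bin/uvudec -> /opt/uvudec/current,
    so the tree can be dropped anywhere without hard-coded paths."""
    exe = os.path.realpath(exe_path)  # also resolves the version symlink
    # bin/ holds the executable, so the install root is one level up
    return os.path.dirname(os.path.dirname(exe))
```

From the returned root, the program can then locate its arch/ configuration files and bundled libraries relative to itself.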
On a random note, in honor of RCOS, I added a second random ASCII art choice to the --useless-ascii-art option (the first is a UVNet logo). Using an ASCII art generator at , I made a 40 X 40 char ASCII art RCOS logo and then cleaned it up a bit. Here is the current art:
I might remove the dots and leave as white space instead.

Sunday, April 25, 2010

New blog, undergraduate research symposium

After fighting an extensive battle, I finally got my Dia format poster to plot. I had to do a series of workarounds, but got something to work. The biggest issue was that, since these are large images, some operations took a while. Maybe I should have used some smaller test images? Dunno, never thought I'd have so many problems. Here's my rant about what it took for me to get my poster plotted. I just sent it off, and I don't even know yet if it went through, but if it wasn't correct, it should be minimal effort from here.

-After some experimenting, tiff seemed to be a good format that both my computers and the library Windows computers were happy with
-Dia export under many formats (though not all, jpg included) is buggy under Linux, but appears not to be under Windows. The bug is that it will misalign chars randomly. Created a .dia under Linux and moved it to a Windows VM
-Gimp under Windows doesn't seem to export svg to tiff with the correct resolution, but Linux Gimp does. Move the .svg back to Linux
-Load the .svg in Gimp. Don't change the resolution; it will just get angry and waste a lot of time. Even if you do wait long enough, which is several minutes (for just changing an option without hitting anything? what the heck?), it seems to aspect lock it to 1:1, screwing up the poster. Leave it at the default of 90 pixels/in
-Once loaded, all operations are smooth. Export as a .tiff
-Copy the tiff to a USB stick and move it to a Windows computer
-Plot the stupid image with ARCH E type paper. Seems to only allow setting 600 dpi...w/e, hopefully that will still print and not run out of memory
For some reason, Gimp can actually export .jpg's fine under my version of Linux. Here is a scaled down version of the final poster:

Oh...and I wish Dia had spellchecking support...but I'm probably too lazy to add it

Thursday, April 8, 2010

Friendly competition

There is finally another open source decompiler on the market! Metasm has released a C decompiler module for its assembly manipulation suite. Although I had heard of it before, I was not terribly familiar with this toolsuite. So, I will be looking into it in the near future to learn what worked well in what they did and what I can improve upon.
The IDA .sig format work has made some good progress. I need to get a block of time to finish it though.

Wednesday, March 31, 2010

FLIRTing with binaries

Worked out the bugs from porting the rpat utility into my engine as a BFD based .pat generator. Basically, in the "can engine parse format" check I created, there was a faulty if block along the lines of:
abfd = bfd_openr("in.obj", NULL);
if( abfd != NULL )
Which obviously isn't correct, as it should be an error if we *didn't* get a bfd pointer. The second error was related to mixing up a constant I defined for the leading signature length, which is fixed at 0x20, with the total signature length, which is somewhat fixed at 0x8000. I don't know the exact reason for this limitation at this time, but I've seen it mentioned in several different sources.
Submitted an abstract to the URP symposium; we'll see how that goes. I'm a bit annoyed that they wanted .doc instead of, say, .pdf or raw text, but at least it wasn't .docx, in which case I probably would have skipped the whole thing out of convenience and principle. I have a Signals and Systems exam that day, but the symposium isn't until the afternoon, so it should be fine.
I've also made some progress on better EPROM rippers. Most images I have were ripped by breadboarding an 8051 based MCU circuit using the LITEC 8051 dev kit as a base. I have a layout for a board, but need time to etch it. Alternatively, I could (and should) just solder up a perf board based ZIF socket reader using the expansion module on the LITEC compatible boards. Also, I got an EPROM programmer/reader in the mail today. It's a Prompro 7:

It seems a bit unstable on first use. Maybe it needs some old filter caps replaced and such, as the unit is fairly old. I'll try to give it a brief service this weekend if the two midterms on Monday don't keep me busy. And, as further reason to make the 8051 expansion module, I obviously can't rip the Prompro 7's own EPROM using the Prompro 7. Finally, I have a lot of EPROMs that came with it, if anyone needs any.

Sunday, March 21, 2010

Break + some after results

First, the Valgrinding went all right. No invalid memory usage errors were detected, only leaks. I fixed most of the leaks, but there are a few (possibly just one that is cascading) that were difficult to trace even with Valgrind magic. So, I'll have to look into those later to see if I can solve them. The Valgrind results indicate that some objects are not freed, even though their parent destructor seems to free them. My best guess is they are somehow being overwritten and the original object is never freed.
I evaluated several unit testing frameworks in an attempt to keep the code healthy. We do unit testing at work, but use a large scale framework we wrote ourselves since we needed strong interaction with our army of VMs. After glossing over several lists of unit test frameworks, I first ruled out any commercial frameworks for monetary and principle reasons. The two remaining that looked promising were CppUnit (a JUnit C++ port) and the Google test framework. Knowing JUnit is pretty standard, I went with the former. Some simple engine init/deinit tests have been written, and hopefully more tests will be written in the future now that a testing framework is in place.
The .pat format is well underway. Much of the FLIRT framework has been written, and the .pat core is written and compiles. However, a relocation source needs to be written to feed into the pattern generation. Obviously one such source will be the configuration file driven uvudec framework. However, I've also been asked before about my support / consideration for libbfd (the GNU binutils core), so I did some research on how to use it. I spent some time looking through the binutils sources and reading a guide doc I found that highlighted important features (thanks Cygnus!). Unfortunately, I haven't seen any small examples, and the one or two nearly pseudocode examples provided in the Cygnus manual omit basics like calling bfd_init(). I can't get libbfd to recognize my file formats. Hmm. So I might strip down objdump and see if I can figure out what it does differently.
On the same note, I tried to play around with the rpat utility, as it does what I'm trying to do, just with libbfd only as its function/relocation parsing engine. It's written very hackishly, and I rewrote a lot of it to get it to compile and play nice. However, after this it had the same issues as my hello world, probably because I'm making the same mistake twice.
So, more updates after I fix the libbfd issue. Also, why doesn't libbfd expose the demangling function? It's put into bucomm.c and linked against every binutils program instead of being in libbfd.a. Annoying. Might e-mail them about that.

Monday, March 8, 2010

Critical bugs fixed, ready to grind

Two bugs were introduced by the argument merge. The larger was related to this function:
uv_err_t UVD::changeConfig(UVDConfig *config)
{
	if( m_config )
	{
		delete m_config;
	}
	m_config = config;
	return UV_ERR_OK;
}
This would be called after argument parsing to set the config after we had parsed it in main. However, config parsing had been moved into libuvudec, so this effectively passed in the config such that config == m_config, resulting in the config deleting itself and then being assigned. This typically crashed in the cleanup code as it tried to access various elements in m_config, which were now presumably invalid memory addresses.
The second had to do with inadvertently freeing UVD's m_data, the data we are decompiling in the decompiler engine. I added this free because several items were missing in UVD's deinit(), but this data was handed in by the main program and was not to be deleted by the engine. This resulted in a double free.

Here are some initial stats from Valgrind.
Doing only a basic engine initialization, deinitialization:
==14553== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 25 from 1)
==14553== malloc/free: in use at exit: 260,666 bytes in 6,750 blocks.
==14553== malloc/free: 22,177 allocs, 15,427 frees, 613,701 bytes allocated.
==14553== For counts of detected errors, rerun with: -v
==14553== searching for pointers to 6,750 not-freed blocks.
==14553== checked 223,760 bytes.
==14553== LEAK SUMMARY:
==14553== definitely lost: 111,708 bytes in 5,815 blocks.
==14553== possibly lost: 2,095 bytes in 17 blocks.
==14553== still reachable: 146,863 bytes in 918 blocks.
==14553== suppressed: 0 bytes in 0 blocks.
==14553== Use --leak-check=full to see details of leaked memory.

Doing decompile/disassemble:
==14805== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 25 from 1)
==14805== malloc/free: in use at exit: 6,635,642 bytes in 303,200 blocks.
==14805== malloc/free: 1,305,883 allocs, 1,002,683 frees, 113,107,916 bytes allocated.
==14805== For counts of detected errors, rerun with: -v
==14805== searching for pointers to 303,200 not-freed blocks.
==14805== checked 3,190,136 bytes.
==14805== LEAK SUMMARY:
==14805== definitely lost: 3,416,500 bytes in 244,937 blocks.
==14805== possibly lost: 3,564 bytes in 53 blocks.
==14805== still reachable: 3,215,578 bytes in 58,210 blocks.
==14805== suppressed: 0 bytes in 0 blocks.
==14805== Use --leak-check=full to see details of leaked memory.

So, it needs some work, but at least I'm not losing hundreds of MB. I'm going to work on seeing if I can solve these.

Unit testing and such

Mixed progress so far. The crash from the help screen was trivial to track, but the program still crashes during a full run, or during cleanup if I abort early. Not sure if they are the same error, but I'm working through Valgrind to fix it. Speaking of that, apparently Valgrind is allergic to static exes because it can't hook (or maybe just ID?) functions, so don't do that. The dynamic build was apparently broken, but now it's fixed (/usr/local/bin/ld: ../bin/uvudec: hidden symbol `__dso_handle' in /usr/lib/gcc/i386-redhat-linux/4.1.2/crtbegin.o is referenced by DSO; changed from linking with ld to gcc -dynamic)
One thing I've been meaning to do for a bit is implement unit testing. I'm considering CppUnit, since it's based off of JUnit, which is pretty standard; the Google test framework also looked pretty appealing.

Sunday, March 7, 2010

Bust a grind with Valgrind

Made a bunch of changes. In particular, it's moved to a property based configuration at the core. Other small features include a version embedded in libuvudec for verifying that a good version was linked against, and removal of a large number of global variables left over from the project's C days. The global situation still mostly has a band-aid on it, since most of them just use a global configuration object now instead, but it's probably still better than it was before.
Here's an old --help screen:
[mcmaster@gespenst uvudec]$ ./uvudec --help
uvudec version
Copyright 2009 John McMaster
Portions copyright GNU (MD5 implementation)

Usage: ./uvudec <args>
--verbose: verbose output. Equivilent to --verbose=3
--verbose=<level>: set verbose level. 0 (none) - 3 (most)
--verbose-init: for selectivly debugging configuration file reading
--verbose-analysis: for selectivly debugging code analysis
--verbose-processing: for selectivly debugging code post-analysis
--verbose-printing: for selectivly debugging print routine
--config-language=<language>: default config interpreter language (plugins may require specific)
python: use Python
javascript: use javascript
--addr-min=<min>: minimum analysis address
--addr-max=<max>: maximum analysis address
--addr-exclude-min=<min>: minimum exclusion address
--addr-exclude-max=<max>: maximum exclusion address
--addr-comment: put comments on addresses
--addr-label: label addresses for jumping
--analysis-only[=<bool>]: only do analysis, don't print data
--analysis-address=<address>: only output analysis data for specified address
--opcode-usage: opcode usage count table
--analysis-dir=<dir>: create skeleton data suitible for stored analysis
--input=<file>: source for data
--output=<file>: output program (default: stdout)
--debug=<file>: debug output (default: stdout)
--print-jumped-addresses=<bool>: whether to print information about jumped to addresses (*1)
--print-called-addresses=<bool>: whether to print information about called to addresses (*1)
--useless-ascii-art: append nifty ascii art headers to output files
--help: print this message and exit
--version: print version and exit

Special files: -: stdin
true includes case insensitive "true", non-zero numbers (ie 1)
false includes case insensitve "false", 0

*1: WARNING: currently slow, may be fixed in future releases

And here's the new:
[mcmaster@gespenst bin]$ ./uvudec --help
uvudec version 0.3.0
libuvudec version 0.3.0
Copyright 2009 John McMaster <>
Portions copyright GNU (MD5 implementation)

Usage: ./uvudec <args>
--help ( print this message and exit
--version (action.version): print version and exit
--verbose (debug.level): debug verbosity level
--verbose-init (debug.init): selectivly debug initialization
--verbose-analysis (debug.processing): selectivly debugging code analysis
--verbose-processing (debug.analysis): selectivly debugging code post-analysis
--verbose-printing (debug.printing): selectivly debugging print routine
--debug-file (debug.file): debug output (default: stdout)
--config-language (config.language): default config interpreter language (plugins may require specific)
python: use Python
javascript: use javascript
--addr-include-min (target.address_include.min): minimum analysis address
--addr-include-max (target.address_include.max): maximum analysis address
--addr-exclude-min (target.address_exclude.min): minimum exclusion address
--addr-exclude-max (target.address_exclude.max): maximum exclusion address
--analysis-address (target.address): only output analysis data for specified address
--analysis-only (analysis.only): only do analysis, don't print data
--analysis-dir (analysis.dir): create data suitible for stored analysis
--flow-analysis (analysis.flow_technique): how to trace jump, calls
linear: start at beginning, read all instructions linearly, then find jump/calls (default)
trace: start at all vectors, analyze all segments called/branched recursivly
--opcode-usage (output.opcode_usage): opcode usage count table
--print-jumped-addresses (output.jumped_addresses): whether to print information about jumped to addresses (*1)
--print-called-addresses (output.called_addresses): whether to print information about called to addresses (*1)
--useless-ascii-art (output.useless_ascii_art): append nifty ascii art headers to output files
--addr-comment (output.address_comment): put comments on addresses
--addr-label (output.address_label): label addresses for jumping
--input (target.file): source file for data
--output (output.file): output program (default: stdout)

Received signal: SIGSEGV

Which seems to have gone okay, including the parsing. Except for one little thing you may notice at the bottom. Oops. I found roughly where the error occurred, but I'm having issues. In fact, the program will run until about when the disassembly to intermediate representation is done, and then it crashes on the first instruction.
So, enter Valgrind. I've played with it on some trivial programs before, and it's probably time I learn how to do some more serious automated bug analysis anyway, as it might give me some good ideas for this project. However, I'm getting a large number of false positives, and the suppression file doesn't seem to be helping. Grr. Oh, and blogspot didn't like my angle brackets, so I had to escape them.

Saturday, March 6, 2010

Break goals

It's semester break! And what better to do than hammer at this project?
In preparation for the FLIRT tools, one of the things I've been meaning to do for a while is begin to rework the configuration into something more proper. My plan is to move the opcode configuration to libconfig. Of higher priority, however, is reworking the argument parsing. I'm moving to a property based approach like what Java, Mozilla products, and others do. For example, there will still be long and short forms for some arguments that map to property values. This allows a more organized hierarchy and a better transition to config files. Finally, this new structure allows the argument parsing to scale better to multiple executables.
The biggest work done thus far involved separating code into the decompiler library and the main executable, as well as creating skeleton projects for each of the tools I want to develop this semester. Let the coding madness begin! I've already coded up the argument parsing core.
The argument parsing style is somewhat modeled after a product I reverse engineered, which I probably shouldn't disclose for legal reasons. They used a structure like this:

struct argument {
	error handler function pointer
	prev pointer
	next pointer
	argument name
	number of expected arguments
	argument description
};

They would iterate over the linked list for each argument found. I'm using a similar structure, but I have a richer need for argument flexibility (they only supported /argkey [argval]), so I'm doing an individual argument preparse followed by some decoder logic to make things scale a little better.

Friday, February 19, 2010

Semester goals

This semester will be focused on adding compatibility support for the FLAIR toolkit from IDA. FLIRT (Fast Library Identification and Recognition Technology) is the core of FLAIR. In their paper, they describe their technique. The basic idea is to create signatures from library files so that the libraries can be recognized in static executables. This is vital for embedded systems, as they are almost always static images (except for some systems running Linux and such, but I consider those more like a traditional workstation for now, even though they're lightweight). This closed (publicly unknown) format supposedly maintains a binary tree structure of the functions. As code is shared at the beginnings (such as the classic Intel frame pointer setup stuff...push %ebp; mov %esp, %ebp), the tree format is walked. The original function data is not visible in this format, which may be useful for copyright reasons.
There are several simple rules to generating the signatures. First, it is considered much better to not generate a signature than to generate a false positive. Thus, by default, signatures are only generated for functions above a certain byte length, as short functions are likely to generate false positives.
To generate a signature, start by grabbing a compiled library file. Running uvelf2pat (or similar, depending on input format) generates an intermediate human readable pattern file. This can then be reviewed to verify that the signatures look accurate and to see if any functions were rejected from signature creation due to, say, being too small. Once this looks correct, a tool will try to convert it to a .sig file and check whether the sigs conflict with other signatures in the program's database. If successful, a .sig file will be generated that can then be used to recognize statically compiled functions.
I have several quality goals. The first is to implement several unit tests. A fundamental generic unit test will be to take generated signatures and make sure they recognize functions at the correct addresses in the original library. Next, a list of functions from that library in a static binary will be used as a reference, and all of them should be detected. Another useful unit test would be to see if generated signatures load into IDA or other tools that can use them, such as certain OllyDbg plugins.
A nice feature that FLIRT doesn't have right now: I might see if I can automatically extract source code from the compiled files (if available) and include it in an annotated analysis file. Alternatively, I may eventually create a custom analysis format to support my own features and to avoid the uncertainty of the .sig format.
For future work building on this, I'd like to implement several things. First, once x86 support is added, I'd like to index a large collection of static binaries, say from old video game images, to see if I can cluster the development libraries used. This could be useful once the tool is more developed, to automatically scan software for licensing violations by providing signatures for GPL or similar code. Another feature that shouldn't be too difficult after signature support is in place is generating map files, so that function recognition can be imported into programs such as OllyDbg; that way, this data can be useful even while the full decompiler is not yet ready.