James Whittaker and Hugh Thompson are presenting.
These two, from Florida Tech and Security Innovation, performed
a study of security vulnerabilities in shipped products. (Is Florida
Tech the Florida Institute of Technology in Melbourne, or a different
school? The same.) For each vulnerability, they asked
what fault caused it, what symptoms should
have alerted testers to its presence, and what testing or analysis
techniques could have prevented the product from shipping with
the vulnerability.
They discovered that security and functional bugs have little in
common, and functional testing techniques aren’t very good at
uncovering security bugs. In particular, functional testing finds
where the software is missing some intended behavior; security
testing has to look for additional behaviors that weren’t
intended, and which can be vulnerabilities.
Four classes of vulnerabilities: External dependencies,
unanticipated user input, design deficiencies,
and insecure implementation.
To attack external dependencies: block access to libraries,
manipulate registry values, force the application to use
corrupt files, replace files the application uses,
and force the application to operate under memory, disk, or other
constraints. Example: Internet Explorer has a content advisor
that prevents users from seeing “bad” sites. He turned that off
with a hacking tool, but he says all you really need to do is
delete the content advisor DLL; Internet Explorer still works,
but without the restrictions.
Aside: the screen is dim and slightly out of focus. PowerPoint
slides are easy to see, but application screens are really
hard to read.
Another example: hover the mouse pointer over a malicious Word
file in Explorer, and crash Windows Explorer. No clicking was
needed at all. But so far they haven’t explained the cause of
the crash. Maybe a buffer overflow in the code that reads the file’s
metadata (which pops up when you hover the mouse over the file).
Unanticipated user input includes buffer overflows, unusual command
switches and options, and use of odd characters (like null or
foreign character sets).
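The kinds of probes they describe can be sketched as a tiny fuzzing loop. This is my own illustration, not their tooling; parse_name is a hypothetical input handler standing in for the code under test.

```python
# Probing a naive handler with "unanticipated" input: overlong strings,
# embedded nulls, and foreign character sets. parse_name is hypothetical.

def parse_name(raw: bytes) -> str:
    # Naive handler: assumes short, printable ASCII input.
    text = raw.decode("ascii")          # fails on non-ASCII character sets
    if "\x00" in text:
        raise ValueError("embedded null")
    return text[:64]                    # silently truncates overlong input

probes = [
    b"A" * 10_000,                      # buffer-overflow-style overlong input
    b"user\x00name",                    # embedded null character
    "résumé".encode("utf-8"),           # non-ASCII input
]

for probe in probes:
    try:
        parse_name(probe)
        print("accepted:", probe[:20])
    except (ValueError, UnicodeDecodeError) as exc:
        print("rejected:", type(exc).__name__)
```

Each rejection (or silent acceptance) here is a test result worth investigating: the interesting security questions are what the real handler does on the paths that functional tests never exercise.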
Design deficiencies: common default account names and passwords;
internal APIs left exposed (especially for testing); connecting
to all ports; faking the data source; creating loop conditions;
trying alternate routes to accomplish the same task; forcing the
system to reset values. Example: a web commerce site has a drop-down
to select quantities; change the HTML file to include negative
values in the box. The site creators didn’t test for that because
they thought only the values they populated in the drop-down box
could be used. He ordered -3 books, and the site showed his
total as -$149.85. He was charged for shipping, but the sales
tax of about -$10.00 made up for it.
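The arithmetic of the demo suggests server-side logic roughly like the sketch below. The unit price of $49.95 follows from -3 books totaling -$149.85; the shipping charge and tax rate are illustrative values I made up, and none of this is the site's actual code.

```python
# Hypothetical sketch of the flawed order logic: the site trusted the
# drop-down, so nothing rejects a tampered negative quantity.

PRICE = 49.95      # implied by the demo: -3 * 49.95 = -149.85
SHIPPING = 4.00    # illustrative, not from the talk
TAX_RATE = 0.07    # illustrative, not from the talk

def order_total(quantity: int) -> float:
    # Flaw: no check that quantity is positive.
    subtotal = quantity * PRICE
    return round(subtotal + subtotal * TAX_RATE + SHIPPING, 2)

def order_total_fixed(quantity: int) -> float:
    # Fix: validate on the server; never trust values the client sends back.
    if quantity < 1:
        raise ValueError("quantity must be positive")
    return order_total(quantity)

print(order_total(-3))   # negative total, with negative tax, as in the demo
```

The design lesson is the general one: the drop-down constrains honest users, not attackers, so every value arriving from the client must be re-validated server-side.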
A web browser vulnerability in Internet Explorer. Write a script
to create an object in a loop; IE only asked the user’s permission
the first time it tried to create it (and the user
said no). But each following time, IE just let the script work.
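A guess at the shape of that bug: the permission check remembers that it asked, rather than what the user answered. This is an assumed reconstruction for illustration, not IE's actual code.

```python
# Sketch of a "remember that we asked, not what they said" permission bug.

class PermissionGate:
    def __init__(self, ask):
        self._ask = ask        # callback that prompts the user (True = allow)
        self._asked = False

    def allow_create(self) -> bool:
        # Bug: after the first prompt, every later request is allowed,
        # regardless of what the user actually answered.
        if not self._asked:
            self._asked = True
            return self._ask()
        return True

gate = PermissionGate(ask=lambda: False)   # user clicks "No"
print([gate.allow_create() for _ in range(3)])   # → [False, True, True]
```

A loop in a script is exactly the test that exposes this: a single-request functional test passes, while the second iteration slips through.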
Insecure implementation: get between the time of check and time
of use; create files with the same name as protected files;
force all error messages (because they often disclose sensitive
information, like the fact that an account name is valid but
the password isn’t); screen temporary files for sensitive data.
Demonstration of hacking Microsoft’s E-book DRM: scrape temporary
memory as pages are displayed. The pages are decrypted so they
can be displayed, then deleted, but he was able to grab them during
the short period of vulnerability.
By the way, some of the examples are running in VMware virtual
machines. This is the second time I’ve noticed this. We should
be using virtual machines more for testing, too.
A Quake 2 exploit: send a message to a Quake session without a valid
password. It gets rejected. Then send the message again, spoofing the
source address to one internal to id Software (the Quake creators),
and it gets accepted, even without a good password. Messages can
include commands, so this is a significant vulnerability.
IE vulnerability: install a DLL in Internet Explorer’s home
directory. The DLLs are supposed to be in the Windows System
directory, which is protected, but IE looks for them in the
current directory first. The DLL in question, RTBUILD.DLL,
handles form input; the trojan version captures that input and
can modify it. This works even though the web pages are accessed
via SSL, because the attack never touches the network traffic.
Another very good talk. More information is available at
sisecure.com. They’ve also
written two books on this: “How to Break Software” and “How to
Break Software Security”. I saw them at the conference bookstore,
and they look quite readable.