Friday, January 20, 2006

Project Eva

I've begun work on "Project Eva," a personal project to design a new, secure Linux distribution. This is not a typical distribution; it will be based on neither Red Hat, Gentoo, nor Debian GNU/Linux. Instead, I will be building from the ground up, using a package manager I'm designing and coding myself in a project called "Project Coon Fox."

Project Eva will not be a simple hack job that repackages the Hardened Gentoo or Adamantix efforts in another form and ships them out. Instead, Project Eva will use a kernel designed and built specifically for Project Eva.

New kernel modifications will be designed from scratch based on the documentation, code, and conceptual efforts that manifest as PaX, GrSecurity, OpenBSD, and Red Hat. The most useful, most secure designs possible will be implemented based on the existing efforts. These new implementations will tightly integrate with the kernel, and will target mainline inclusion.

Our integration scheme is to treat program execution without enhancements such as address space layout randomization or data/code separation as privileged, and grant that privilege system-wide. Various systems to restrict or decline these "privileges" will be put in place, including robust LSM hooks and SELinux policy enhancements. Further, various levels of restriction will be allowed, giving fine-grained control over things some programs are temperamental about, such as the amount of entropy in, and general layout of, a randomized address space.
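To make that concrete, here's a minimal sketch of how such a check might hang off the kernel's LSM framework. The bprm_check_security hook and register_security() are real 2.6-era LSM API; everything prefixed eva_ is hypothetical design, since none of this is written yet.

    /* Sketch only: deny an exec that wants to run without hardening
     * unless policy grants the "privilege". The LSM hook is real;
     * the eva_ helpers are invented placeholders for the design. */
    #include <linux/module.h>
    #include <linux/security.h>
    #include <linux/binfmts.h>

    /* Hypothetical: would inspect ELF markings or SELinux policy to
     * see whether this binary asked to run without ASLR or NX. */
    static int eva_wants_reduced_hardening(struct linux_binprm *bprm)
    {
        return 0;
    }

    /* Hypothetical: would check whether policy grants the privilege. */
    static int eva_reduced_hardening_allowed(struct linux_binprm *bprm)
    {
        return 0;
    }

    static int eva_bprm_check_security(struct linux_binprm *bprm)
    {
        /* Running without hardening is the privilege; deny the exec
         * if the binary wants it but policy doesn't grant it. */
        if (eva_wants_reduced_hardening(bprm) &&
            !eva_reduced_hardening_allowed(bprm))
            return -EPERM;
        return 0;
    }

    static struct security_operations eva_ops = {
        .bprm_check_security = eva_bprm_check_security,
    };

    static int __init eva_lsm_init(void)
    {
        return register_security(&eva_ops);
    }
    security_initcall(eva_lsm_init);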

Project Eva will be based around the output of Project Coon Fox, my packager project. Project Coon Fox uses its own install scripts on an extensible scripting engine, allowing a complete audit of all actions to be generated before any system changes are made. Simple heuristics can then flag dangerous operations, such as changes to file associations; SUID and SGID bits; SELinux policies that grant permissions outside the default system policy; or changes to start-up scripts.

The heuristics in Project Coon Fox can also be designed to hard-deny certain operations, such as granting modification access to /bin or allowing alteration of security policies, things only the package manager should control. By properly utilizing these types of controls, installed trojan programs (viruses, spyware) can be easily uninstalled without potential to replicate across the system; infections can only affect users who ran the installed trojans.
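None of this code exists yet, so take the following as a sketch of the shape of the heuristic pass, with every name invented: walk the planned actions before anything touches the disk, flag the suspicious ones, and hard-deny the untouchables.

    /* Hypothetical sketch of a Coon Fox audit pass. All types and
     * names are invented for illustration. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>

    enum verdict { ALLOW, FLAG, DENY };

    struct action {
        const char *path;   /* target of the install action */
        mode_t mode;        /* requested permission bits    */
    };

    static enum verdict audit_action(const struct action *a)
    {
        /* Hard-deny: only the package manager itself may ever
         * touch these areas. */
        if (strncmp(a->path, "/bin/", 5) == 0 ||
            strncmp(a->path, "/etc/selinux/", 13) == 0)
            return DENY;

        /* Flag for review: SUID/SGID grants are dangerous. */
        if (a->mode & (S_ISUID | S_ISGID))
            return FLAG;

        return ALLOW;
    }

    int main(void)
    {
        struct action a = { "/usr/bin/example", 04755 };

        switch (audit_action(&a)) {
        case DENY:  puts("denied: protected system area"); break;
        case FLAG:  puts("flagged: grants SUID/SGID");     break;
        case ALLOW: puts("allowed");                       break;
        }
        return 0;
    }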

Project Coon Fox also moves away from package-for-package dependency and conflict schemes, similar to autopackage, using a more robust system to allow easier download-and-install from foreign sources. With proper security policies, this becomes somewhat safe; with basic use of the simplified auditing interfaces, it becomes mostly safe. Installation of spyware becomes easily recognizable, e.g. by randomly added security policy privileges and SUID bits; and stripping these privileges can be undone if the program then fails to work properly and can be confirmed safe (by a quick Google).

The founding principles of Project Eva are as follows: security increases productivity; C is secure; and security is cheap to implement. Today's so-called security experts will immediately choke on any of these and launch into a long and pointless speech about how every one of them conflicts with everything we know. I have to do this, because nobody else can.

See you in a few years, with the finished product.

Wednesday, October 12, 2005

Java v. C continued

Looks like I got slashdotted on that one. I wasn't particularly trying to bash Java; more the concept of one language being "secure" over another. Java just makes a good example of a language people believe will solve all their problems for them.

With the responses I've seen both here and on slashdot, I feel I should make a follow-up post. I'll point out a few interesting things that have arisen, and let the wolves at it again.

The major thing I'd like to point out is that, especially on Slashdot, most of the replies seemed to argue from the standpoint of vanilla C on a vanilla system versus Java. This may partly be my fault for using Java as a major target in the argument; but I did discuss vanilla C programs built with a hardened compiler on a hardened system. This was my argument, and I'd appreciate it if people would take time to comprehend the context before replying half-assed to some other similar but distinctly different argument. Once again, the blog was about C compiled using a hardened compiler and run in a hardened operating system environment.

Another interesting point was that certain attacks are still possible in Java. These include SQL injection and cross-site scripting, neither of which is inherently a C problem; although C programs could certainly use SQL libraries or script language parsers that would be vulnerable. Script languages also come to mind, immune to buffer overflows but rabidly vulnerable to XSS; efforts like Hardened PHP work to reduce the risks here.
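To illustrate that this bug class is language-agnostic, here's a sketch against SQLite's C API (assuming a linked sqlite3 and an already-open database handle); the first function is injectable no matter what language you write it in, the second is not, because the input is bound as data.

    /* SQL injection has nothing to do with the host language.
     * Sketch against SQLite's C API; assumes an open sqlite3 *db. */
    #include <stdio.h>
    #include <sqlite3.h>

    void lookup_bad(sqlite3 *db, const char *name)
    {
        char sql[256];
        /* Injectable: name = "x' OR '1'='1" dumps every row. */
        snprintf(sql, sizeof sql,
                 "SELECT pass FROM users WHERE name = '%s'", name);
        sqlite3_exec(db, sql, NULL, NULL, NULL);
    }

    void lookup_good(sqlite3 *db, const char *name)
    {
        sqlite3_stmt *stmt;
        /* Bound parameter: the name stays data, never becomes SQL. */
        sqlite3_prepare(db, "SELECT pass FROM users WHERE name = ?",
                        -1, &stmt, NULL);
        sqlite3_bind_text(stmt, 1, name, -1, SQLITE_TRANSIENT);
        while (sqlite3_step(stmt) == SQLITE_ROW)
            printf("%s\n", (const char *)sqlite3_column_text(stmt, 0));
        sqlite3_finalize(stmt);
    }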

One major argument which kept resurfacing was that C is insecure because of pointer math and explicit memory management. I'd like to restate here that the environments discussed minimize the possible damage of bad pointer use; you can't modify existing code, as one comment alluded to, and you can't execute data such as the stack or heap. Address space layout randomization ensures that attackers who can control pointers at least can't figure out where to point them, because everything moves around every time the program is run.

On the same topic, what's so hard about manually managing memory? You can create functions or, if you fancy C++ or Objective-C, classes to manage your linked lists and memory objects. Calling these abstracts the memory management away from most of your code. I'd guess that in a language which abstracts direct memory management from you, you'd either do the same thing minus the malloc() and free() calls, or just haphazardly write the same 2-3 lines of allocation code everywhere; which is probably a bad thing for maintainability anyway in the more significant cases.
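What I mean, roughly (a sketch; the names are mine): write the malloc()/free() handling once, and the rest of the program calls these instead.

    /* Hide malloc()/free() behind a pair of functions, and the rest
     * of the program never touches raw memory management. */
    #include <stdlib.h>
    #include <string.h>

    struct node {
        char name[32];
        struct node *next;
    };

    /* All allocation lives here... */
    struct node *node_create(struct node **head, const char *name)
    {
        struct node *n = malloc(sizeof *n);
        if (!n)
            return NULL;
        strncpy(n->name, name, sizeof n->name - 1);
        n->name[sizeof n->name - 1] = '\0';
        n->next = *head;
        *head = n;
        return n;
    }

    /* ...and all deallocation lives here. */
    void list_destroy(struct node **head)
    {
        while (*head) {
            struct node *dead = *head;
            *head = dead->next;
            free(dead);
        }
    }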

In either case, I've never seen any reason why it's not clear when to free() memory, unless you've somehow made it behave like a relational database, with lots of concurrent areas of code accessing it at the same time in unrelated ways. Typically, though, you'd have some reason to remove an object; at that time you destroy all the resources that drive it, such as threads, and free the object. I suppose it could get murky if you've searched an object out and are working on it in a separate thread, but I can't think of a practical application for this.

At any rate, the blog wasn't on programming semantics like explicit memory management; it was on security. I just felt like going off on a tangent to ponder the quandary of why people have trouble with memory management. Perhaps a lightweight reference counting library would help; I still have an aversion to garbage collectors because I worry that they may wander the heap back and forth (this is how Boehm was described to me) and thus, in times of high memory usage, could cause swap thrashing if used on a large scale.
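The kind of lightweight reference counting I have in mind is about this big (a single-threaded sketch; a real library would need atomic counts): the object dies the instant its last holder releases it, and nothing ever wanders the heap.

    /* Minimal reference count: no collector, deterministic death. */
    struct ref {
        unsigned count;
        void (*dtor)(struct ref *);
    };

    static void ref_init(struct ref *r, void (*dtor)(struct ref *))
    {
        r->count = 1;
        r->dtor = dtor;
    }

    static void ref_get(struct ref *r)
    {
        r->count++;
    }

    static void ref_put(struct ref *r)
    {
        if (--r->count == 0)
            r->dtor(r);     /* last holder gone; free right now */
    }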

Thursday, October 06, 2005

Security in a language?

Lately I've taken more notice of the debates over programming languages. People often claim that Java is inherently more secure than C; that C is faster than Java; that C++ is easier than C; that C++ is slow and has an over-bloated syntax that makes it confusing; or any number of other things about languages. Looking at C and Java, I'd like to make a quick point.

In gcc 4.1, a re-implementation of ProPolice is included to help squelch stack smashes. OpenBSD has a new secure heap manager that does a similar job in the heap. Then there's PaX, with strict but lightweight memory protections; as well as GrSecurity, a project that aims to be a complete security solution built on top of PaX.

Just a few basic enhancements that bring a lot with them. On top of a typical system, ProPolice and the secure heap manager both not only stop security attacks, but report enough specific debugging information to almost trivialize finding and fixing the bugs. PaX stops remote code injection and ret2libc cold, knocking off the basic building blocks of these attacks. GrSecurity finishes up with a few interesting restrictions, including some extreme information separation in /proc and an enhancement to prevent /tmp races, as well as a full mandatory access control system like the more familiar SELinux.
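For the curious, the classic demonstration: build something like this with gcc 4.1's -fstack-protector and feed it an oversized argument, and instead of silently corrupted memory you get an immediate abort, which is most of the debugging done for you.

    /* Classic stack smash: pass this an argument over 16 bytes.
     * Unprotected, it silently corrupts the stack; built with
     * -fstack-protector, it aborts on the spot instead. */
    #include <string.h>

    int main(int argc, char **argv)
    {
        char buf[16];

        if (argc > 1)
            strcpy(buf, argv[1]);   /* no bounds check: the bug */
        return 0;
    }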

The results are nice, to say the least. C and C++ programs are immunized against stack smashes and heap overflows, as well as code injection and out-of-order execution in general. This alone cuts out over half of the security bugs seen in these languages, by frequency, according to some analysis of the first 60 security announcements from Ubuntu Linux; that covers stack and heap buffer overflows, integer overflows, and most other memory corruption. And the cost of all this? An increase in CPU load of around a percent or two, no more.

What of Java or Mono? Well, to start with, these programs typically run on top of a JIT or JVM. This demands that PaX's strict data/code separation be disabled, resulting in a slightly weakened security model. On top of that, Java platforms assume that Java arrays can't be overflowed or double-freed, because the language is bounds-checked and garbage collected; however, there is always the slight possibility that somewhere in the many tens or hundreds of thousands of lines of added code mimicking the functions of the operating system, a slight mistake allows Java byte code to force a double-free or internal overflow, leading to code injection.

C also carries a platform with it, though a very thin one: the "C runtime" or "C standard library." Although small, it presents the same issues as the Java platform; it's just a fraction as worrisome because of its smaller size, and its inherent security issues are better understood at this point. C++, Objective-C, and other languages build on top of the C runtime and recreate the problem of an expanded runtime, though not to the epic proportions of full platforms like Java or Mono. In addition, these runtimes can still be protected by the same enhancements that protect C, although some may have other unexpected attack vectors.

There's a lack of convenience in full-platform systems that hasn't been discussed, and it has little to do with security. Java and Mono both isolate the program from the underlying system; because of this, they need bindings or reimplementations of common libraries. For example, for Java applications to use Ogg Vorbis, there must be a class that either supplies a Java implementation of Ogg Vorbis or binds to the native libvorbisfile.so or libvorbisfile.dll. This is mildly aggravating; but it also grows the unprotected code base, and forces C libraries used via bindings to run without some security enhancements in JIT and JVM implementations.
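The native half of such a binding is glue roughly like the following; the Java class and method names here are invented, while ov_open() is the real libvorbisfile entry point. Note that this runs as plain native code, outside anything the JVM checks.

    /* Sketch of the C half of a hypothetical Java binding to
     * libvorbisfile; class and method names are made up. */
    #include <jni.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <vorbis/vorbisfile.h>

    JNIEXPORT jlong JNICALL
    Java_org_example_VorbisFile_open(JNIEnv *env, jobject self,
                                     jstring jpath)
    {
        const char *path = (*env)->GetStringUTFChars(env, jpath, NULL);
        FILE *fp = fopen(path, "rb");
        OggVorbis_File *vf = malloc(sizeof *vf);

        (*env)->ReleaseStringUTFChars(env, jpath, path);
        if (!fp || !vf || ov_open(fp, vf, NULL, 0) < 0) {
            if (fp)
                fclose(fp);
            free(vf);
            return 0;               /* Java side treats 0 as failure */
        }
        return (jlong)(intptr_t)vf; /* opaque handle back to Java */
    }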

There's one more effect that the security enhancements have on C and C++ programs. The increased restrictions tend to expose bugs rather spectacularly; once in a while a program will go from "acting weird sometimes" to simply hard-crashing when introduced into a secure environment. Not only does this force programmers to fix these bugs; with the stack and heap protections, it even helps them along the way. In essence, the difficulty of debugging a C program can even be reduced by these; though not necessarily below the cost of debugging or writing a Java or C# program, unless the needed libraries aren't available on those platforms.

So now, which language is more secure? I still say C, for no other reason than because it's easier to make the system protect itself from broken C programs than broken JVMs or Java applets. Assuming the JVM itself is perfect, however, I'm willing to say that C and Java are about on equal ground.

Saturday, September 24, 2005

Virtual worlds

Here's a question. If you found a virtual world in a video game, would you think much of what you did in it? What if you found a computer in the virtual world? What if that computer ran like an actual computer and you were able to get X up and run OpenOffice.org?

I wonder, really I do, about the implications of a computer game that would spawn computers inside it using, say, a hypervisor on a cluster, utilizing special virtual CDs found in the game world that were really scripts to install the basic system to the virtual machine. Really I do.

Obviously whoever has access to the machine can monitor you, track your progress, keep tabs on the state of secure handshakes using modified software made to leak, and compromise any security you can conceive of; you don't want to rely on a public server to secure your stuff just because you have an encrypted disk and connection. But what about private applications, or testing? What could you conceivably do, really, with this?

Let's say the game engine is smart enough to have Xen balance its individual computers across a flexible cluster that can be grown and shrunk without downtime, by adding and removing nodes (i.e. physical computers). Nodes being removed evacuate their load to the network; they can then be massively upgraded and plugged back in, at which point the network makes them a supporting node and adds the new hardware to its own pool. This is relatively normal for a cluster; but the control by a game engine isn't.

So let's say the cluster, controlled by the "game," is pretending to be a network now. You can configure networking in the game by walking up to a computer and looking at an X screen exported out to the game, or a console supplied by Xen and filtered through the game. Hit the console, configure the network. The back-end work to make the computer act physically connected is handled by the game engine; if you run physical cables, the game tells Xen to pretend these computers are bridged.

You now have a "game" in which you can pretend to build a network, set up servers, and run cabling. The only thing you can't do is simulate hardware failure; tweaking it to use a few real machines, which you obviously have to hack up in the real world, will fix that. Using a mixed environment, you can have multiple clusters of PPC, x86-64, and IA-32 architecture, each machine clustering with all the others of its architecture, and the three clusters acting as individuals instead of doing something stupid like trying to share processes across each other.

Taking this a step further, a kernel with Xen in it can run on any of these machines. The game understands when they reboot, and tells Xen to restart them. You can mess with them and patch them and upgrade them and throw security enhancements like GrSecurity on them if you can make Xen not barf with them. You can set up secure nodes and run them and test them, quite literally, in a sort of "half-simulated" environment.

At the end of the day, of course, you could shoot your boss in the forehead on the way out of the "office;" he'll just respawn with the BFG.

Monday, September 12, 2005

Revisiting copy protection...

I e-mailed the MPAA today on their Report Piracy Hotline about copy protection. Pretty much, I'm annoyed by it, and it's useless. Now we all should know that any copy protection can be broken; and track records for breaking it typically range from several months before a copy protection method is deployed in a product to a few weeks after something on the market uses it. Millions of dollars go into dismally ineffective ideas, and here we go.

What copy protection does do is get in the way of the end user and prevent them from performing some completely legitimate tasks. As for breaking the law, somebody will do it and share the results with everyone else who can't do it themselves, so no problems there. No such luck for the end user; solutions to obscure legitimate problems don't get shared as freely, since the transfer of data involved is downright illegal.

This happened to me. To be brief, I mailed the MPAA with the friendly message below.

I have just legally purchased "The Incredibles" from a Best Buy retailer. This was the 2-disc set "Special Edition" for $19.99.

I would just like to say that the copy protection works extremely well at PISSING ME OFF and assuredly PREVENTS ME FROM VIEWING THE MOVIE PROPERLY AND COMFORTABLY ON MY EQUIPMENT. Whoever designed this NEEDS TO DIE.

Let me begin with details of my setup. I have a Playstation 2 as a DVD player hooked up to a VCR which accepts audio/video, and a surround sound system which accepts audio/stereo. The VCR uses an RG6 terminated coaxial cable to connect to the TV, typical of standard cable hook-ups. This is done because the TV does not have ports for audio/video direct connections.

The apparent problem is that the copy protection on the DVD distorts the picture if a VCR is in-line. This was noticed earlier when my friend had the same problem, but jacked the PS2 directly into his TV and "fixed" it. I have no such luck; therefore, my picture flicks on and off in alternation, each state holding for a few seconds.

There are several solutions to this problem:

1. Copy the DVD using a decode/recode process
- This will definitely work; the copy protection is a useless annoyance to playback only, not actual DVD copying
- Software is easy to get, probably already installed
- The quality will go down slightly
- Costs me a DVD+R
- Fair use clause of US copyright law explicitly allows this
- Betamax decision sets courtroom precedent allowing this
- DMCA bans this

2. Utilize my computer
- This will definitely work
- Play on a smaller screen
- Can't be productive at the same time

3. Download from bittorrent and burn
- This would also probably work
- The quality would suck
- Finding a bittorrent would be annoying
- (1) is a better solution anyway

4. Buy a new TV
- This would also work
- I'd have a better TV
- I'd be unable to pay my car insurance and would default financially
- This is not a real solution; it's a treatment of symptoms

Perhaps you need a simple reminder...

IF YOU SPEND MILLIONS OF DOLLARS DEVELOPING A COPY PROTECTION METHOD, IT WILL BE BROKEN VERY, VERY FAST BY SOMEBODY, AND THEN SUBSEQUENTLY IGNORED BY ANYONE TRYING TO DO ILLEGAL THINGS. LEGITIMATE USERS WILL BE QUITE ANNOYED BY YOUR IGNORANCE AND REPEATED FAILED ATTEMPTS AT ADDRESSING THE PROBLEM.

Let's take a few situations here.

1. Customer can legitimately decode in his own isolated system
- It can be done here
- Reverse engineer the system OR
- Just sniff the decoded data

2. Customer needs to validate with an external site
- Not all customers have a connection to you
- Massive privacy violation you'll try to write away in a license
- Just stick modified recording hardware in-line to beat this
- Or better, sniff the network data and RE the protocol, then start sharing the collected key(s)

Now go fuck yourselves and try to learn from repeated failure.

To add insult to injury, the response I got was rather terse.

This message was created automatically by mail delivery software. Message violates a policy rule set up by the domain administrator Delivery failed for the following recipient(s): hotline@mpaa.org

Needless to say, I'm working on getting around whatever rule set they have in place. In the meantime, let's all stand and clap for an unexpected consequence of a system which failed to meet its original intent anyway.

Sunday, September 04, 2005

Mozilla and Firefox dumping SSL2.0

Well, it looks like Mozilla is dumping SSL2.0, and with that comes the loss of SSL2.0 in Firefox as well. This means supporting code will be gone, and a very few sites will break; but fortunately, most sites support SSL3.0.

I say good riddance to bad rubbish, and may it rot in Hell forever. Some info about SSL2.0: it can be attacked far more easily than SSL3.0. A man-in-the-middle attack can be used to force 40-bit weak encryption, and message authentication hashes use 40 bits even for 128-bit ciphers. There are a couple of other weaknesses that are more or less considered immaterial or minimally useful, but being able to break the cipher invisibly and snoop the traffic is a major, major downer.

A little history lesson: the Data Encryption Standard, with 56-bit keys, was broken by a $250,000 device in a little over 2 days; ironically, 56 hours counts as "a little over 2 days," but this is just coincidental. Today's computers can do a 40-bit symmetric key in under a couple of weeks, if not days. Credit card sniffing is useful even in minor increments; pick up a dozen credit cards in a month and you have a good $50,000 of limit right there. A more powerful machine can be built for around $1000 to do the job in much less time.
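For scale (back-of-the-envelope arithmetic, not a benchmark): a 40-bit keyspace is 2^16 times smaller than DES's 56-bit one, so hardware in the class of that $250,000 machine would chew through a 40-bit key in seconds rather than days:

    2^56 / 2^40 = 2^16 = 65536
    56 hours / 65536 ~ 3 seconds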

I say everyone should make sure SSL2.0 is disabled in Firefox as soon as possible. They're dropping it; get used to it. Complain to the webmasters if your stuff stops working; enable it only if it's needed for your business or job to function.

SSL3.0 has a compatibility feature which allows fallback to SSL2.0 if the client or server can't support SSL3.0. Having SSL2.0 available means that SSL3.0 connections can be man-in-the-middled into falling back to SSL2.0, as the flaws in SSL2.0 remain exploitable until the last phase of the SSL3.0 hello. From there, the connection can be man-in-the-middled again to use a 40-bit key, as it's now SSL2.0. The attacker then needs only a few hours on a newer system to break the key.
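If you run your own service, refusing the fallback is one option flag against OpenSSL's C API (a sketch; error handling omitted for brevity):

    /* Refuse SSL2.0 outright, so a man-in-the-middle can't
     * downgrade the handshake to the weak protocol. */
    #include <openssl/ssl.h>

    SSL_CTX *make_server_ctx(void)
    {
        SSL_CTX *ctx;

        SSL_library_init();
        ctx = SSL_CTX_new(SSLv23_server_method());
        /* Speak SSL3.0/TLS, but never fall back to SSL2.0. */
        SSL_CTX_set_options(ctx, SSL_OP_NO_SSLv2);
        return ctx;
    }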

Sunday, August 07, 2005

Zombie Hacker Survival!

I have just read Jon Erickson's excellent book, Hacking: The Art of Exploitation published by No Starch Press; followed closely by Max Brooks' The Zombie Survival Guide: Complete Protection from the Living Dead published by Three Rivers Press. Coming from someone who hates reading books, these are two select reads. The first was a detailed but introductory technical reference on exploiting programs, attacking the network, and encryption; the second was a humorous but valuable guide to zombies and how to defend against them.

Hacking leads the introductory C programmer straight into the realm of security. If you can code, you'd better buy Hacking now. Hacking is an invaluable introduction to why security holes exist, and how to abuse them. To the programmer, it's a quick smack upside the head with a police baton; now that you're fully awake, you can quit screwing up and start writing more secure code. To the aspiring security expert, it's a step straight towards where you want to be; not only does it tell you what kinds of attacks are possible, but it shows you how to discover them and write your own.

More than half of Hacking is dominated by its focus on exploiting ugly program code. Unlike common titles, which lay into overflows with high-level explanations and example programs, Hacking takes you to the beginning and drags you on your face to the end. You start with an example program containing a visible stack overflow, and an illustration of taking advantage of it. Like any other book on the topic, after a few short pages you've successfully taken root access for some unknown reason.

Before you know it, the author starts dumping output from gdb onto the page and explaining how he's sorting out the layout of the stack, finding addresses in the environment, and defeating security countermeasures by changing the file name of the program to a string of shellcode. By this time you could drop the book in your toilet and find yourself able to actually write the shellcode yourself, using a text editor and nasm. As you continue, you learn about how to hijack functions with changes to the GOT, utilize printf() to take over a program, and damage function pointers to constantly win skeletal gambling games.
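If the printf() trick sounds implausible, the entire bug class fits in a couple of lines: "%x %x %x" as input walks the stack, and "%n" writes to memory, which is the opening the book turns into a GOT overwrite.

    /* The whole format string bug class: user input as the format. */
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        if (argc > 1) {
            printf(argv[1]);        /* vulnerable */
            printf("%s", argv[1]);  /* correct    */
        }
        return 0;
    }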

Hacking includes two smaller sections. The first explains packet sniffing, spoofing, and hijacking the network for man-in-the-middle attacks with MAC spoofing and ARP cache poisoning; while the second enters into cryptology, password cracking, and breaking WEP using the flaw in the RC4 stream cipher algorithm as described by the FMS attack. In the scope of Hacking, these topics are less interesting; the author clearly covers them as helpful filler, but at a lesser degree of usefulness than the programming section. Still, for what they are, they do supply valuable introductory information for the inexperienced reader.

Hacking falls short of relating to the real world. When talking about buffer overflows, for example, it doesn't reference any worms such as Sasser or Blaster, both of which utilize buffer overflows to spread. It doesn't otherwise bring up history lessons of any sort. Hacking is also not a guide to securing your system; it doesn't dive into address space layout randomization or other systems that can be evaded only probabilistically, if at all.

Even without these, however, Hacking: The Art of Exploitation manages to clearly communicate its topic to the reader, and is a great read for anyone with programming experience. Even if you're not going to enter into the field of security, Hacking should be the first on your list as soon as you can manipulate strings and local arrays in C.

If you like Hacking, you'll like The Zombie Survival Guide. Besides giving your brain a break before moving on to Silence on the Wire, it may lead you to respect good physical security, including self defense, firearms, and physical barriers. The undead may not be here tomorrow, but it doesn't hurt to be ready.

The Zombie Survival Guide is a humorous piece, but a very deadpan one. It opens by detailing what exactly a zombie is: their characteristics; similarities and differences to the humans they once were; strengths, weaknesses, and fabrications; classes of zombie attacks by size; and how to recognize a zombie attack through government and media coverups. From there it goes on to educate the reader on how to run, defend, or attack when faced with zombies. The culmination is the final scenario of a Class 4 outbreak, zombification of the entire civilized world, and how to start fresh and begin taking the planet back.

The Zombie Survival Guide details weapons, defenses, and survival techniques for journeys and camping. It baits the reader with the most attractive weapons, such as explosives and chainsaws, and explains why they are poor choices, often due to mobility, fuel, or their unwieldiness given the precision needed to take down a zombie. It also details defenses impassable by zombies, such as high walls (zombies are too stupid to climb).

The Guide gives the reader the most critical information needed both to run from and to run into a zombie outbreak. Not only is long-term travel out of an infected area covered; the procedures for an offensive sweep of many environments are explained as well. Stealth, or the blatant disregard and even avoidance thereof, is critical depending on whether you want the undead to come to you or stay away. Vehicles can be either death traps or godsends, and knowing which to choose based on your goals, and which to avoid at all costs in favor of walking, is no problem for the reader.

Besides valuable military tactics, there is a detailed account of every known zombie outbreak at the end of the book. These can be quite entertaining as a history lesson, although bear in mind they're completely fabricated. Still, the accounts of heroics and tragedy in rapid passing make for an interesting and colorful read.

The Zombie Survival Guide: Complete Protection from the Living Dead is a humorous yet serious guide to protecting your well-being in the event of a small skirmish or an apocalyptic uprising of the undead. If taken with a grain of salt, it may not only amuse the reader, but also provide valuable information when adjusted for the combat considerations of more likely enemies. Remember: an intruder in your home is more likely to be alive than dead; don't count on him to beat on an unlocked door eternally because he can't figure out how to turn a doorknob, but by all means unload a carbine on him at the first sign of hostility.

Phresh Phish

I recently posted a bug on mozdev about TrustBar. TrustBar is an anti-phishing toolbar that tells you when the currently loaded https:// page is using a valid certificate, who verified it, and who it was verified as. This means that when you log into something like eBay or ThinkGeek, you're told that you are indeed logging into them.

What TrustBar will not do is check who a regular http:// page belongs to, validate the action target of a form, or look for cross-domain actions. Because of this, sites like PayPal, Amazon, Regions Bank, or Bank of America can raise false "unprotected log-in" alarms in TrustBar while actually submitting to a secure https:// CGI action.

The most vigilant users don't need TrustBar for these sites. They can tell they're being owned by simple factors such as deformed URIs that redirect through Google or by Firefox suddenly not filling in their username and password. As for the others, they'll develop a comfort zone with these sites, accepting that they're secure even though TrustBar false-alarms. During a real attack, they will ignore the alarms, as they're normal.

My bug explains this; details a theoretical attack (which has since been outdone; the elaborate javascript tricks I used are unnecessary, evidently); and gives several countermeasures that could be easily implemented by TrustBar 1.0. The author has courteously decided to mark this as INVALID, having decided that this is not a bug.

More pertinently, he failed to understand that my elaborate javascript mess is not necessary to execute a slick attack like this; he has stated that it would be nice to implement the related countermeasure, but that he can't see how to do it. Other, more useful countermeasures were completely ignored.

I have re-explained the danger situations, and how to correct for them with appropriate countermeasures. I also reopened the bug as an enhancement. I urge my readers (if I even have any) to actually read the long post top to bottom, and then post commentary urging the author to consider implementing these countermeasures. These phresh phish need to be skinned alive and I believe it can be done.