Thursday, January 27, 2005

Smoke and Mirrors Awareness Day

Today is Smoke and Mirrors Awareness Day! No, not really, but I'd like to point out various pieces of junk that pretend to be secure.

I've already gone over Microsoft's Data Execution Prevention and why it doesn't work without real Address Space Layout Randomization: a simple ret2libc attack can evade non-executable memory protections like those DEP and PaX provide, which is why PaX and other systems also employ ASLR.

Red Hat is also hyping its security. In an earlier post to the LKML, Red Hat submitted parts of Exec Shield for mainline inclusion, to add ASLR. These patches allow 64KiB of stack-base randomization and 1MiB of mmap() base randomization. I pointed out that this is not adequate, because small gaps in the stack are easily compensated for:

[...]|STACK---STACK---NONONOSHELLCODE
[...]|STACK---STACK------NONONOSHELLCODE
                       ----------^
                                 |-- You jump here in any case.

Brad Spengler of GrSecurity also had a few things to say about this randomization patch, pertaining to the extremely short brute-force cycle needed to break it. He also points out the maintainers' complacency with a glibc information-leaking bug. The bug has apparently been fixed, but the fix was never marked as a security update, so many users, including some Red Hat developers, are likely still affected.
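Brad's brute-force point is easy to quantify. A rough sketch of the numbers (the alignment granularity and sled sizes here are my own assumptions, not figures from the patch or from Brad's post):

```python
import math

STACK_RANDOM = 64 * 1024     # Exec Shield's 64 KiB stack-base randomization
MMAP_RANDOM = 1024 * 1024    # its 1 MiB mmap()-base randomization
ALIGN = 16                   # assumed alignment granularity of the random shift

# distinct stack bases an attacker must consider without a sled
assert STACK_RANDOM // ALIGN == 4096

def expected_attempts(rand_range, sled_bytes):
    """Guesses needed when each guess is covered by a NOP sled."""
    return math.ceil(rand_range / sled_bytes)

# a single guess aimed at a 64 KiB NOP sled always lands
assert expected_attempts(STACK_RANDOM, 64 * 1024) == 1
# even a modest 4 KiB sled cuts 64 KiB of randomization to ~16 tries
assert expected_attempts(STACK_RANDOM, 4096) == 16
# the 1 MiB mmap() randomization falls to a few hundred tries
assert expected_attempts(MMAP_RANDOM, 4096) == 256
```

A brute-force cycle measured in tens or hundreds of attempts is trivial for any automated attack, which is exactly the complaint.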

Interestingly enough, this is the same kind of problem OpenBSD has with its stack-gap randomization being easily evadable. The problem may be that they're more interested in confusing attackers than developing real solutions. That's not to say that they don't occasionally get things right.

So what kinds of things are real? Security auditing, for one. This is among the most basic security fundamentals: finding and fixing flaws. Projects such as the Debian Security Audit Project and Gentoo Linux Security Audit exist for this purpose. Packages such as SAL, Flawfinder, and RATS are created for automated auditing.

There are several distributions focused on real security, such as Adamantix and Gentoo. Ubuntu Linux may also be coming this way thanks to the efforts of the Hardened Debian project and, of course, the gracious concern of the Ubuntu lead developers.

The above distributions all use or are planning to use PaX instead of Red Hat's Exec Shield for executable space protections and address space layout randomization.

Adamantix, Hardened Gentoo, and even Ubuntu's experimental security-hardened kernels include GrSecurity to enhance the security of the system by randomizing various information important to attackers, such as PIDs and networking intrinsics, and by obscuring this information from non-privileged users. GrSecurity also hardens chroot() jails to prevent various break-outs, for example via mknod and mount, and supplies other restrictions to prevent tempfile races.

Adamantix and Hardened Gentoo supply a full PIE- and ProPolice-protected base. Ubuntu is planning ProPolice for a future release, and will likely move to a full-PIE executable base as well.

While Adamantix and Hardened Gentoo are not large community distributions, Ubuntu is a Debian-based distribution aimed at presenting a more user-friendly, desktop-appropriate environment. This puts Ubuntu in a position to show the community how to properly assess which security enhancements are appropriate, how to deploy them, and how effective and unobtrusive they can be in the user's environment.

As long as this community of related projects works together, continued security development should be expedient and efficient. Unilateral attempts to reinvent all security systems create a confusing environment where individual implementations have their own strengths and weaknesses, and no clear 'best' product emerges. A joint effort pools all development energy into a single project, correcting its flaws and enhancing it into the most polished final product possible.

Monday, January 24, 2005

GrSecurity as kernel hooks

I don't know why I did it, I don't know how I did it, but somehow I managed to refactor part of GrSecurity into a set of kernel hooks (only three right now) to implement GrSecurity with. I know Brad isn't going to go along with it, nor will Linus; it's purely academic.

It took me about 5 minutes to design a stacking mechanism that modules don't have to be aware of, and 10 minutes to implement it. It requires two more void pointers per loaded security module, and basically turns them into a doubly-linked list. This list is read-locked and iterated by the security module framework for every security call.

In this design, security calls don't block each other, because read-locks don't block other read-locks. A pending write-lock blocks all future read- and write-locks until it gains control and unlocks. In this way, inserting and removing modules is safe, and insertion and removal are the only blocking operations. This provides a fast, SMP-aware, self-stacking security framework.
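The locking discipline described above can be sketched in a few lines. I use Python here purely for brevity (the real thing would use the kernel's rwlock primitives); note the pending-writer counter, which is what makes a waiting write-lock block future readers:

```python
import threading

class ModuleListLock:
    """Writer-priority read/write lock mirroring the described design:
    hook traversal takes the read side; module insertion and removal,
    the only blocking operations, take the write side."""
    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writer = False
        self._want_write = 0    # a pending writer blocks future readers

    def acquire_read(self):
        with self._cond:
            while self._writer or self._want_write:
                self._cond.wait()
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()

    def acquire_write(self):
        with self._cond:
            self._want_write += 1
            while self._writer or self._readers:
                self._cond.wait()
            self._want_write -= 1
            self._writer = True

    def release_write(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()

lock = ModuleListLock()
lock.acquire_read()
lock.acquire_read()           # read-locks don't block one another
assert lock._readers == 2
lock.release_read()
lock.release_read()
lock.acquire_write()          # insertion/removal excludes everyone else
assert lock._writer
lock.release_write()
```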

The design I used also leaves any non-supplied functions initialized to NULL for each module. NULL functions are ignored by the framework instead of being called, so no dummy functions are needed. Because the framework also checks whether the grsecurity_ops pointer actually points to the head of a linked list of modules, the need for a dummy module is eliminated as well.

This leaves only the actual important security concerns, such as: how do we decide whether to deny or allow an operation? This is quite simple. Every module along the way has a chance to deny access. If any one module denies access, the check halts and an access error is returned. Access must be granted by all security modules, or it is not granted at all.
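The allow/deny walk is the simplest part of the framework. A sketch of that logic, again in Python for brevity; the hook name and signature here are invented for illustration:

```python
class SecurityModule:
    """One entry in the stacked-module list."""
    def __init__(self, name, check_open=None):
        self.name = name
        self.check_open = check_open   # None means "no opinion on this hook"

def check(modules, hook_name, *args):
    """Walk the module list; the first denial ends the check."""
    for mod in modules:
        hook = getattr(mod, hook_name)
        if hook is None:          # NULL hook: ignored, never called
            continue
        if not hook(*args):       # any single denial halts the walk
            return False          # an access error in the real framework
    return True                   # granted only if every module agreed

mods = [
    SecurityModule("base"),       # supplies no hooks at all
    SecurityModule("paranoid",
                   check_open=lambda path: path != "/etc/shadow"),
]
assert check(mods, "check_open", "/etc/passwd") is True
assert check(mods, "check_open", "/etc/shadow") is False
```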

I find myself asking why no upstream kernel developers have bothered to write this into LSM yet. Perhaps they simply don't want stacking; or perhaps those kernel devs are lazy. Maybe they're just not smart enough.

Stacking is important to allow multiple isolated modules to function together. GrSecurity's kernel security, chroot() jail modifications, and RBAC system could all be isolated modules functioning together on a security framework. Similar systems could be ported to LSM; but it'd be a waste of time without stacking. Many systems aren't mutually exclusive; they shouldn't be forced to be.

Apologies for this short, irrelevant post. I just haven't had anything to say in a while, and didn't want my readers to think I was dead.

Wednesday, January 19, 2005

There's no such thing as a local exploit

Some people feel safe knowing that they have zero or only one remote exploit, even though they have 400 local ones, because they're the only local user on their desktop box. This is a falsehood. Local exploits are just as dangerous as remote exploits, if not more so.

A remote exploit doesn't always mean root access. If an attacker exploits Firefox or XMMS, he can get local user access. This could be just a normal user account, but it's a way in. The more common social engineering tactics don't even need a remote exploit if the worm does something pretty, like display fireworks or pretend to be a screensaver.

This means that access to a local account, whether by remote exploit or by social engineering and worms, is fairly likely. Complex attacks aren't needed, nor are large amounts of stealth. Payloads don't need to fire the same day. All that's needed is one simple remote exploit into a user's account. Just slip a bit of a program in, make changes to the user's account, dump a worm in ~/.mozilla/firefox/plugins.p9, then crash Firefox gracefully, and nobody will suspect anything.

With the above attack on Firefox, the user predictably sees a bug in the browser and ignores it. The worm can now run when the user logs in, checking a site for encrypted plug-ins that let it attempt local privilege elevation exploits. This allows the worm to spread to other accounts or, ideally, to root. Since the plug-ins are encrypted, an IDS won't notice the malicious code. Base64 encode the encrypted data and it looks just like text.
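The point about encrypted plug-ins slipping past an IDS is easy to demonstrate: Base64 output, even when the input is random (or encrypted) bytes, uses only ordinary printable characters.

```python
import base64
import os
import string

ciphertext = os.urandom(48)        # stand-in for an encrypted plug-in
encoded = base64.b64encode(ciphertext).decode("ascii")

# every byte of the wire form is a plain printable character,
# so a pattern-matching IDS sees something shaped like ordinary text
b64_alphabet = set(string.ascii_letters + string.digits + "+/=")
assert set(encoded) <= b64_alphabet
```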

Even if the worm doesn't get root with a local exploit, it can spread around user accounts and gain a better operating position. Worm children can use AF_UNIX sockets or shared memory keys to communicate and operate distributed across user accounts to look less suspicious. They could even operate with a Firefox extension to do the networking tasks from the Firefox process.

Soon enough the worm can get a plug-in to gain local root using a local exploit. Once it has root, it can install a setuid binary similar to sudo so that it retains root access even after the vulnerability is fixed. This allows the worm to continue to get root from any account, and possibly to connect to the attacker and give him a root shell.

This is how a local exploit can be turned into the payload of a remote attack. It only requires an existing way in, which does not have to be a software vulnerability; simply riding along with a trojan horse through e-mail or Web downloads is a common way for worms to spread using social engineering, which can be thought of as an exploit of the user himself.

Many worms today have demonstrated not only social engineering tactics but combined remote exploit tactics. Worms which use multiple exploits or combined tactics, such as worms deployed through an MP3 or PNG file that sparsely infect other media files and swap themselves via e-mail or simply by riding along with the media in a file-sharing program, are becoming more prevalent. Because this won't normally gain root access, local exploits become very valuable to an attacker, even on a single-user system.

Monday, January 17, 2005

Most of it can be stopped now

To aid the Hardened Debian project, I wrote up an analysis of the Ubuntu Security Notice list. The results are nice to look at, and deserve a blog post. It appears that most USNs contain vulnerabilities which available protections can reduce to Denial-of-Service attacks, precluding privilege escalation with a simple crash.

Below is a table aggregating the analysis of the first 60 USNs. Blue bars show vulnerabilities which trigger available protections if used in an exploit. Red bars show vulnerabilities for which I know of nothing that protects against them.

Race/non-tmpfiles:  1
Lack of environment checks:  2
Integer Overflows:  5
Bad malformed data handling:  7
Buffer Overflows: 27
Race/tmpfiles: 11
Generic/Design:  5
Kernel (generic):  5
Kernel (BV):  1

The arrangement of the above graph is deliberate; it illustrates that security vulnerabilities are normal.

By using a combination of PaX, GrSecurity (which supplies PaX), and the IBM Stack Smash Protector, the threats in approximately 81.7% of notices can be mitigated. This leaves 23.3% of the notices containing threats we can't mitigate. The 105% total stems from the fact that some notices contain multiple vulnerabilities, often of different classes.
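The arithmetic behind those figures works out under one assumption: that 49 of the 60 notices contain mitigable threats, 14 contain unmitigable threats, and 3 notices contain both. These counts are my reconstruction from the stated percentages, not figures taken from the original analysis.

```python
TOTAL = 60                         # USNs analyzed
mitigable, unmitigable, overlap = 49, 14, 3   # assumed counts

# the three-notice overlap makes the two groups cover all 60 notices
assert mitigable + unmitigable - overlap == TOTAL

assert round(100 * mitigable / TOTAL, 1) == 81.7
assert round(100 * unmitigable / TOTAL, 1) == 23.3
# the two figures sum to 105% because 3 notices are counted twice
assert round(100 * (mitigable + unmitigable) / TOTAL) == 105
```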

The most distressing thing I've encountered during this task was the increasing number of kernel vulnerabilities. Up to USN #55-1, there were three kernel vulnerability notices total, accounting for 5.4% of notices. Between USN #55-1 and #60-1, that number doubled. With the extra three kernel vulnerability notices, a smooth 10% of USNs now involve kernel exploits.

Kernel exploits, for the most part, cannot be protected against proactively. When they can, the protection involves detecting the attack and panicking the kernel, bringing the entire system down. This means we can't rely on proactive security in the kernel, because the results aren't even remotely acceptable as a temporary fix. Unless there's extremely confidential information on site, an easy DoS that takes the system down isn't much better than an intrusion, though it may be marginally so.

The problem, however, is not just that the only protection we can afford is wildly disruptive, but that we'll never be able to protect against all kernel bugs. This, coupled with the accelerating rate of kernel security vulnerabilities, creates a looming cloud of evil surrounding the Linux kernel. We must prevent bugs from happening.

The only way to fix kernel bugs is to audit the kernel, find them, and destroy them. Various audit projects exist, and they need to keep a portion of their focus on the Linux kernel and on submitting patches. It may be most prudent to devote a predetermined level of resources specifically to auditing the Linux kernel source code.

This again ties into something I've blogged about before. I had given reasons for switching to a new kernel development model, and even made a loose outline of a potential simple change to create an enterprise-ready development model based on the current one. I believe now that some change may be in order.

The change to my previous suggestions for a new model is quite simple. Upon release of a fresh Stable tree (for our example, let's say 2.8), the tree would be dubbed 2.8.0-audit0. An audit group would make a pass over all of the patches that went in between 2.6 and 2.8.0, periodically releasing any security fixes found to be needed. Once the audit pass was finished and the audit group was satisfied, the final -audit release would become 2.8.0.

This would delay an official x.y.0 release; however, the pre-audit releases would be for the most part Stable-ready, in the same sense that every 2.6 point release is considered "stable" now. Distributions and external audit groups would predictably chip in to accelerate the audit process as well. Adventurous desktop users and developers could get straight to work on the -audit0 release anyway.

The reason for using -audit rather than simply doing periodic point releases is that with point releases, it is not obvious when a single audit pass has finally been made. The audit may be finished in .1, .5, or .15, depending on how many bugs are found and how often they incur a new point release.

Many security issues can be handled now with technology we have available. I'm predicting that Ubuntu Linux will be rolling out a high-security distribution within a year; however, it could go longer than that. Still, the threat of kernel security vulnerabilities cannot be easily mitigated, and remains the most severe threat any system can face.

Friday, January 14, 2005

Time for a new Linux kernel development model

Earlier, I had discussed why changes to the Linux kernel development model were needed to better support third party development. Now I feel it's time to sit down and discuss what key issues the current development model raises.

Brad Spengler of GrSecurity criticizes the Linux 2.6 development model in a post to Bugtraq. He gives the following expression of his distaste for the model in that message:

I'd really like to know what's being done about this pitiful trend of Linux security, where it's 10x as easy to find a vulnerability in the kernel than it is in any app on the system, where isec releases at least one critical vulnerability for each kernel version. I don't see that the 2.6 development model is doing anything to help this (as the spectrum of these vulnerabilities demonstrate), by throwing experimental code into the kernel and claiming it to be "stable". Hopefully now these vulnerabilities will be fixed in a timely manner.

While all patches added to the kernel are considered to be in their stable release cycle, stable releases always have lurking bugs. With the widespread use of a mainstream release, software is put through testing several orders of magnitude more rigorous than its beta releases or small-scale stable releases ever saw. This almost always exposes existing but previously unnoticed bugs.

Security flaws are bugs. Some bugs are memory corruption bugs or information-leaking bugs, which can create a situation where confidential information is accessible by non-privileged users. Because of the current Linux 2.6 development model, more bugs are predictably introduced into mainline, and more security holes with them.

Maintaining a truly stable series would continuously drive down the number of bugs in that series. This would give vendors and consumers a viable option for a more stable and more secure codebase, rather than one in which updating to get bugfixes may mean encountering new bugs. Maintaining a series in which only bugfixes are allowed may carry low enough maintainer overhead to allow several stable series to be supported at once, creating a long support cycle.

With a scheme in which the odd-major development series (2.7, 2.9, etc) were managed in the exact same way as the current 2.6 series is now, widespread use of the development or "Volatile" series would still be likely, as it would be stable enough for desktop use in many cases. This would facilitate the widespread use of a continuously evolving series, and contribute to the decay of the bug count.

Linus supports third party kernel development. In a conversation on the Linux Kernel Mailing List, Linus points out that he likes that different vendors have different objectives, and that they implement them in their own way:

I just think that forking at some levels is _good_. I like the fact that different vendors have different objectives, and that there are things like Immunix and PaX etc around.

The current 2.6 development model conflicts with this, however. Changing core functionality and APIs can stall the development of some third party patches. PaX, for example, was stalled on 2.6.7, and didn't have testing patches until just before 2.6.10 was released.

With a truly stable series, core functionality and APIs would remain constant for a predictable cycle. This would allow developers to focus on developing their product and not chasing a volatile codebase because of bugs and security flaws. Third party development could occur more rapidly this way; major up-porting would not have to be done for each minor release of the kernel.

Third party projects aiming for mainline inclusion would still have to track the development branch. These projects could, however, defer that stress until they had a product ready for mainline inclusion.

Andres Salomon has announced the 2.6-as series to the LKML. This series supports 2.6.x kernels for a limited time with periodic 2.6.x-as(y) releases. These releases will contain bugfixes and security fixes only, and especially not driver updates, large subsystem fixes, and cleanups.

Salomon can only support a limited number of kernels. Because there is generally a new release every 1-2 months, the -as series covers only a short support cycle. Salomon has said he is willing to accept patches from distribution maintainers to continue supporting older series for long periods; however, he won't be actively searching for and backporting security fixes for more than two or three kernels. One man can only do so much.

It can be argued that the 2.6 model is a step forward rather than a step back, despite its criticism. I believe this is true. The 2.6 development model shows the evolution of the Linux kernel's development scheme into a more efficient form which more rapidly brings new innovations and new development to the user.

Whereas the old model used to create and horribly break a development branch, then spend several months to a year trying to fix it before release, the current 2.6 model merges only working code, producing a constantly progressing but usable codebase. I believe that the further evolution of moving this growth back into a development or "Volatile" branch will bring a better balance to the process without sacrificing its current raw efficiency.

Thursday, January 13, 2005

A DEP evasion technique

In an earlier post, I pointed out a possible way to evade Data Execution Prevention in Microsoft Windows XP Service Pack 2. That method deserves its own post, so I'll expand on it here.

I'd like to first point out that this is a speculative method to evade hardware-enforced DEP, based on various documentation. There is not yet a proof-of-concept, but that does not mean there is no vulnerability. I will post a short follow-up if and when a PoC is available, or if it turns out that my analysis was wrong.

This method applies to any system where proper protections on memory can prevent it from being executable, whether by hardware facilities or software emulation, if and only if those systems do not employ appropriate countermeasures such as memory protection restrictions (mprotect() or VirtualProtect()) or Address Space Layout Randomization.

This means that systems such as PaX, Exec Shield, and W^X are not vulnerable. PaX supplies high-quality ASLR and mprotect() restrictions on Linux, while Exec Shield and W^X both supply ASLR for at least shared libraries. The technique still applies, however, if certain information leaks (/proc/[pid]/maps) are not obscured.

The original problem that these memory protections were deployed to solve is shellcode injection. Some vulnerabilities, such as those in US-CERT Technical Alerts TA04-315A, TA04-260A, and TA04-293A, lead to arbitrary code execution. While in these cases upgrading to Service Pack 2 brings fixes to Internet Explorer, future vulnerabilities similar to these will not be prevented by DEP itself.

There are two reasons why DEP can be evaded. First, the VirtualProtect() function can still be called with any protections. At the time of this writing there is no restriction on VirtualProtect(), so arbitrary memory can be made executable, or executable and writable.

Second, there is no ASLR, which makes locating the address of the VirtualProtect() function both easy and reliable. Even if VirtualProtect() were properly restricted, CreateFileMapping() and other functions could be used with open() and write() to simply write the data to a file and map it back in as executable.

Additionally, VirtualAlloc() and memcpy() could be used, since "VirtualAlloc can commit [(allocate)] an already committed page." It will seriously corrupt memory, but this is already a memory corruption attack so who cares?

To explain this exploit, we'll start with a normal proof-of-concept overflow. eEye Digital Security discovered a vulnerability in USER32.dll allowing animated cursor files to cause a buffer overflow and execute arbitrary code. A proof-of-concept was later released by Assaf Reshef to demonstrate this vulnerability.

This proof-of-concept falls into a class that would be stopped by DEP. It uses a buffer overflow to inject code into the stack and modify the return pointer to execute that code. Upon execution, the CPU faults because the memory area is not executable, and Windows is able to stop this exploit on Service Pack 2 on supporting processors.

Below I describe a hypothetical modification to the above-cited proof-of-concept exploit for this particular overflow. The exploit as described has not been written or tested; it is purely theoretical.

The process can be modified to inject a modified set of data during the overflow. This data would contain a modified stack frame pointer (SFPT), a return pointer (RETP), a stack frame, and a block of payload shellcode.


The SFPT would point at the injected STACK FRAME, and RETP would point to VirtualAlloc(). The STACK FRAME would hold the next return pointer and an appropriate argument layout for a call to VirtualAlloc().


Upon RET from the overflowed function, the call to VirtualAlloc() would be made, allocating an area big enough for the shellcode with protections PAGE_EXECUTE_READWRITE. This leaves the area readable, writable, and executable all at once. Because VirtualAlloc() will allocate over top of already-allocated memory, REMOTE_BASE need only be some remote address not near VirtualProtect(), memcpy(), or the injected stack frames and shellcode.

Because the stack frame for the call to VirtualAlloc() was part of the initial overflow, the attacker has complete control of its contents. The return pointer in that stack frame should therefore point to memcpy(), with a proper pointer to a second stack frame, STACK FRAME2. This means that, upon RET, memcpy() is executed.


This copies SHELLCODE into the newly allocated area of memory. Again, the attacker has complete control over the stack frame, so on the final RET, control transfers to SHELLCODE and it executes.
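The whole chain can be sketched the way proof-of-concept exploits are usually written, with Python's struct module. Every address below is invented for illustration; a real exploit would use the fixed, unrandomized addresses of VirtualAlloc() and memcpy() exported by the target's system DLLs.

```python
import struct

p32 = lambda v: struct.pack("<I", v)    # little-endian 32-bit word

# invented addresses, for illustration only
VIRTUALALLOC = 0x7c809af1   # hypothetical address of VirtualAlloc()
MEMCPY       = 0x7c901db9   # hypothetical address of memcpy()
REMOTE_BASE  = 0x10101010   # hypothetical remote scratch address (NUL-free)

MEM_COMMIT = 0x1000                 # real VirtualAlloc() flag values
PAGE_EXECUTE_READWRITE = 0x40
shellcode = b"\xcc" * 32            # placeholder payload

payload  = b"A" * 64                # filler up to the saved frame
payload += p32(VIRTUALALLOC)        # RETP: first RET lands in VirtualAlloc()
payload += p32(MEMCPY)              # VirtualAlloc()'s return address: memcpy()
# STACK FRAME: arguments for VirtualAlloc(addr, size, type, protect)
payload += p32(REMOTE_BASE) + p32(len(shellcode))
payload += p32(MEM_COMMIT) + p32(PAGE_EXECUTE_READWRITE)
# STACK FRAME2: memcpy()'s return address (the copied shellcode) and its
# (dest, src, n) arguments; the src stack address is elided as 0 here
payload += p32(REMOTE_BASE)
payload += p32(REMOTE_BASE) + p32(0) + p32(len(shellcode))
payload += shellcode

assert len(payload) == 64 + 10 * 4 + len(shellcode)   # 136 bytes total
```

This toy layout ignores the NULL-byte constraint discussed next; a working payload would need NUL-free addresses and lengths throughout.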

When SHELLCODE is executed at the end of this process, it has been copied to a newly created executable area by existing code supplied by the Windows operating system. This means, as stated above, that SHELLCODE can safely be executed without DEP interfering. This attack method should be plausible for any attack in which shellcode is injected, and is compatible with older, non-DEP Microsoft Windows systems as well.

Note that in buffer overflows involving strcpy() and related functions, the original overflow string must not contain NULL characters: the copy stops at the first NULL, and the rest of the payload never reaches the stack. Access to ASCII-armored areas (addresses containing a NULL byte) will not normally be possible, although there may be ways to load the heap with prepared data, such as by loading certain data files or running certain scripts.

The NULL-byte dilemma may be evadable if a UUE, Base64, or MIME decoding function is available and does not start at an ASCII-armored address. In that case, the first return can be a return-to-UUDecode() that decodes the rest of the attack, which then continues. The UUDecode() address and stack frame must not contain any NULL bytes for this to work.
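The NULL-byte constraint can be demonstrated against the C library's own strcpy() via ctypes. This assumes a Unix-style libc is reachable through `CDLL(None)`; the behaviour is the same for the Windows CRT.

```python
import ctypes

libc = ctypes.CDLL(None)                   # the process's own libc (Unix)
dst = ctypes.create_string_buffer(64)

clean = b"AAAA\xf1\x9a\x80\x7cBBBB"        # fake payload with no NUL bytes
libc.strcpy(dst, clean)
assert dst.value == clean                  # the whole payload arrives

armored = b"AAAA\x00\x10\x40\x7cBBBB"      # an address containing a NUL
libc.strcpy(dst, armored)
assert dst.value == b"AAAA"                # the copy stops dead at the NUL
```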

In conclusion, Microsoft's Hardware DEP protection does not prevent future exploits from being successful; it only adds a trivial amount of complexity to the attack. I believe that any attacker able to create the exploit as it would normally work will be able to handle the less complex task of incorporating a return-to-VirtualAlloc() and memcpy() attack into the process. This could only be properly protected against by incorporating Address Space Layout Randomization into the protection scheme.

Wednesday, January 12, 2005

Review of Microsoft's DEP

After reading through a page on memory protection changes in Windows XP SP2, I e-mailed Microsoft's Technical Support for XP SP2 and requested confirmation of my understanding of the product. This resulted in a support contact sending me a link to a detailed description of DEP. The given page doesn't conflict with or add to my understanding of DEP, so I assume I have a fair grasp of how it works.

DEP provides fairly basic memory protections, similar to the ancient POSIX memory protections established back in the 1970s-1980s, when UNIX was invented. Microsoft has just caught up with 30-year-old technology, although the Virtual*() functions have always been defined as if they actually worked. POSIX systems, meanwhile, have long run on platforms with NX capabilities, such as SPARC.

No emulation of an NX bit is done by Windows at all. Back in 2000, the plex86 developers pointed out a curious situation in which an NX bit could be emulated on a standard 32 bit x86 CPU. This method would be slow and complex, involving using the separated ITLB/DTLB logic to allow the OS to control which pages could be used for instructions and which could be used for data.

The PaX Team implemented the plex86 suggestion as a proof of concept by October of the same year; then later enhanced the implementation, and went on to create a new method which was much faster. Later, in 2003, OpenBSD released W^X, and Ingo Molnar of Red Hat released Exec Shield. These use a much simpler and faster, but inaccurate, method of NX emulation which sometimes loses protections.

Microsoft has made no advancement with DEP in this sense. The functionality DEP purports to supply already exists in all versions of Windows. The only difference is that newer processors supply an NX bit for Windows to use. There are numerous ways in which Microsoft could have emulated or partially emulated an NX bit for older CPUs.

DEP does not protect against unsafe VirtualProtect() usage. DEP itself can be controlled per program, but no such per-program restriction on VirtualProtect() exists. It makes sense to allow memory to be made X|W for some programs, like JIT compilers and realtime CPU emulators; but most programs shouldn't need that functionality.

PaX is the only system right now which supplies mprotect() restrictions. These restrictions prevent any memory from being marked Writable and eXecutable at the same time. They also prevent any memory from transitioning from -X to +X; executable memory must be created +X against a file resource containing executable code.

The concept of least privilege is at the heart of any security setting, so this restriction deserves some consideration. Certain attacks do not need to inject code; they can instead alter the program so that it naturally flows to existing code. These alterations could include using VirtualProtect() or mprotect() to make the stack executable, followed by a return to code on the stack.

This sequence could be performed by overflowing a buffer with a large amount of data: an overwrite of the return pointer and stack-frame pointer to call VirtualProtect() or mprotect(), a stack frame for that call, and executable code. This allows evasion of DEP in one step, by making the injected code executable in the same pass.
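The -X to +X transition that DEP permits and PaX forbids can be shown directly with the POSIX equivalents. A sketch, assuming a stock Linux system reachable through ctypes; under PaX's mprotect() restrictions the final call would fail with EPERM instead of succeeding:

```python
import ctypes
import mmap

libc = ctypes.CDLL(None, use_errno=True)

# take an ordinary read/write anonymous page and "inject" code into it
buf = mmap.mmap(-1, mmap.PAGESIZE,
                prot=mmap.PROT_READ | mmap.PROT_WRITE)
buf.write(b"\x90" * 16)            # stand-in shellcode, written while +W

# now flip the same page executable, exactly the step DEP never restricts
addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
PROT_READ, PROT_EXEC = 0x1, 0x4    # Linux protection flag values
ret = libc.mprotect(ctypes.c_void_p(addr),
                    ctypes.c_size_t(mmap.PAGESIZE),
                    PROT_READ | PROT_EXEC)
assert ret == 0                    # stock kernel: the page is now executable
```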

DEP has no Address Space Layout Randomization. With ASLR, the address of the VirtualProtect() functions is different for every run of the same program. This helps to prevent the above described attack and any other attack relying on executing existing code, unless information about the address space can be leaked to the attacker during the attack.

PaX provides ASLR for the stack, the heap, any mmap() base (including PIC/PIE executables and libraries), and optionally for non-PIC executables. Basically everything is placed randomly and has to be located by reading the Global Offset Table. The GOT's address is kept in a register, so it can't be examined by classical format string bugs. There are also patches, such as the one included with GrSecurity, to prevent the address space layout from leaking.

Exec Shield and W^X also supply limited ASLR. While the mprotect() function can still be abused in the way described above, an attacker will have to take a blind guess at where that function lies. Note that local access to an Exec Shield protected Linux system still allows address space information to be leaked through /proc/[pid]/maps unless an obscurity patch like the one in GrSecurity is used.

Software DEP and SafeSEH don't look too useful. It appears that Microsoft is trying to prevent people from changing exception handlers; however, replacing an exception handler and triggering it is a more complex attack than the one-step return-to-VirtualProtect()/code-injection sequence, which requires only one overflow, and ASLR would probably protect against it anyway. It is important to note, however, that I'm not quite sure what problem SafeSEH was trying to solve.

When I started, I intended to give DEP a good review. Proper use of the NX bit is a great step forward in security, especially in advanced implementations such as PaX. On closer examination, however, I've noticed that it's theoretically very simple to evade DEP in a single step. This won't stop any half-serious blackhat cracker. Any attack that some other system (like Stack Smash Protection) doesn't already stop should remain possible with one slightly modified step.

Monday, January 10, 2005

Hardened 2.6.10 kernel from Gentoo soon

Linux users fall into three basic groups when it comes to the kernel. First, there are those who just use what the distribution supplies. Second, there are those who choose or build their own patchset. And finally, there are vanilla users who just grab the latest release from kernel.org and use that.

From a security standpoint, it is usually better to use a patchset, whether it be your distribution's default kernel, something elaborate like Con Kolivas' or Alan Cox's patchsets, or your own rolled on top of those. These patchsets usually contain numerous fixes for security vulnerabilities discovered after the stable release; and with the 2.6 development model, waiting for the next release could always mean more potential bugs.

The Hardened Gentoo team supplies a kernel on Gentoo Linux known as hardened-dev-sources. With the upcoming PaX release and a new GrSecurity, h-d-s is finally moving up to 2.6.10. This brings several security fixes and enhancements.

The new h-d-s includes the 2.6.10-ac8 patch, which brings numerous hardware and security fixes to 2.6.10. This fixes a number of vulnerabilities, including several found by the GrSecurity and PaX developers. The random poolsize sysctl integer overflow, RLIMIT_MEMLOCK bypass DoS, and SCSI IOCTL integer overflow are all fixed, as well as many others.

Capabilities are flags assigned to programs that give administrative access to the system, such as CAP_SYS_BOOT (shutdown/reboot) and CAP_SYS_MODULE (load modules). Included in h-d-s are LSM capability fixes to a local root exploit and a patch to enforce common sense in kernel configuration. The first patch is a vulnerability fix, while the second prevents capabilities from being built as a module. All standard Linux systems use capabilities as a major control barrier.

Many programs with privileged (root) access drop capabilities when they're loaded to reduce the damage possible if they are hijacked. When the capability module is loaded, all running privileged programs are given all capabilities. Programs which would have dropped caps are now running with all privileges. This means that any init scripts and services started before the capabilities module is loaded may suffer privilege elevation relative to their normal running mode. For this reason, capabilities should be built in if they are going to be used.
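As an illustration (my own sketch, not part of h-d-s), a process's effective capability set can be inspected from userspace via /proc/self/status; the capability numbers below follow linux/capability.h:

```python
CAP_SYS_MODULE = 16  # may load and unload kernel modules
CAP_SYS_BOOT = 22    # may reboot or shut down the system

def effective_caps(status_path="/proc/self/status"):
    """Return the process's effective capability bitmask from /proc."""
    with open(status_path) as f:
        for line in f:
            if line.startswith("CapEff:"):
                return int(line.split()[1], 16)
    raise RuntimeError("CapEff line not found")

def has_cap(mask, cap):
    """True if capability number cap is set in the bitmask."""
    return bool(mask >> cap & 1)

if __name__ == "__main__":
    mask = effective_caps()
    print(f"CapEff = {mask:#x}; CAP_SYS_BOOT = {has_cap(mask, CAP_SYS_BOOT)}")
```

An unprivileged process should report both capabilities as unset; a privileged service started before the capability module loads is exactly the kind of process that would report everything set.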

A local DoS in the i810 DRM (CAN-2004-1056) code was also fixed in h-d-s, preventing users from crashing X and displaying odd things on the screen. Support for a.out binaries is also disabled by default; modern Linux distributions run a pure ELF system. And of course, h-d-s features GrSecurity.

The h-d-s kernel also features netdev-random, a patch to gather entropy from network interrupt timing. Network interrupts occur whenever the network card receives data. The path of a packet between a server and a client is often littered with routers interacting with other network requests, and can be affected by electromagnetic noise as well. It is fairly infeasible that network interrupt timing could be significantly manipulated externally, and so this should be a good alternate source of entropy for /dev/random.
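The kernel exports the pool state under /proc, so the effect of an extra entropy source is easy to watch. A minimal Linux-only sketch (the file names are the standard ones; the interpretation is mine):

```python
def random_sysctl(name):
    """Read one integer value from /proc/sys/kernel/random/."""
    with open(f"/proc/sys/kernel/random/{name}") as f:
        return int(f.read().strip())

if __name__ == "__main__":
    avail = random_sysctl("entropy_avail")
    size = random_sysctl("poolsize")
    print(f"entropy pool: {avail} of {size} bits available")
```

On a starved pool, reads from /dev/random block; a patch like netdev-random is doing its job if entropy_avail stays comfortably above zero under load.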

Aside from that, various hardware fixes—especially for sparc—are in h-d-s, along with VM patches to make the OOM killer more friendly; IP connection tracking fixes; squashfs; and a fix to the Deadline I/O Scheduler. Overall, the upcoming h-d-s release for 2.6.10 looks to be well rounded. It will be released soon, so Gentoo users should keep an eye out.

Sunday, January 09, 2005

A living kernel

Recently, Jake Moilanen announced a set of patches to add a genetic algorithm library to the Linux kernel. These patches supply functionality to experimentally modify the kernel's behavior and tune it for peak performance. The base patches can be combined with patches that apply the algorithms to the Anticipatory I/O Scheduler and the CPU scheduler (zaphod patch required).
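For readers unfamiliar with the technique, the core loop of a genetic algorithm is tiny. The sketch below is purely conceptual, not Jake's kernel code: it tunes a single numeric knob by keeping the fitter half of a population and refilling it with mutated copies of the survivors:

```python
import random

def evolve(fitness, lo, hi, pop_size=20, generations=50, seed=1):
    """Minimal genetic algorithm over one numeric tunable: keep the fitter
    half of the population, refill with mutated copies of the survivors."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]
        children = [min(hi, max(lo, p + rng.gauss(0, (hi - lo) * 0.05)))
                    for p in survivors]
        pop = survivors + children
    return max(pop, key=fitness)

# Toy "tunable": pretend throughput peaks when a timeslice is 42.
best = evolve(lambda x: -(x - 42.0) ** 2, lo=0.0, hi=100.0)
print(f"best timeslice found: {best:.1f}")
```

In the kernel patches the "fitness" is a measured performance metric and the "genes" are scheduler tunables, but the select-mutate-iterate loop is the same idea.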

The last time I tried a scheduler patch, it was with Con Kolivas' patchset. I had mixed this with PaX, and gotten nice performance results. Unfortunately, a little probing with `pspax` showed that PaX was no longer working, due to the invasive changes of the CK patches. This isn't a flaw in Con's work or in PaX, but rather in the fact that neither patch was made to function with the other. This is not an unreasonable situation; every piece of non-mainline code cannot be expected to function properly with every other piece of non-mainline code.

With the genetic algorithm in the kernel, though, PaX seems to work. There have been no ill side-effects caused by mixing the two, and Jake's patches even seem to be doing some good. I crossed them with the -test17 PaX patch for 2.6.10, and immediately ran `paxtest` upon rebooting as well as `pspax`. Both programs showed that PaX was performing as expected. My programs appear to start faster, so I assume the GA is working.

This brings up a simple but interesting point: whenever combining patches, it is important to look for regressions. With security patches, other patches may cause added security layers to cease functioning properly, which, left unchecked, creates a false sense of security and potentially more unnoticed security holes. It would even be possible for two patches to combine to create a security hole; one patch could introduce a vulnerability without any way to trigger it, while another could supply a path that triggers it.

In the case of PaX and the GA library, everything appears to be working fine. I can't comment on anything that would require a code audit; but everything seems functional. I've been running for almost seven hours with no problems. I'm looking forward to Jake's continued work; there have been many suggested enhancements to the GA and its uses already, including diploid algorithms, hillclimbing, and swap tuning. Jake seems to be very willing to experiment, which is appropriate seeing as his patch does exactly that continuously.

Tuesday, January 04, 2005

Ubuntu Technical Board Meeting 2005.01.04

Ubuntu Linux has a strong community structure which involves the community in the development process via two types of meetings. Ubuntu Linux meetings take place every week in #ubuntu-meeting on Freenode, and alternate between Technical Board and Community Council meetings.

Today was the Technical Board meeting, a discussion of Ubuntu Linux's technical direction. The meeting was mostly a wash for Proactive Security, although some discussion took place. My own contribution, of course, consisted of being horridly out of the loop and having difficulty following the direction of the conversation.

The Technical Board opened at 16:00 UTC with the Proactive Security discussion. For the most part it was a stalemate: not much was actually established, but some good discussion went on, mostly concerning the deployment of Stack Smash Protection. The general consensus is, of course, that SSP won't be in Hoary, but that development will be pursued post-Hoary. There was also some talk of making two Main trees, one with SSP and one without, for testing purposes. Finally, it was decided to defer an "official" statement until Ubuntu's work on these things was more mature.

So no SSP for Hoary; we knew this already. If we're lucky, the split Main tree will spin on Hoary at first, though I have a feeling it's more likely the tree will follow Development. No worries: it's T-minus less than a year before some sort of functional SSP branch should be supported.

The splitting of Main is alright. Once it's shown that the SSP branch is well polished and stable, Main will most likely move to a single, stack smash protected tree by natural selection. Maintaining two Main trees is a waste of time, and so good results will bring favor to the SSP version.

As for the official statement, it will come once Ubuntu's progress in this area is more mature. So, nothing official yet, but definitely lots of interest. I doubt that proactive security will actually be dropped in Ubuntu, given developer interest and the potential benefits to the user base.

Martin Pitt has been collaborating with the Hardened Debian team due to his interest in enhanced security. He has released hardened kernels with PaX in them, although a few things seem broken, such as XFS. The Hardened Gentoo kernels work fine with XFS, so I'm confident that the bug can be worked out.

I get the feeling that I should have found out what was going on before going in there. On the bright side, though, the Technical Board has recommended that trulux and pitti propose and form a team for Proactive Security during next week's Community Council meeting. Overall, I still feel like I'm starting to hinder progress, and so am stepping back to watch for the moment.

Monday, January 03, 2005

Finally a new PaX

PaX has been stuck at Linux 2.6.7 for a while now. The author has been fairly active on the 2.4 branch, but 2.6 has been too volatile. Between 2.6.7 and 2.6.8, major VM changes went in which changed how PaX had to be written, and the release of a new PaX for 2.6 has been delayed ever since, although the 2.4 branch still gets regular releases.

The PaX Team needs to do more work than just up-porting PaX through various kernel releases. PaX is still lacking in a number of features that fit in with its design goals, and the author can't magically make these features appear. These things take time and effort, as parts of the kernel have to undergo invasive changes to allow for such things as kernelspace NX emulation to protect the kernel itself on x86 architectures.

Most up-ports of PaX are simple: PaX applies fine aside from some fuzz and the occasional failed hunk, which can be merged manually in a few minutes. Because of this, the PaX Team continues to up-port PaX to newer versions of the stable branch of the Linux kernel. When this does not hold true, however, the team has more important matters at hand than a several-week-long coding and testing cycle purely to move PaX to the next point release, which will probably last only a few weeks to a few months before the next one arrives.

Because they're so nice, the PaX Team has finally started to catch up to the 2.6 stable branch. A few weeks ago, 2.6.9 and 2.6.10 patches started to trickle into the hands of a few people who regularly communicate with the team. I've had a good level of success with both sets, although I located an outstanding bug in ET_EXEC base randomization on x86-64 (Athlon64). My binaries are ET_DYN, though, so I simply disabled ET_EXEC randomization. I haven't tried that with 2.6.10, although I've run up to the -test9 PaX patch with the randomization disabled and hit no problems.

There should be 2.6.9 and 2.6.10 patches for PaX soon. No official statements yet, but they run fine on my end. Spender of GrSecurity has been following The PaX Team and should have a GrSecurity patch out shortly after the new PaX is released. Tests for that are also available, though I forget where and haven't been following Gr. This will open the way for a hardened-dev-sources-2.6.10 on Gentoo; currently the 2.6.7 kernel that the Hardened team is maintaining is a handful of security enhancement patches and a huge pile of security bugfix patches.

I believe this is a good time to discuss why exactly the Linux kernel development process is both brilliant and flawed, and how to fix it. I have not deeply examined the current development policy, but I believe that I understand the basic concepts, and that if better applied a greater development scheme can be pursued.

The reason I believe we need a new development scheme is that, as demonstrated here, new development must follow mainline Linux. If heavy testing and community acceptance weren't needed for an honestly good shot at mainline integration, this would be less of a problem. Unfortunately, for various reasons, many projects need to stay up to date with mainline.

A good example of having to stay in time with mainline has already been shown here. The PaX Team is not ready to go to mainline for their own reasons, possibly due to the set of unimplemented features planned for the future, such as better kernel protection. PaX is, however, ready in the eyes of many for production deployment, and is a part of Adamantix, Hardened Gentoo, and Hardened Debian. Thus it becomes important for these projects that PaX stay up to date with mainline. The relationship is symbiotic: PaX gets a great deal of testing, while these distributions gain a higher grade of security.

The same trends may be followed with experimental schedulers, realtime Linux efforts, new drivers, rewrites of other core kernel components, and the like. These efforts may be ready for mainline, but may be mutually exclusive, colliding with each other even when they pursue completely different goals. PaX, for example, collides with anything doing major modifications to memory management, such as enhancements to the RMAP scheme or API changes. Collisions between ready and unready technologies may hold back the unready ones by forcing them to rework internals to match the new codebase, creating new bugs without fixing old ones.

To counteract this effect, I decided to examine what must be guaranteed and what must be considered. Taking this into account, I believe the current development scheme can be adjusted, yet remain largely intact, while still solving the problems it currently poses.

We must guarantee that the codebase remains fairly static for long periods of time, yet still gets the bugfixes it needs, both general and security related. A codebase without a unified bugfix base means that security holes and instabilities persist for long periods; this is unacceptable. Most bugfixes are unintrusive, and so satisfying this can be done directly without causing much of a problem.

We must also guarantee that new drivers, new systems, and new advancements reach public stable release in a timely manner; a regular release cycle can satisfy this reasonably. Commercial operating systems often have release cycles of two to five years, with bugfixes in between. A shorter release cycle could bring advancements in a reasonable period without stressing third-party developers excessively.

We must consider that older releases must be maintained. Somebody has to maintain bugfixes in older releases, and a guaranteed, reasonable duration of maintenance must be set. The number of releases to maintain is a hard limit; the length of the release cycle must be adjusted so that any given release is supported for an acceptable timeframe.

Taking these factors into consideration, I have sketched out a skeletal development model. The model is aimed at producing a stable, unchanging codebase for third party developers; producing a timely release cycle to bring new developments to the community quickly; incurring minimal load on maintainers for the upkeep of older releases; and producing long-lived releases for situations where the kernel is not frequently slated for a major upgrade.

The first major change is a strict separation of Volatile and Stable branches. The current 'stable' branch, 2.6, undergoes massive changes with any working, relatively bug-free advancements. This sort of behavior should go into odd-numbered releases. I believe this is much better than a model in which a "development cycle" lumps heaps of patches together and then picks out the bugs. The result is that the bleeding-edge "Volatile" branch becomes a realistically usable development branch, and can be released as "Stable" at any point.

With this separation, the Stable branch would be virtually untouched. Only bugfixes and possibly very unintrusive drivers would be acceptable for Stable. Drivers like Reiser4, which modified parts of the core filesystem interfaces in the kernel, would not be acceptable; whereas drivers such as SquashFS, which modifies a Makefile and Kconfig and adds a new directory, would possibly be allowed. A strict rule to actively backport new and updated drivers would be wrong, because this would put excess load on Stable maintainers.

To facilitate the release of new technology, an established release cycle should be set. Somewhere between six and nine months should be appropriate for Stable releases. This would give third party developers several months to up-port and work on improving their code. Those ready to go mainline would have to focus on up-porting to Volatile and then make an announcement.

I believe supporting three Stable branches with bugfixes should be sufficient. While a loose policy of backporting drivers to Stable may be allowable, such a policy should be strictly forbidden beyond current Stable. Under this scheme, minimal maintainer overhead can achieve official support of each Stable release for 18 to 27 months.
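The arithmetic behind that support window is simple enough to state directly (a sketch of the proposal's own numbers, not an established policy):

```python
def support_window_months(cycle_months, maintained_branches):
    """A branch stays officially supported until maintained_branches
    newer Stable releases have shipped after it."""
    return cycle_months * maintained_branches

for cycle in (6, 9):
    print(f"{cycle}-month cycle, 3 maintained branches -> "
          f"{support_window_months(cycle, 3)} months of support")
```

So a six-month cadence yields 18 months of support per release, and a nine-month cadence yields 27.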

It has been suggested that particular projects select a release to follow, and hold there for a while before up-porting. While this works for development, it doesn't dictate exactly which release each project should select. Some distributions may wish to combine external patches, especially security and driver patches, for their kernels. Without a unified authority on where developers should focus, distributions may at times have to decide which set of features is most valuable and let the less valuable ones fall out of their support.

This is not acceptable. Distributions and their users should not have to choose between combinable features simply because the developers chose different versions to spin on. This could even isolate "traditional" features such as Supermount from much more useful features, forcing one of two bad decisions. For this reason, I believe the selection should be officially handled, and I believe the method described here would be a good basis for a new development model.

Sunday, January 02, 2005

Policy on Web content

You may not think about it, but content filtering is a security issue! Filtering out pornography is a major enforcement aid for many businesses and public institutions which do not allow access to such material from their networks. It is therefore interesting, from a security standpoint, that tools exist to facilitate the control and filtering of pornographic, violent, and racist content, and that such control can be extended over a variety of languages.

Ubuntu Linux has had suggestions to implement such things in its IdeaPool. I took that basis, together with Dan's Guardian, and extended the idea into a concrete set of information for deploying such a system easily and effectively. Although these tools should not be enabled by default, they should be available for parents, schools, libraries, businesses, and government institutions which may want, or even be legally required, to prevent access to such content.

I have also made a post about this on the Ubuntu Linux development list. The post is accessible via Gmane, and thus you can get in on the conversation by clicking Action->Followup and posting. If you are using or considering using Ubuntu, this would be a good way to influence its development.

I have said that this is a policy enforcement tool. I do not advocate blind censorship, nor do I believe that we should enable these tools by default. However, I believe that any serious Linux distribution should present a set of important features to the user. To any user, a Web browser, e-mail client, music player, and friendly desktop environment are important. In many situations, content filtering would also be an important feature to be available; however, it would also be a bad feature to enable by default, as many users will not want it.

Parents with young children have a responsibility to actually raise their children. Dan's Guardian can be configured to block access to sites determined to be inappropriate based on a given set of rules: parents could decide to block "pornography," "violence," and "racism," or any combination thereof. They could also decide to monitor their children themselves and have Dan's Guardian simply log access and allow it, or log it and deny it. Most parents wish to prevent their children from accessing certain content, so the availability of a content filter would be a great deciding factor.

Businesses and public institutions also have a legal responsibility to filter content. In many situations a business, a school, or a library can be fined or even sued if certain content is accessible through its negligence. Such institutions could look to a content filter as a solid, free, and flexible best effort both to enforce policy and to protect them from legal trouble.

In any case, Dan's Guardian can be combined with Clam Anti-Virus to filter out viruses, worms, trojans, and other malware. Such malware may come inside infected executables or embedded into corrupted content that exploits browser and support library bugs. This would be of interest to many users and institutions.

With some work, it's possible to deploy these systems in such a way that they hijack unencrypted HTTP connections, both originating from the local host and being routed from the internal network. This provides zero-configuration proxying which would allow a clean and solid policy enforcement solution regardless of the lack of administrative control over other network nodes, such as laptops with custom operating systems, or LiveCDs. This also allows the virus protection filtering to be highly effective, allowing unconfigured Windows machines using an Ubuntu gateway to be automatically protected from malicious Web sites.

Although I am against censorship, I believe that this would be a great policy enforcement tool to have available, but disabled by default. It would only be a problem if ISPs decided that their policy included mandatory filtering; in such cases, the ISPs could easily run a low-budget project to deploy a proprietary content filter which the user could not disable. Thus, there should be no moral or rights issues harmed by the inclusion of content filtering software.

Saturday, January 01, 2005

Hardened Ubuntu

Earlier I blogged about a more secure setting that would be suitable for widespread distribution. Such an environment can be created in Gentoo Linux relatively easily, if you're already a Gentoo user. Unfortunately, the learning curve for Gentoo is out of the reach of many people, and Gentoo is not always feasible anyway. Other distributions have to take up these enhancements in order to bring them to the average Linux user.

The Hardened Debian project is working towards these goals on Debian. They have more than just PaX and SSP in their goals, and show great promise. Rather than make a Debian-based distribution, they chose to modify the Debian tools and present the results to Debian itself, in the hopes that they would accept and use these modifications. This would bring a more secure environment to a large chunk of the Open Source community.

Ubuntu Linux ties in closely with Debian, not only in development tools but also in developer base; many Ubuntu developers are also Debian developers. The two distributions are tied closely enough that successful deployment in Debian will most likely be inherited by Ubuntu, and that successful deployment in Ubuntu will most likely cause enough of a stir for the efforts to move upstream into Debian.

The Ubuntu developers have been promisingly open to security advancements. The focus of the Hardened Debian project now includes Ubuntu Linux; deployment will likely first occur there. This would be good; Ubuntu is on a 6 month release cycle, which means a working release should be out faster than it would be with Debian. This will aid in demonstrating to Debian the advantages of the Hardened Debian efforts, and may even influence the (Sarge+1) release.

So, keep your eyes to Ubuntu. There's been no official announcements, but I have a feeling that the efforts of The PaX Team, Brad/Spender of GrSecurity, Etoh and Yoda, The Adamantix Team, The Hardened Gentoo Team, and The Hardened Debian Team will finally emerge to the average user there. It's either them or Debian.

Spinning a secure setting

I've been a Hardened Gentoo user for a while. I don't use the full set with SELinux/GrSecurity, Prelude, and whatever else they like to throw at people; but instead use a few basic things like a security hardened gcc that produces PIE binaries with stack smash protection (paper).

It may come as a surprise to you, but these weren't terribly painful for me to get onto my system. I won't say the Hardened team didn't do their fair share of work: they did plenty, mapping out which packages break and why, fixing the obscure bugs found through that breakage, and in general making this stuff work in the first place. Once it's known how to do it, however, it's fairly simple to keep up.

It may also surprise you that I find these suitable for widespread use on "user-friendly" distributions. These particular technologies also don't generate any extra administration duties once in place. No extra passwords are needed, no added steps in installing programs need to be taken. If a distribution supplies these things, then the user doesn't even have to think about them.

After using some transparent security features, I became quite attached to PaX (Wikipedia) and SSP. I even produced an article about them. This prompted no action, but was still fun to do.

After a while, I took a look at Ubuntu Linux and read through their Security Notices to produce a simple analysis of the potential impact of PaX and SSP. In the end it seems like 40-60% of notices contain potential intrusions which can be reduced to DoS attacks, which although annoying do not open the path for local attacks or worm spreading.

Based on the above analysis, I also found facilities that help programmers easily close off another 20% of local attacks. These relate to the creation of temporary files and directories, which makes the potential bugs easily recognizable in source code audits. The supplied facilities handle, in at most two lines of code, what normally takes a handful of lines, and so are easier for programmers to use than the other, less secure methods.
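The post doesn't name the facilities, but mkstemp(3) and friends are the usual suspects; in Python the same pattern looks like this (a generic sketch, not taken from the analysis):

```python
import os
import tempfile

# Insecure pattern: a predictable name in a world-writable directory leaves
# a race window between choosing the name and opening the file, inviting
# symlink attacks.
unsafe_path = "/tmp/myapp.12345"  # guessable -- do not do this

# Secure pattern: mkstemp() picks an unpredictable name and atomically
# creates the file with O_CREAT | O_EXCL in a single call.
fd, path = tempfile.mkstemp(prefix="myapp.")
try:
    os.write(fd, b"scratch data\n")
finally:
    os.close(fd)
    os.unlink(path)

print("scratch file was created safely at", path)
```

Because the name choice and the exclusive create happen in one call, there is no window for an attacker to slip a symlink in; that is what makes the bug class easy to spot (and fix) in an audit.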

Any distribution could easily deploy these things in a sane manner. It would be work, but not difficult work, although perhaps tedious to start up. Maintaining the changes would be very minimal effort. I believe this is the direction Linux distributions will follow, the direction they should follow.

Blogging on Cyberterror

The Blog on Cyberterror is now up. This blog is made by the same person who runs the War on Cyberterror Web site, an informational site which attempts to pool together information related to security and security efforts.

The purpose of the Blog on Cyberterror is to give me a place to vent and talk about things going on. This is a very informal setting; I may get into observations that I have little grasp on about efforts not very broadly displayed, and may make errors about the magnitude of such efforts. Still, I try to be right all the time. :)

I may blog multiple times in one session. This is because I do not believe in batching together multiple topics in a single blog; if five things go on, then five blog posts will be made. This keeps everything nice and separated for those of you wishing to share news with your friends.

Central Focus

This blog is centered heavily around Linux and Open Source Software (OSS). I am a Linux user, and I believe that from a security standpoint OSS is the only player in its class. The reason for this is that closed source software cannot be combed for security bugs by the general population, or augmented with new security features.

While blackhats may encounter exploits by disassembling, tracing, or pure experimentation, whitehats tend to be more focused on source code auditing. Finding the bugs in the source means having the code available to write up a quick fix and submit it. This is feasible with OSS, requiring no NDA contracts or legal negotiations. There are several projects which do exactly this, such as the Debian Security Audit Project.

With no statistical backup, my intuition tells me that hiding the source code will deter whitehats more than blackhats from discovering security flaws, which means that flaws are more likely to be exploited before they are found. Closed source software may have more or fewer security holes than OSS, but I believe those flaws will have a longer lifetime. Because of this, the window of attack is widened; attackers can spend more time feeling out what they believe to be exploits and learning how exactly to trigger and use them without worrying that somebody else will find and fix the bugs.

Sometimes newer security features can neutralize existing bugs. If these features interfere with a piece of OSS, then those supporting the new features can correct the problem to aid the deployment of new security technology. Features such as stack guarding and proper management of memory protections can be deployed in OSS freely; recompilation and source code tweaks are both feasible.

Closed software is updated at the leisure of the developers, whenever they want to make a new release, and only if they actually decide it's worth their time to support such software. Sometimes the developers do not care to expend their resources on correcting what they believe is an obscure bug that probably can't be exploited in any useful way, at least not until somebody exploits it. It's even more likely that adapting the code to fix an incompatibility with a new system will be a low priority task, especially if that system is not mandatory and can be disabled for the program.

Aside from this, the same principles apply to the overall quality of the software. If we're going to work on enhanced security, we should do it with better software. OSS is easily accessible as a research base, but it's also capable of being very high quality. OSS is more capable of being both relatively bug-free and highly featureful.

Anyone can find and fix bugs in OSS. The code is available and all are welcome to modify it; those modifications normally flow upstream, especially bugfixes. As a notable example, the Linux kernel is frequently cited as having fewer bugs than comparable software. Any popular OSS will have lots of eyes scanning for bugs, meaning bugs have a higher chance of being fixed. Thus, as a general trend, OSS is likely to be genuinely more stable and more secure.

Anyone can add a feature to OSS without first evaluating it for marketability and lobbying for company resources. If the feature is uninteresting, it will be removed later; if it's interesting but poorly implemented, the implementation will be improved. Resources can be external and automatic; an interested developer may appear with a patch for a new feature after working on it independently. Thus, OSS is more likely to contain mid- to high-demand features and some nifty but low-demand features, rather than only high-demand features or features predicted to become high-demand.

Because of all these things, I believe security research is both best suited to and most beneficial in Open Source Software. It's a lot easier to research what you can actually examine freely; and it's better to apply enhancements first to the better environment and allow the worse one to atrophy or catch up.