PaX has been stuck at Linux 2.6.7 for a while now. The author has been fairly active on the 2.4 branch, but 2.6 has been too volatile. Between 2.6.7 and 2.6.8, major VM changes went in that changed how PaX had to be written, and that has delayed a new PaX release for 2.6 ever since, although the 2.4 branch still gets regular releases.
The PaX Team needs to do more work than just up-porting PaX through various kernel releases. PaX is still lacking in a number of features that fit in with its design goals, and the author can't magically make these features appear. These things take time and effort, as parts of the kernel have to undergo invasive changes to allow for such things as kernelspace NX emulation to protect the kernel itself on x86 architectures.
Most up-ports of PaX are simple: the patch applies fine aside from some fuzz and an occasional failed hunk, which can be merged manually in a few minutes. Because of this, the PaX Team continues to up-port PaX to newer versions of the stable branch of the Linux kernel. When this does not hold true, however, there are more important matters at hand than a several-week coding and testing cycle purely to move PaX to the next point release, which will itself probably last only a few weeks to a few months before the next one arrives.
Because they're so nice, the PaX Team has finally started to catch up to the 2.6 stable branch. A few weeks ago, 2.6.9 and 2.6.10 patches started to trickle into the hands of a few people who regularly communicate with the PaX Team. I've had good success with both sets, although I found an outstanding bug in ET_EXEC base randomization on x86-64 (Athlon64). My base is ET_DYN though, so I simply disabled ET_EXEC randomization. I haven't tried with 2.6.10, although I've run up to -test9 of the PaX patches with that randomization disabled and had no problems.
There should be 2.6.9 and 2.6.10 patches for PaX soon. No official statements yet, but they run fine on my end. Spender of GrSecurity has been following the PaX Team's work and should have a GrSecurity patch out shortly after the new PaX is released. Tests for that are also available, though I forget where and haven't been following Gr. This will open the way for a hardened-dev-sources-2.6.10 on Gentoo; currently the 2.6.7 kernel that the Hardened team is maintaining consists of a handful of security enhancement patches and a huge pile of security bugfix patches.
I believe this is a good time to discuss why exactly the Linux kernel development process is both brilliant and flawed, and how to fix it. I have not deeply examined the current development policy, but I believe I understand the basic concepts, and that if they were better applied, a greater development scheme could be pursued.
The reason I believe we need a new development scheme is that, as demonstrated here, new development must follow mainline Linux. If heavy testing and community acceptance weren't needed for an honestly good shot at mainline integration, this would be less of a problem. Unfortunately, for various reasons, many projects need to stay up to date with mainline.
A good example of having to stay in time with mainline has already been shown here. The PaX Team is not ready to go to mainline for their own reasons, possibly due to the set of unimplemented features planned for the future, such as better kernel protection. PaX is, however, ready in the eyes of many for production deployment, and is a part of Adamantix, Hardened Gentoo, and Hardened Debian. Thus it becomes important for these projects that PaX stay up to date with mainline. The symbiotic relationship here is that PaX gets a great deal of testing, while these distributors create an environment of a higher grade of security.
The same trends may be followed with experimental schedulers, realtime Linux efforts, new drivers, rewrites of other core Linux kernel components, and the like. These efforts may be ready for mainline, but may be mutually exclusive, colliding with each other even if they pursue completely different goals. PaX, for example, collides with anything making major modifications to memory management, such as enhancements to the RMAP scheme or API changes. Collisions between ready and unready technologies may hold back the unready ones by forcing them to rework internals to match the new codebase, creating new bugs without fixing old ones.
To counteract this effect, I decided to examine what must be guaranteed and what must be considered. Taking this into account, I believe that the current development scheme can be adjusted yet remain largely intact while still solving the problems it poses.
We must guarantee that the codebase remains fairly static for long periods of time, yet still gets the bugfixes it needs, both general and security related. A codebase without a unified bugfix base means that security holes and instabilities persist for long periods; this is unacceptable. Most bugfixes are unintrusive, and so satisfying this can be done directly without causing much of a problem.
We must also guarantee that new drivers, new systems, and new advancements reach public stable release in a timely manner. A timely release cycle can satisfy this reasonably. Commercial operating systems often have release cycles of 2-5 years, with bugfixes in between. A shorter release cycle could bring advancements in a reasonable period without stressing third party developers excessively.
We must consider that older releases must be maintained. Somebody has to maintain bugfixes in older releases, and a guaranteed, reasonable duration of maintenance must be set. The number of releases to maintain is a hard limit; the length of the release cycle must be adjusted so that any given release is supported for an acceptable timeframe.
Taking these factors into consideration, I have sketched out a skeletal development model. The model is aimed at producing a stable, unchanging codebase for third party developers; producing a timely release cycle to bring new developments to the community quickly; incurring minimal load on maintainers for the upkeep of older releases; and producing long-lived releases for situations where the kernel is not frequently slated for a major upgrade.
The first major change is a strict separation of Volatile and Stable branches. The current 'stable' branch, 2.6, undergoes massive changes whenever working, relatively bug-free advancements appear. This sort of behavior should go into odd-numbered releases. I believe this is much better than a model in which a "development cycle" lumps heaps of patches together and then picks out the bugs. The result is that the bleeding-edge "Volatile" branch becomes a realistically usable development branch, and can be released as "Stable" at any point.
With this separation, the Stable branch would be virtually untouched. Only bugfixes and possibly very unintrusive drivers would be acceptable for Stable. Drivers like Reiser4, which modified parts of the core filesystem interfaces in the kernel, would not be acceptable; whereas drivers such as SquashFS, which only modifies a Makefile and a Kconfig file and adds a new directory, might be allowed. A strict rule to actively backport new and updated drivers would be wrong, because it would put excess load on Stable maintainers.
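To make the "unintrusive" distinction concrete: a SquashFS-style driver touches only two existing files, each with a one-entry hook, and keeps all of its code in its own directory. A rough sketch of what such an integration looks like (the option text and directory name here are illustrative, not the actual SquashFS patch):

```
# fs/Kconfig -- one added entry
config SQUASHFS
	tristate "SquashFS - compressed read-only filesystem support"
	help
	  Compressed, read-only filesystem. All of the implementation
	  lives in its own directory, fs/squashfs/.

# fs/Makefile -- one added line
obj-$(CONFIG_SQUASHFS) += squashfs/
```

A Stable maintainer can merge or drop a change of this shape with essentially no risk to the rest of the tree, which is exactly why it could qualify where a core-interface change like Reiser4 could not.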
To facilitate the release of new technology, an established release cycle should be set. Somewhere between six and nine months should be appropriate for Stable releases. This would give third party developers several months to up-port and work on improving their code. Those ready to go mainline would have to focus on up-porting to Volatile and then make an announcement.
I believe supporting three Stable branches with bugfixes should be sufficient. While a loose policy of backporting drivers to the current Stable may be allowable, such backports should be strictly forbidden for the older maintained releases. Under this scheme, minimal maintainer overhead can achieve official support of each Stable release for 18 to 27 months.
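The 18-to-27-month figure follows directly from the two parameters of the model: a release stays supported until three newer Stable releases have shipped, so its support window is three times the release cycle. A minimal sketch of that arithmetic (the parameter values are the ones proposed above, not any official kernel policy):

```python
# Support-window arithmetic for the proposed model.
MAINTAINED_RELEASES = 3  # concurrent Stable branches under maintenance


def support_months(release_cycle_months: int) -> int:
    """A release is maintained until MAINTAINED_RELEASES newer
    Stable releases have shipped after it."""
    return MAINTAINED_RELEASES * release_cycle_months


# A 6-month cycle yields 18 months of support; a 9-month cycle yields 27.
print(support_months(6))  # 18
print(support_months(9))  # 27
```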
It has been suggested that particular projects select a release to follow, and hold there for a while before up-porting. While this works for development, it doesn't dictate exactly which release each project should select. Some distributions may wish to combine external patches, especially security and driver patches, for their kernels. Without a unified authority on where developers should focus, distributions may at times have to decide which set of features is most valuable and allow less valuable ones to fall out of their support.
This is not acceptable. Distributions and their users should not have to choose between combinable features simply because the developers chose different versions to settle on. This could even isolate "traditional" features such as Supermount from much more useful features, forcing one of two bad decisions. For this reason, I believe the selection should be officially handled, and I believe that the method I have described here would be a good basis for a new development model.