Looks like I got slashdotted on that one. I wasn't particularly trying to bash Java so much as the concept of one language being "secure" over another. Java just makes a good example of a language people believe will solve all their problems for them.
With the responses I've seen both here and on slashdot, I feel I should make a follow-up post. I'll point out a few interesting things that have arisen, and let the wolves at it again.
The major thing I'd like to point out is that, especially on Slashdot, most of the replies argued from the standpoint of vanilla C on a vanilla system versus Java. This may partly be my fault for using Java as a major target in the argument; but what I actually discussed was vanilla C programs built with a hardened compiler on a hardened system. That was my argument, and I'd appreciate it if people would take the time to comprehend the context before replying half-assed to some similar but distinctly different argument. Once again, the blog was about C compiled with a hardened compiler and run in a hardened operating system environment.
Another interesting point was that certain attacks are still possible in Java. These include SQL injection and cross-site scripting, neither of which is inherently a C problem; although C programs could certainly use SQL libraries or script language parsers that would be vulnerable. Scripting languages also come to mind: immune to buffer overflows, but wildly vulnerable to XSS; efforts like Hardened PHP work to reduce the risks here.
One major argument which kept resurfacing was that C is insecure because of pointer math and explicit memory management. I'd like to restate here that the environments discussed minimize the possible damage of bad pointer use: you can't modify existing code, as one comment alluded to, and you can't execute data such as the stack or heap. Address space layout randomization ensures that attackers who can control pointers at least can't figure out where to point them, because everything moves around every time the program is run.
On the same topic, what's so hard about manually managing memory? You can create functions or, if you fancy C++ or Objective-C, classes to manage your linked lists and memory objects. Calling these abstracts the memory management away from most of your code. In a language that abstracts direct memory management from you, you'd either do the same thing minus the malloc() and free() calls, or just haphazardly write the same 2-3 lines of allocation code everywhere, which is probably a bad thing for maintainability in the more significant cases anyway.
In either case, I've never seen any reason why it wouldn't be clear when to free() memory, unless you somehow made it behave like a relational database, with lots of concurrent areas of code accessing it at the same time in unrelated ways. Typically, though, you'd have some reason to remove an object; at that point you destroy all associated resources, such as the threads that drive the object, and free it. I suppose confusion is possible if you searched an object out and are working on it in a separate thread, but I can't think of a practical application for that.
At any rate, the blog wasn't about programming semantics like explicit memory management; it was about security. I just felt like going off on a tangent to ponder why people have trouble with memory management. Perhaps a lightweight reference counting library would help; I still have an aversion to garbage collectors because I worry that they may wander the heap back and forth (this is how Boehm was described to me) and thus, in times of high memory usage, could cause swap thrashing if used on a large scale.
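For what it's worth, the lightweight reference counting I have in mind fits in a few dozen lines. This is a sketch under my own assumptions, not any existing library; the names (rc_alloc, rc_retain, rc_release) are illustrative, and the convention is that every object embeds the header as its first member.

```c
/* A minimal reference-counting scheme: an object is freed exactly
 * when its last reference is released. No heap-wandering collector,
 * so no swap-thrashing scans under memory pressure. */
#include <stdlib.h>

struct refcounted {
    int refs;
    void (*destroy)(void *);  /* optional: release owned resources */
};

/* Allocate an object whose first member is struct refcounted. */
void *rc_alloc(size_t size, void (*destroy)(void *)) {
    struct refcounted *obj = malloc(size);
    if (obj != NULL) {
        obj->refs = 1;
        obj->destroy = destroy;
    }
    return obj;
}

void rc_retain(void *p) {
    ((struct refcounted *)p)->refs++;
}

void rc_release(void *p) {
    struct refcounted *obj = p;
    if (--obj->refs == 0) {
        if (obj->destroy != NULL)
            obj->destroy(obj);    /* tear down owned resources */
        free(obj);
    }
}
```

The cost is a counter bump on retain/release and deterministic teardown, rather than periodic scans over the whole heap; the classic trade-off is that cycles leak, which is where real collectors earn their keep.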