The next stable update of the Linux kernel will bring advances in file system event monitoring, the Xtensa architecture, and a set of system calls that allows users to load another kernel from the currently executing Linux kernel.
While the 2.6.13 release candidates (-rc) are currently being tested, the stable version is expected to be released in the next few weeks, kernel developers told eWEEK.
In fact, Linus Torvalds, the “father” of Linux, this week told kernel developers on the Linux Kernel Mailing List that he “really wanted to release a 2.6.13, but there’s been enough changes while we’ve been waiting for other issues to resolve that I think it’s best to do a -rc7 first.”
Most of the -rc7 changes were “pretty trivial, either one-liners or affecting some particular specific driver or unusual configuration,” he said.
The 2.6.13 kernel will bring some significant changes, including the addition of Inotify, a file system event-monitoring mechanism designed to serve as an effective replacement for Dnotify, which was the de facto file-monitoring mechanism supported in older Linux kernels.
Inotify is a fine-grained, asynchronous mechanism suited to a variety of file-monitoring needs, including security and performance.
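Inotify is exposed through a small family of system calls: a descriptor is obtained from inotify_init, watches are attached with inotify_add_watch, and events are then read from that one descriptor. The sketch below, a minimal illustration rather than production code, drives that interface through Python's ctypes (glibc wrappers did not exist everywhere at the time); the IN_CREATE constant and the event-record layout follow the kernel ABI documented in inotify(7), and the directory and file names are purely illustrative.

```python
# Minimal inotify sketch via ctypes (Linux only). The IN_CREATE constant
# and the event layout follow the kernel's inotify ABI; names are illustrative.
import ctypes
import os
import struct
import tempfile

IN_CREATE = 0x00000100  # report files created inside the watched directory

libc = ctypes.CDLL("libc.so.6", use_errno=True)

watched = tempfile.mkdtemp()
fd = libc.inotify_init()  # one descriptor delivers all events for this instance
wd = libc.inotify_add_watch(fd, watched.encode(), IN_CREATE)

# Generate an event ourselves so the read below returns immediately.
open(os.path.join(watched, "hello.txt"), "w").close()

# Each event record: int32 wd, uint32 mask, uint32 cookie, uint32 len,
# followed by `len` bytes of NUL-padded file name.
buf = os.read(fd, 4096)
_, mask, _, name_len = struct.unpack_from("iIII", buf, 0)
name = buf[16:16 + name_len].rstrip(b"\0").decode()

created = bool(mask & IN_CREATE)

# Tidy up the temporary watch directory.
os.close(fd)
os.remove(os.path.join(watched, "hello.txt"))
os.rmdir(watched)
```

Unlike dnotify, which required holding an open descriptor for every watched directory and delivered notifications via signals, a single inotify descriptor can carry watches on many files and directories and reports the affected file's name with each event.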
Also included will be support for the Xtensa architecture, a configurable, extensible and synthesizable processor core and the first microprocessor architecture designed specifically to address embedded SOC (System-On-Chip) applications.
Also included will be Kexec, a set of system calls that allows the user to load another kernel from the currently executing Linux kernel, and Kdump, a kexec-based crash dumping mechanism for Linux, said Greg Kroah-Hartman, a Linux kernel developer at Novell Inc. in Portland, Ore.
Also in the cards is an implementation of “execute in place” for a specific S/390 usage, kernel maintainer Andrew Morton told eWEEK.
The devfs (device file system) will be disabled in this release, which also contains an enhancement to the CFQ (Completely Fair Queuing) disk I/O scheduler that permits separate processes to have different I/O priorities, similar to nice levels for CPU prioritization, Morton said.
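The new per-process I/O priorities are reachable through the ioprio_set and ioprio_get system calls, the same pair the ionice utility wraps. The following is a hedged sketch using ctypes, and it assumes an x86-64 or arm64 Linux host: these calls have no glibc wrapper, so they must be invoked through syscall(2) with architecture-specific numbers.

```python
# Sketch of per-process I/O priorities via ioprio_set/ioprio_get (Linux).
# There is no glibc wrapper, so we go through syscall(2); the numbers below
# cover x86-64 and arm64 only -- an assumption of this example.
import ctypes
import os
import platform

SYSCALL_NRS = {"x86_64": (251, 252), "aarch64": (30, 31)}
NR_IOPRIO_SET, NR_IOPRIO_GET = SYSCALL_NRS[platform.machine()]

IOPRIO_WHO_PROCESS = 1   # target a single process (pid 0 = the caller)
IOPRIO_CLASS_BE = 2      # "best effort", the default scheduling class
IOPRIO_CLASS_SHIFT = 13  # the class lives in the top bits of the priority word

def ioprio(cls: int, data: int) -> int:
    """Pack a scheduling class and a within-class level into one value."""
    return (cls << IOPRIO_CLASS_SHIFT) | data

libc = ctypes.CDLL("libc.so.6", use_errno=True)

# Demote this process to best-effort level 6 (0 = highest, 7 = lowest);
# lowering your own priority requires no special privileges.
ret = libc.syscall(NR_IOPRIO_SET, IOPRIO_WHO_PROCESS, 0,
                   ioprio(IOPRIO_CLASS_BE, 6))
assert ret == 0, os.strerror(ctypes.get_errno())

current = libc.syscall(NR_IOPRIO_GET, IOPRIO_WHO_PROCESS, 0)
```

In everyday use the same effect comes from the command line, e.g. running a backup job under a low best-effort level with ionice so it yields the disk to interactive work, mirroring what nice does for CPU time.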
The Linux vendors are also likely to quickly adopt many of these new kernel features in their consumer and enterprise distributions. Donald Fischer, a product manager at Red Hat Inc., told eWEEK that the company includes “the latest and greatest kernel releases in our Fedora Core community project on an ongoing basis, and 2.6.13 will appear in a Fedora Core release, coming soon.”
Fischer said Red Hat would also include most new kernel features that are of use to its customers in future major releases of its enterprise products. “For example, features of 2.6.13 will be included in our future Red Hat Enterprise Linux Version 5 solution,” he said. Red Hat Enterprise Linux 4 was released earlier this year.
Fischer said the company would also selectively back-port certain functions from recent community kernel releases to its enterprise product.
“In general, this is limited to new hardware platform support features. The reason [for this] is that our enterprise products favor stability for more conservative enterprise customers, versus Fedora, which targets the latest and greatest technology for developers and enthusiasts,” Fischer said.
Coming in future stable Linux kernel releases will be the Xen virtualization technology; Fuse, which makes it possible to implement a fully functional file system in a user-space program; and version 2 of the OCFS (Oracle Cluster File System), which will be the first clustering component to be added to the public kernel, Novell’s Kroah-Hartman said.
But some technologies that have been expected to be in the kernel may not make it after all, Morton said, including the Reiser 4 local file system, about which he said, “I don’t know when Reiser4 will be mergeable—there are issues.”
Red Hat’s GFS (Global File System), commonly used in enterprise application clusters to provide a consistent file system image across the server nodes, allowing the nodes to simultaneously read and write a single shared file system, may never find its way into the kernel at all. “That’s all in flux,” Morton said.
Incorporating the Kernel
Earlier this month at LinuxWorld in San Francisco, Red Hat announced that it was including the GFS in Fedora Core 4. The GFS is a scalable, 64-bit cluster file system for Linux. It can support up to 256 x86, AMD64/EM64T or Itanium nodes. Red Hat bought the GFS as part of its acquisition of Sistina Systems in 2003.
After the acquisition, Red Hat worked to make the proprietary GFS available under the GPL. “GFS is highly valuable technology that now has the opportunity to improve even more rapidly in the open-source community,” said Paul Cormier, Red Hat’s executive vice president of engineering.
Some large enterprise Linux customers, like John Engates, chief technology officer for Rackspace Ltd., a managed-hosting provider in San Antonio, say they believe the Linux vendors are now able to incorporate these new features and functionality into their distributions more incrementally than was previously the case.
But there is still a lot more work to be done, especially with regard to bringing more of the pieces into the development process, said Dan Frye, vice president of IBM’s Linux Technology Center in Beaverton, Ore.
“All the device drivers from all the different manufacturers have to be open-sourced and moved into the upstream tree to make this fully robust and rock solid,” Frye said, adding that the community is working on this and is making progress, “but we need to get [the manufacturers] into the process, rather than standing alone.”
While the 2.6 kernel provides all the basics, there is still work to do, as the kernel is not good enough for every workload, Frye said. “We need better large page support for application binaries, and better scalability through the IPV. You know, it’s tweaking, it’s thousands of patches that are small stuff,” he said.
Frye said there was a huge amount of work going on, some of it very difficult, such as “memory add/delete … [and] specific types of large page support for certain types of workloads. There is also a lot of work still to be done on system throughput for the hardest workloads: large SMP [symmetric multiprocessing] running transaction processing-type things. There’s also lots of work to be done there, as you have to get fine-grained scalability in every subsystem.”
“There is also still a lot of work to do around serviceability to get first-failure data capture in a reliable way throughout the system. This is increasingly driven less by technology than it is by customer workloads,” he added.
Frye said the Linux kernel development team at the technology center was also working on core security, functionality and a higher level of certification, as well as on Samba, networking, protocols, performance analysis, hot-spot identification and serviceability. It is also working on enabling its hardware and is spending an increasing amount of time on storage, he said.
The team is also spending much more time working on GCC, as the rest of the tool chain is in good shape, Frye said.