Improving Linux Driver Installation
by Jono Bacon
With any operating system, hardware drivers play a key role in ensuring that hardware and software can talk to each other. Although each driver is filled with device-specific details, many drivers are very similar in design and code. For example, many USB digital cameras use the same method of storing pictures; although the devices differ, the drivers are similar.
The increasing popularity of the Linux kernel and its growing number of contributors have created a culture in which the kernel itself includes the driver source code. When compiling the kernel, you select the drivers you want to use. Linux can also compile most drivers as loadable modules, which the kernel loads only when the corresponding device is used. This is particularly handy for rarely used devices and removable USB peripherals. Although loading drivers on the fly is flexible, the user experience has required that users know how to manage modules, mount disks and devices, and interpret low-level device information. These requirements have been a barrier to Linux adoption for nontechnical users.
Over the past year, several developers from different projects have collaborated on an effort called Project Utopia. Its expressed aim is to make device handling in Linux as simple as possible for both users and developers. Among its results, the project has produced the Hardware Abstraction Layer (HAL), a specification and software tool that tracks the devices on a system. HAL's aim is to spare developers from implementing device-handling code by hand; instead, they can use a reliable layer that provides notifications about devices, such as when a user plugs in or unplugs a USB device.
As Project Utopia has matured, the idea of an easy-to-use method of dealing with devices has produced many potential use cases. In the eyes of the project's architects, a user should plug in a USB device and have it appear on the desktop, ready for use. When you plug in a camera, the photos should sync with your desktop photo application. A wireless card should log on to the network. Plugging in a printer should bring up the printer configuration tool automatically. These are some of Project Utopia's aims for how it will handle devices on the desktop of the future.
Drivers on Demand
Although the core goal of Project Utopia is to simplify device usage, one of the areas that is outside the domain of the project is making drivers easier to deal with. The main problem is that Project Utopia will need an appropriate driver for each device. If the running kernel does not have support for the device, the user will need to upgrade or possibly compile a kernel manually.
Andrew Leucke created a related project, Driver on Demand, to continue this vision. Its goal is that a user should be able to plug in a device and have the software check whether a driver module is present. If not, it will download the relevant driver from a secure Internet site and insert it into the kernel. The user will not have to run through the complex process of finding a driver or recompiling the kernel. With a well-designed, well-specified, and well-tested security model to prevent the installation of malicious drivers, this system could push Linux to a new level of ease of use.
Although the Linux kernel is an impressive and stable feat of distributed engineering, the incompatibility between different stable point releases of the kernel hampers the Driver on Demand concept. A driver compiled for 2.6.5 would probably not work on 2.6.10 if you simply loaded the precompiled binary module; you would need to recompile the driver for each kernel version.
This problem worsens when you consider furnishing a driver for all distributions. Not only do you have the official 2.6 tree to consider, but you also have the slight modifications that different distributions add to the kernels. If you want to distribute a precompiled binary driver, you'll need to provide a binary for each point version in the kernel for each distribution. This can amount to hundreds of modules for a single stable kernel release.
The reason a single binary driver will not work across a kernel series is the lack of internal API (application programming interface) and ABI (application binary interface) compatibility in the kernel. The API specifies the functions and facilities that the kernel exports to those writing kernel features or modules; code that uses the API relies on it staying the same. The compiled kernel also needs a standard binary interface so that modules linked against it find the functions and data structures, at the locations and offsets, they expect. This standard interface of a compiled program is the ABI, and it changes with almost every kernel release.
Linus Torvalds, the creator of the Linux kernel, has been quite clear on his position of ABI stability on the Linux Kernel Mailing List: "It's not going to happen. I am _totally_ uninterested in a stable ABI for kernel modules, and in fact I'm actively against even _trying_. I want people to be very much aware of the fact that kernel internals do change, and that this will continue." He continues, "I occasionally get a few complaints from vendors over my non-interest in even _trying_ to help binary modules. Tough. It's a two-way street: if you don't help me, I don't help you. Binary-only modules do not help Linux, quite the reverse. As such, we should have no incentives to help make them any more common than they already are."
Linus's position seems to be very much a black-and-white scenario--if you don't contribute to Linux, why should it support your device? Although this is a perfectly reasonable position, locking out binary-only modules also limits the reasonable use of binary builds of modules whose source code is available. While it is fair to assume that a large proportion of the Linux community opposes binary-only drivers, some users would prefer the convenience of binary drivers built from open source code. David Zeuthen, the maintainer of HAL, sympathizes with the problem. "I think it's a difficult issue," he says. "I for one actually appreciate the fact that the kernel team doesn't want to be locked in, because maintaining stable API and ABI is expensive, and it surely makes it more difficult to innovate and improve the kernel."
It is apparent that the kernel will never match the kind of ABI stability that is common in commercial operating systems that see kernel updates once every few years; the two different development models simply do not compare. The challenge that faces kernel developers is the question of driver overload versus driver convenience. On the one hand, it is convenient to have all Linux device drivers inside the kernel source tree. On the other hand, this produces an ever increasing kernel tree that needs to support every device for every supported platform.
Another problem is that many drivers have not yet made it into the kernel tree proper. Zeuthen cites this as a particular issue. "I feel that one of the biggest problems is that the bar to getting a driver in the kernel may be too high," he says. "I've got a number of devices for which there exist a driver, but I have to go and download the source and compile my own kernel module, which is bad."
Zeuthen has highlighted possibly the most critical problem with the current kernel framework for handling drivers. When Torvalds started work on the kernel back in 1991, including drivers in the kernel tree made inherent sense. With the huge interest in and adoption of Linux since those days, however, keeping every driver in the kernel could cause problems as the kernel scales to support hundreds more device drivers. Although the vetting process for drivers does ensure that the kernel is of high quality, many third-party drivers are available outside the tree--and the complexity of finding and installing them may discourage regular users from trying.
Zeuthen continues: "An interesting side effect of the in-kernel API-ABI instability is that authors of drivers outside the kernel have to make a release for every released kernel. Throw in the fact that some vendor kernels are horribly patched, and you'll have people using kernels from kernel.org to get the device working. Now those users have a problem that other parts of the OS don't work because they are not using the vendor kernel, which is bad. Another problem is that when I upgrade my operating system and get a new kernel from my vendor, I manually have to recompile all my drivers--which is also bad, because lazy users don't do that, and then they are more prone to run unpatched, vulnerable systems."
Zeuthen proposes a possible fix to this problem. "[Perhaps] one solution would be to convince Linus to lower the bar on driver inclusion in the kernel and mark those drivers as experimental, in progress or something," he says. "It still doesn't solve the issues of closed source drivers, though--users will still have to drop to a text console and rerun the NVIDIA installer, for instance."
As the maintainer of Driver on Demand, Leucke also shares some of these concerns. "[Even] though Linux is scalable itself, its configuration interface isn't," he says. "At the moment, it's easy to configure which drivers we want to compile in, but in five years' time, when there are ten different new buses and a massive foray of new devices, it will take too long to configure which devices you need--which means that sometime in the future, the Linux kernel really needs to offer a way of detecting which drivers it needs to compile in itself, and try to show the user as little as possible (like if they want preemption or support for a device currently not in their computer)."
Mapping the Future
The kernel is possibly the most critical component in a Linux system. Although the facilities within the kernel have changed and improved, there have been few fundamental architectural changes in the software. The challenges of dynamically loading drivers, a growing kernel source code base, using drivers from outside the official tree, and making the drivers as easy to use as possible keep moving the goalposts for the kernel developers.
One of the problems with low-level software such as the kernel is how Linux users perceive it. Should they rely on distributors to build binary modules for their distributions, or should regular users download and compile their own kernels? My view is that common practice is a little of both. Linux development often means creating frameworks that reach users in the prebuilt configurations found in distributions. A graphical environment is one example: if you install it from CVS, you need to know how to configure and tweak it to suit end users and give them access to its different facilities. The kernel is in many ways a hybrid technology, landing in the hands of both normal users and software architects.
Leucke suggests that until a better method of handling the increasing driver requirements appears, "probably the biggest current issue is that there is no mechanism in place to manage drivers. We currently have package managers, but what we really need now is a driver manager; one which records the external drivers' installation, their versions, where they came from, and includes a mechanism for them to be upgraded easily. It shouldn't be completely unexpected that in the future, drivers for devices will come on CDs, and at the very least, users will want to be able to just double-click the driver to install it, or they will want a way they can get a driver automatically installed without needing to do anything (which is what Driver on Demand is about). The truth is that Linux will never be able to include drivers for devices in its kernel as they are released, and Linux distribution installation CDs will never be 100 percent up to date--so such a system would compensate for that, and allow Linux distributions during installation to ask for a new driver on disk (for networking or whatever else) and install it with barely any intervention. A system like this would be particularly handy on a clustered system, which could automatically update its drivers if it had this available."
Heading into the Future
There is little doubt that Project Utopia aims to bring Linux forward into the desktop era, where hardware works as users expect it to. Although the challenges of a Driver on Demand framework are high, the core functionality and original aims of the Project Utopia framework mostly work today. The developers are now stabilizing the different components, intending to add them to the next release of GNOME (2.8) to embed the functionality in the desktop.
With the core of Project Utopia complete, application developers from any desktop or project can HALify their software and use the HAL specification when they need to deal with hardware. Already CUPS has begun this process. If major tools such as X servers follow suit, that could resolve many of the configuration problems that have plagued hardware devices.
One of the true benefits of the collaborative open source model is how well it can fix and improve issues and problems. I have no doubt that this will continue to apply for the challenges of using Linux as a desktop operating system. It will be interesting to see whether these challenges can push the kernel into new areas and continue the evolution of the Linux desktop.
Jono Bacon is an award-winning community manager, author, and consultant who has written four books and consulted for a range of technology companies. Bacon's weblog (http://www.jonobacon.org/) is one of the most widely read open source weblogs.