OpenBSD 4.0: Pufferix's Adventures

You developed prebind, a secure implementation of prelinking that is compatible with address space randomization. How does it work, and how does it compare with prelinking?



Dale Rahn: Prebind was developed because someone at my work had pointed out how much time a QT application on an embedded device spent in the dynamic linker at startup. I have maintained the ELF dynamic linker for quite a while and had previously implemented a symbol cache to reduce this startup time, but realised that it still didn't compare to prelinking.

One of the significant security features in OpenBSD is address randomisation (aka ASLR). Prelinking as implemented in Linux removes the randomisation feature so it would not be compatible with OpenBSD's security goals.

Since I was also using a Zaurus quite a bit, I realized how long some large applications took to start up. With some research I found that the data stored during the caching of the symbol lookup in the existing code could be saved as a library and symbol index. As long as the library hasn't changed, that index information can be used to resolve a symbol without ever having to perform the full lookup.

Much like how prelinking works, there are "common" relocations to most libraries, and there are "fixup" relocations necessary for some binaries, e.g., binaries that override specific functions like malloc, or that link with a different library which causes overrides, like pthreads. However, those fixups tend to be rather limited in number.

I have been intending to write a formal paper and present it at some conference, but have not had the time to do that so far. This means that I don't have handy speed comparison numbers to show how much faster it is; however, the amount of time spent performing relocations dropped by a factor of at least 10.

To configure prebind it is necessary to be able to write to all of the binaries being set up, as well as all of the libraries which those binaries reference, so it is typically run as root.

The default mode is to replace the prebind data found on any programs or libraries touched during the processing; this makes it necessary to run it in a single invocation on all binaries in the system which are to be configured for prebind. Prebinding can also be performed in "merge mode", where existing prebind data on libraries does not get modified, so that if a single binary is added to the system it is not necessary to rerun prebind on the entire system; only the new files will be touched.

Some applications use LD_LIBRARY_PATH to locate their libraries, either because the user sets it or because the application sets it in a script before running the main binary, e.g., Firefox. If prebind cannot locate all of the necessary libraries, it will not prebind the binary. This can be worked around for Firefox by adding Firefox's extra directory to the library search path using ldconfig (it can then be removed for normal operation).
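
For example, assuming Firefox keeps its private libraries under /usr/local/mozilla-firefox (the path is illustrative and depends on how the package was built), the workaround could look like this sketch, with the actual prebind invocation left to ldconfig(8):

# Temporarily merge Firefox's private library directory into the ld.so
# hints so prebind can resolve every library the binary references.
# The directory is an example; adjust it to the real install location.
ldconfig -m /usr/local/mozilla-firefox

# ... run the prebind setup here, as described in ldconfig(8) ...

# Afterwards, rebuild the hints from the usual directories only.
ldconfig /usr/local/lib /usr/X11R6/lib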

Like prelinking, prebind does not improve dlopen speed; the symbol lookup cache still exists, but it is not possible to predict which symbols will be present in the loading application, so the prebind data cannot be precalculated.

The prebind code appears to be fairly robust at the current time; only a couple of issues remain that are of concern.

If a single library that a prebound binary touches is modified (prebind data stripped, library replaced) then the binary will disable the prebind optimisation and perform normal symbol loading. This would then require all binaries to have prebind run on them again after any library change.

Currently there is no provision to configure the prebind data on a subtree, e.g., a system tree that has not yet been installed. Hopefully this will be fixed in the near future. At that time the possibility of OpenBSD shipping with prebind enabled by default will exist.

The prebind functionality has been incorporated into ldconfig(8).

What's new in the ports framework and in pkg_* tools?

Marc Espie: In 3.9, pkg_add over scp had big issues: pkg_add was starting one separate scp per package, and due to some legacy issues with the way scp works, those processes tended not to die and would gobble up all available process slots.

So, instead, we used the same technique that rsync uses: pkg_add opens one single communication channel over SSH, and then uses it to transfer all the information it needs. No more rogue processes. Added bonus: it's currently the fastest way to update packages over the network, because there is one single connection instead of fleeting connections you need to tear down and restart. With a decent network, it's as if you were updating packages locally.
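
A minimal sketch of how that is used in practice; the host name and package directory below are placeholders, and the architecture path would match the machine being updated:

# Point PKG_PATH at a package directory reachable over scp; pkg_add
# then funnels every transfer through one SSH connection.
# Host and path are examples only.
PKG_PATH=scp://user@packages.example.org/usr/ports/packages/i386/all/
export PKG_PATH

# Update the installed packages.
pkg_add -u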

Nikolay Sturm: With OpenBSD 4.0 we will improve our support for stable packages. First, there's a policy change. Until now I only backported security fixes to stable; after some discussions I was convinced that it is desirable to backport more changes, so that our users see even fewer reasons for mixing stable and current packages.

Second, with 4.0 we will provide stable packages for amd64 as well. Fabio Cazzin of NS3 has kindly granted us access to a stable build machine. This is, however, an experiment and we'll have to see if it works out.

GNU RCS has been replaced with OpenRCS. Who worked on it? What advantages does it provide over GNU RCS?

Ray Lai: Joris Vink, Niall O'Higgins, Xavier Santolaria, and I worked on OpenRCS. It is compatible with GNU RCS, minus some missing functionality such as branch checkins. We hope to complete the missing functionality soon.

The main advantage OpenRCS provides is security. Throughout its development we have kept security in mind, identifying insecure patterns and eliminating them. As a side effect, our code is clearer and simpler.

Aside from that, our goal has mainly been compatibility with GNU RCS. Once this has been achieved, we may enhance OpenRCS with features, though we are more likely to work on enhancing OpenCVS.

Why doesn't this release come with X.Org 7.0?

Matthieu Herrb: Because it's a pretty big change, and we were not ready in time to include the new modular X in 4.0. Building and testing the ports takes time, and that time is needed to find and fix the problems that can come up. We'll switch to the new modular X shortly, so that there's a full release cycle to find and fix the remaining issues.

Do you like the way the X.org developers split the system into smaller modules?

Matthieu Herrb: I'm not fond of it, but it's done now and we will work with that. The modular X has some advantages that should make development easier. In particular, you won't need to rebuild all of X to try a patch or to apply an erratum.

I noticed two interesting log entries: "Widen the aperture used for legacy vga space on macppc, needed for Mac Mini ATI graphics cards" and "Add sysctl_int_lower() API, consequence of which is that root can now lower the machdep.allowaperture variable without rebooting." Please tell us more.

Matthieu Herrb: Older OpenBSD/macppc releases did not limit the address space that was accessible through the aperture driver. It now enforces stricter limits, but this had to be done by trial and error, as the X drivers need to poke at various addresses to check whether there's an x86 BIOS available and other assorted things. This is a step forward in protecting the hardware from malicious code that could be injected into the X server.

In 3.9 and before, the allowaperture variable was completely read-only if securelevel > 0. It can now be decreased to restrict hardware access by X if the user decides that he doesn't need X after all.
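
For example (a sketch of the new behaviour), root can inspect and lower the variable on a running system; raising it again while securelevel > 0 still requires setting it in /etc/sysctl.conf and rebooting:

# Show the current aperture setting.
sysctl machdep.allowaperture

# Lower it to 0 to shut off X's access to the aperture once X is
# no longer needed; at securelevel > 0 it can only ever be lowered.
sysctl machdep.allowaperture=0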

The new release allows X.org to run without privileges when using the wsfb driver. How does that work?

Matthieu Herrb: This is for hardware running wsfb where X doesn't need hardware access (Zaurus and Sparc only). There were a few stupid things requiring root during X startup. Now the only thing that needs root on these platforms is opening the default log file (/var/log/Xorg.0.log), but if X doesn't have any privileges you can use the -logfile option to specify a file in your home dir, e.g.:

chmod 755 /usr/X11R6/bin/Xorg
startx -- -logfile ~/X.log

Post 4.0, this will be extended to other platforms able to run using the wsfb driver (alpha, macppc, sparc64, and soon i386 and amd64, using VESA BIOS calls).

What is the status of sparc64 systems?

Jason L. Wright: The GENERIC kernel is capable of running on Ultra 1, Ultra 2, and Ultra 3 class processors now. There are still problems with some of the Ultra 3 based systems, however. schizo(4), the host bridge, appears to be fairly buggy, as evidenced by the Linux/OpenSolaris workarounds. Unlike Linux and Solaris, the only documentation or errata we have for these chips is the Linux/OpenSolaris source code. This is NOT documentation, and it has slowed progress. We tried getting documentation from Sun; we might as well have been yelling at a wall, so we do the best we can with what we have.

The Ultra 3 is still running without its L1 data cache enabled. The L2 (aka E-cache) and the L1 instruction cache are enabled. Performance isn't optimal, but the U3s, even with the L1 data cache disabled, are still the fastest sparc64 machines supported.

Besides dealing with the schizo(4) bugs and the disabled D-cache, the next major hurdle is support for the Cassini Ethernet controller found on many Ultra 3 systems. Work for this will begin when I finish my move to Idaho (most of my stuff is offline in anticipation of the move).

As far as Niagara goes, I'm not that worried about it. We need SMP support on sparc64 before Niagara is a concern. I would, however, love the opportunity to play with the Fujitsu processors... mmm, full register window set...

There have been improvements and new features supported in the drivers that manage CPU speed control, Intel SpeedStep, and AMD PowerNow in particular. What can we do now? And did you work on this using publicly accessible documentation?

Gordon Willem Klok: Well, one thing that we can do now is scale the processor frequency and voltage on AMD's eighth-generation processors such as the Athlon 64, Turion, and Opteron. This was added for 3.9 on the i386 architecture, and 4.0 extends this support to the amd64 architecture. There has also been a lot of progress made with Enhanced SpeedStep; Dimitry Andric has made some big strides in supporting newer Intel processors, where Intel no longer publishes the model-specific p-state data. Support for processor scaling on the Zaurus was added, and there were a lot of reliability fixes for the Pentium 4 clock control driver and the PowerNow variant found on the seventh-generation Athlon/Duron.
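
On OpenBSD the user-visible knob for all of these drivers is the hw.setperf sysctl, which apmd(8) can also adjust automatically. A minimal sketch:

# Read the current clock speed and performance level.
sysctl hw.cpuspeed
sysctl hw.setperf

# Ask the scaling driver (powernow, est, p4tcc, ...) for full speed,
# or for the lowest speed; hw.setperf takes a percentage from 0 to 100.
sysctl hw.setperf=100
sysctl hw.setperf=0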

For the most part AMD and Intel publicly document the mechanisms for frequency and voltage scaling. From the driver writing perspective these mechanisms are fairly straightforward; you plug values that correspond to the desired combination of frequency and voltage into a register specific to the model of the CPU. The rub is that increasingly ACPI is the "proper" mechanism for retrieving what values correspond to each state and what states each CPU supports.

While ACPI support is being worked on for OpenBSD, it isn't ready, so we rely on whatever legacy method is available. Unfortunately, in the case of AMD this method is deprecated; many BIOS vendors don't include the requisite table, so even though every modern amd64 processor supports PowerNow, in many cases we simply can't use it. In the case of Intel, for a given CPU model the supported states and the magic numbers that we need to plug into the register used to be gathered from a data sheet and written into the driver. These values are no longer spelled out explicitly in the data sheet and only seem to be available to BIOS writers under NDA.

How does OpenBSD 4.0 interact with VMWare, Xen, and other virtualizers? What about the VT features in recent AMD/Intel CPUs?

Anil Madhavapeddy: OpenBSD 4.0 will work normally under Xen 3.0 and VMWare using hardware virtualisation, i.e., the VT/SVM extensions of Intel and AMD CPUs. It does not have any special guest tools support, however.

Para-virtualisation support for running OpenBSD as an "enlightened" guest OS under Xen 3.0 is currently under development (it was sponsored as a Google SoC project, and development continues). It boots multi-user using a ramdisk, and the sources are available online.

What is your opinion on these virtualization technologies from a security standpoint? I have already seen some talks about rootkits that bypass the OS and put themselves between the CPU and the OS. Is there anything that OpenBSD plans to do to fight it?

Otto Moerbeek: A large problem with virtualization is the added complexity: more code and more configuration. This makes your setup harder to audit and maintain. I have already heard of systems where admins were reluctant to apply patches to the host OS, since it ran so many guest systems and they were afraid to take the host OS down. Of course you cannot expect any virtualization layer to protect you from security bugs in the host OS.

It also adds an attack vector: with real hardware, bad guys can try to use bugs in hardware, kernel, OS provided userland, and applications to gain access. With virtualization, they get a whole new layer to attack.

Virtualization can be useful for test setups and can provide isolation between applications, but it is certainly not a magic way to increase overall security.

As OS developers we cannot do a lot to provide extra protection. From our point of view, virtualization just provides an alternative execution environment for our kernel. The kernel has to rely on certain mechanisms, like page protection and the distinction between supervisor and user mode. If the kernel cannot trust the execution environment, all is lost.

Reading the changelog, I found this note: "A large amount of memory leak plugging in various system utilities inspired by Coverity reports, as well as ruling out of hypothetical NULL dereferences," and I saw a lot of fixed memory leaks. Is this something common, or did you run a new checker that helped you spot them?

Otto Moerbeek: This work has been done using Coverity reports on other platforms. We share quite a bit of code with the other BSDs, so Coverity reports from those might apply to us as well. Based on the reports we saw, we hand-checked other parts of the source tree as well, to find similar patterns. That resulted in some more fixes. Coverity does not publish specific reports for OpenBSD (yet).

When auditing code, we use various simple tools like grep(1) to hunt for various bug patterns. We also use lint(1), which has been much improved lately, mostly by Chad Loder, Theo de Raadt, and myself. But the most important tools are eyes and brains.
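
A hypothetical example of such a pattern hunt: grep the tree for one classic source of buffer overflows and then inspect every hit by hand (the directories shown are just examples):

# Look for unbounded string copies; each match still needs a human reader.
cd /usr/src
grep -rn 'strcpy(' bin/ sbin/ usr.bin/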

Federico Biancuzzi is a freelance interviewer. His interviews have appeared in publications such as ONLamp.com, LinuxDevCenter.com, SecurityFocus.com, NewsForge.com, Linux.com, TheRegister.co.uk, ArsTechnica.com, the Polish print magazine BSD Magazine, and the Italian print magazine Linux&C.

