ONLamp.com

OpenBSD 3.7: The Wizard of OS

by Federico Biancuzzi
05/19/2005

Today the OpenBSD project announced the new 3.7 release. This is the first release to support newer wireless chipsets, especially for 802.11g, thanks to a big activism campaign led by project leader Theo de Raadt. It's now possible to create a portable access point with a tiny PDA using the Zaurus port, too. As usual, there are a lot of other big and small changes, such as the import of X.Org, the jump to gcc 3, and a feature to update your installed packages automagically. Discover the details behind the scenes in this interview that Federico Biancuzzi had with several OpenBSD developers.

Many people will probably enjoy this release especially for the rich set of newly supported Wi-Fi chipsets. It took years, but now you support them in total compliance with your goals and license. Which vendors would you define as open-source-friendly and which as unsupportive?

Damien Bergamini: Ralink Tech. and Realtek have been very friendly by providing us documentation for their wireless chipsets. Ralink Tech. gave us the spec for their RT2500USB 802.11 b/g chipset and about one week later, we had a working driver (ural) for it, just in time before 3.7 code freeze. The work on this driver now continues in OpenBSD-current and I'm adding advanced functionalities like HostAP mode support (the capability for a host to act as an access point). OpenBSD was the very first operating system to provide an open source driver for the RT2500USB chipset, and AFAIK it is still the only one to support this chipset today. I know some Linux users that feel jealous about this ;)

It is important to show vendors how they can benefit at no cost by cooperating with us. We are very respectful toward these vendors, and if they ask us not to spread the documentation around, we respect that. But we do not sign NDAs.

OTOH, Intel has been quite uncooperative, refusing to give us the right to redistribute the binary firmware images for their PRO/Wireless 2100/2200BG/2915ABG (Centrino) adapters. As a consequence, although drivers for these adapters (ipw and iwi) are included in OpenBSD 3.7, they do not work out of the box. Users must download the firmware images by hand from the Intel web site, and they are invited to complain to Intel about this before doing so.

Is there anything that our readers could do to help you succeed?

Damien Bergamini: Buy hardware made by open-source-friendly vendors (there is almost always a choice) and continue to contact the closed vendors. It has proven to be effective in the past.

And to the Linux "vendors" that heedlessly ship non-free firmware images with their OSes, I'd say that they are playing against their own camp. Why would vendors ever change their policies if such things are accepted by the open source community?

I read that you have a plan to create a global raidctl tool to manage all of the different RAID controllers, but you are missing the necessary documentation from some vendors. Which vendors would you define as open-source-friendly and which as unsupportive?

Marco Peereboom: Yes, work has started on a tool that will do universal RAID management; it is actually called bioctl(8) because it uses the bio(4) ioctl tunnel device. The tool you mention, raidctl(8), is used for software RAID instead.

The idea behind bioctl(8) is to do as much RAID management as possible inside of the driver whereas the userland portion remains as simple as possible.

Unfortunately, there are still vendors out there that believe that there is IP inside of a hardware API. In my experience this is mostly nonsense, and whenever there is IP hidden inside an API it only means that the device was architected poorly. We are talking about filling in some sort of structure, sending it to the device and awaiting an answer either directly or through a callback. There is no secret sauce hidden in these numbers that enables competitors to magically understand what goes on in the firmware.

Reverse-engineering hardware is all about motivation and resources. If a vendor is motivated enough, it will have to spend considerable resources on people with specialized knowledge and equipment to X-ray a board in order to obtain the schematics. It is even harder to reverse engineer programmable chips like FPGAs and CPLDs. Compared to this, reverse-engineering a software API is trivial. Hiding an API does not help a vendor in any way, shape, or form, and it really is silly to keep it hidden as if the farm depends on it.

This release fixes the mirroring mode in ccd. What types of choices does an OpenBSD user have to set up a RAID array now?

Michael Shalayeff: As usual, two choices: software or hardware. None of the hardware RAID drivers we currently have support any management, but we are working on a few for which we have enough information to make some management usable. Software raid(4), however, allows full management. For simple mirroring or striping, ccd(4) provides even better performance, although it lacks some basic management knobs that I hope to fix soon.

A lot of companies have been using OpenSSH in their products (Sun Microsystems, Cisco, Apple, GNU/Linux vendors, etc.). Did they give anything back, like donations or hardware?

Henning Brauer: Nobody ever gave us anything back. A plethora of vendors ship OpenSSH--commercial Unix vendors (basically all of them), all of the Linux distributors, and lots of hardware vendors (like HP in their switches)--but none of them seem to care; none of them ever gave us anything back. All of them should very well know that quality software doesn't "just happen," but needs some funding. Yet, they don't help at all.

This release includes the new OpenSSH 4.1. What's new?

Damien Miller: 4.1 is just a bugfix release; it didn't include any new features over 4.0. The new features in 4.0 included:

  • Extended port forwarding to allow binding to a user-selected address rather than just localhost or the wildcard address. This turned out to require a fair bit of work and discussion to ensure that we didn't break backwards compatibility.
  • Added the ability to store hostnames added to ~/.ssh/known_hosts in a hashed format. This is a privacy feature that prevents a local attacker from learning other hosts that a user has accounts on from their known_hosts file.

    So instead of hostnames being stored in plain text like:

    > yourhost.example.com ssh-rsa
    AAAB3NzaC1yc2EAAAABIwAAAIEAp832eeMwYH...

    They are hashed first, so they don't reveal the hostname. E.g.:

    > |1|bRGYyrC+bfKZGGd5GZH4wo1AnsI=|xcQ+54QNVwQ+fBCldn0=
    ssh-rsa AAA...

    We added this at the request of some MIT researchers who found that a substantial number of user private keys on shared systems are not encrypted (a really dumb thing to do, BTW). This lack of user care, coupled with the information in known_hosts files, allowed attackers to spread their attacks to multiple systems.

    Right now this is disabled by default, but administrators of sites with lazy users can turn it on with the HashKnownHosts config flag.

    If you do this, you should probably also hash your existing known_hosts file (ssh-keygen -H).

  • Because the hostname hashing makes it more difficult to manually manage known_hosts files, we added some tools to ssh-keygen to perform common operations, like finding listings for hosts and deleting hosts. These work on hashed as well as plain known_hosts files.
  • More improvements to the sftp client. In particular, it now has basic command-line editing and history (no tab-completion yet, though).
  • As usual, lots of other improvements and bugfixes :)
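The known_hosts maintenance tools mentioned above are all ssh-keygen flags; a quick sketch of how they might be used (the hostname below is a placeholder):

```shell
# Hash every hostname in an existing known_hosts file in place
# (ssh-keygen keeps a backup as known_hosts.old)
ssh-keygen -H -f ~/.ssh/known_hosts

# Find the entry for a given host -- works on hashed and plain files
ssh-keygen -F yourhost.example.com -f ~/.ssh/known_hosts

# Delete a host's entry, e.g. after its key legitimately changed
ssh-keygen -R yourhost.example.com -f ~/.ssh/known_hosts
```

The -f flag is optional; without it, ssh-keygen operates on the default ~/.ssh/known_hosts.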

Does ntpd include any new features?

Henning Brauer: ntpd is not about features. ntpd is about solving a relatively simple task the right way. ntpd is about keeping the local machine's clock synchronized with other NTP servers, no more and no less than that. A plethora of little buttons and statistics and query tools doesn't gain the majority of users anything, but imposes a cost--more complex configuration, and more (potentially buggy) code. Our ntpd is different. It is supposed to stay small and solve the task, period.

The only bigger new feature is the -s option, which allows ntpd to set the machine's clock hard on boot, so it is no longer required (or even desirable) to run rdate, ntpdate, etc. beforehand.
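To illustrate how small the configuration stays: a complete /etc/ntpd.conf can be a single line (the pool hostname here is just an example; substitute whatever servers you use), with -s passed via the daemon's startup flags:

```
# /etc/ntpd.conf -- the whole configuration
servers pool.ntp.org
```

With something like ntpd_flags="-s" in /etc/rc.conf.local, the clock is set hard once at boot and then kept in sync from there on.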

I read some complaints about its limited precision. Is this a real problem for most users?

Henning Brauer: No, not at all. It reaches offsets way below 100ms, typically below 50ms. That is about the clock resolution typical PC hardware reaches. None of the people crying for "better" resolution could ever give a valid reason why they'd need it--in fact, most couldn't give any reason at all.

I read that OpenBSD added the support for PIM (Protocol Independent Multicast). What is it? When is it useful?

Damien Miller: These changes came from FreeBSD and NetBSD via Pavlin Radoslavov and Ryan McBride. It is the kernel support required to implement the PIM multicast routing protocols (PIM sparse-mode and PIM dense-mode). These are the most popular ways to route multicast traffic between networks.

Note that OpenBSD doesn't have PIM-SM or PIM-DM daemons that can use this support in the base OS, but you can use Xorp from ports.

Claudio Jeker: PIM replaces the traditional multicast mechanisms like mrouted or MOSPF to do multicast routing for a certain area/network. In most cases, group members are sparsely distributed over this area and protocols like MOSPF are very inefficient in these cases. PIM introduces two new routing protocols, PIM-SM (Protocol Independent Multicast--Sparse Mode) and PIM-DM (Protocol Independent Multicast--Dense Mode). Both protocols share the same packet format but the underlying concepts are different. Currently, xorp (from the ports tree) has support for PIM-SM on OpenBSD, but I don't know how usable it is.

PIM only makes sense in larger networks that need to do a lot of multicasting. A possible example is an IP TV solution that streams every channel as its own multicast group. This reduces the network load on the backbone, while the end user is still able to switch quickly between channels.

OpenBSD 3.7 includes in-kernel PPPoE support. Previous versions already have a PPPoE implementation in userland; why is there a kernel-based one now, too?

Can Erkin Acar: PPP is a protocol for establishing point-to-point links. It is a quite complicated protocol with many options for authentication, encryption, and compression. The existing userland ppp(8) program supports a wide variety of these options and has been in the tree for a long time. PPPoE (PPP over Ethernet) is a method for establishing point-to-point links over Ethernet via PPP. Many ISPs provide broadband services through PPPoE. The userland pppoe(8) program is used for establishing PPPoE links through ppp(8).

The main problem with userland PPP/PPPoE processing is the overhead. A single packet has to travel quite a number of times between the kernel and userland before finally reaching the TCP/IP stack in the kernel. While this is not really noticeable on slow links, on high-speed PPPoE links the overhead may increase CPU utilization and reduce throughput.

When the PPPoE processing is moved to the kernel, much of the overhead is removed. However, the PPP layer inside the kernel (the sppp layer) supports only a minimal subset of the PPP protocol. While that is usually enough for connecting your ADSL link to your ISP, most of the other bells and whistles are not available.

OpenBSD's PPPoE (PPP over Ethernet) code was initially ported from NetBSD by David Berghoff. It was developed and tested outside of the tree until one of the developers (me) had enough time and motivation (Theo) to put it into the tree. I modified the initial port to work with our sppp layer, and moved the functionality of the pppoectl tool into ifconfig(8).

I continue to work on and improve the sppp layer, which is also shared by the WAN drivers lmc(4) and san(4).

Claudio Jeker: The userland PPPoE is a bit tricky insofar as it uses bpf to capture the PPPoE packets from the interface. This was a bit ugly, as it needs root privileges to use bpf, but since 3.6 PPPoE drops privileges to user _ppp and chroots after setting the write filters and locking its bpf descriptor. The in-kernel version of PPPoE accesses the device in a more natural way and has less overhead--the userland PPPoE runs two processes and has to pass all data between the two processes and the kernel. In some cases, it is still better to use the userland PPPoE because its PPP back end is far more complete than sppp(4) of the in-kernel PPPoE.
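As a sketch of how the in-kernel variant might be configured (the Ethernet device name and credentials are placeholders; check pppoe(4) for the exact syntax in your release):

```
# /etc/hostname.pppoe0 -- device and credentials are placeholders
inet 0.0.0.0 255.255.255.255 NONE \
    pppoedev ne0 authproto pap \
    authname 'myisplogin' authkey 'mysecret' up
!/sbin/route add default -ifp pppoe0 0.0.0.1
```

The 0.0.0.0/0.0.0.1 placeholder addresses are replaced by the negotiated ones once the PPP session comes up.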

The 3.7 release page shows some new PF features. How does each one work?

  • Improved carp(4), new carpdev mode for IP-less interfaces.

    Henning Brauer: Previously, CARP relied on the physical interface having an IP address in the same subnet as the carp interface. Now you can leave the physical interface IP-less, by specifying it when configuring the carp interface, like:

    # ifconfig carp0 10.0.0.1 netmask 255.255.255.0 vhid 1 pass XXX carpdev sk0
  • Support limiting TCP connections by establishment rate, automatically adding flooding IP addresses to tables and flushing states (max-src-conn-rate, overload <table>, flush global).

    Henning Brauer: This was made to fight (D)DoS attacks. Basically, IPs with unusual high connection rates are detected, and added to a table. You can then block them based on that table, or assign them to a very slow queue or similar. It is also possible to flush all existing states from that IP when it gets added to the table.
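In pf.conf this can be sketched roughly as follows (the macros, rate, and table name are illustrative, not taken from the article):

```
table <flooders> persist
block in quick from <flooders>
pass in on $ext_if proto tcp to $www port 80 \
    keep state (max-src-conn-rate 100/10, \
    overload <flooders> flush global)
```

Here any source IP opening more than 100 connections in 10 seconds is added to the <flooders> table, its existing states are flushed, and further packets from it are blocked.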

  • Improved functionality of tags (tag and tagged for translation rules, tagging of all packets matching state entries).

    Henning Brauer: Previously, we only tagged the first packet of a stateful connection, so that you had to filter statefully. We now tag all packets.

  • Improved diagnostics (error messages and additional counters from pfctl -si).

    Henning Brauer: Some error messages have been made clearer, and some counters were a bit overloaded, so we split some of them up. This allows a finer view of what pf is doing now.

  • New keyword set skip on to skip filtering on arbitrary interfaces, like loopback.

    Henning Brauer: It does exactly that--"disable" pf on the given interface(s).
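For example, a single pf.conf line typically exempts loopback traffic from filtering:

```
set skip on lo0
```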

  • Filtering on route(8) labels.

    Henning Brauer: It is now possible to assign labels to routes--basically, 32 bytes of arbitrary information that can be used for almost anything. For example, bgpd could record information about the source AS number for that route, or about the peer it learned the route from. pf can then filter based on this information--or apply queuing :)

    bgpd cannot set these labels yet, but that's the plan.

What is new in the IPsec/isakmpd arena?

Hans-Joerg Hoexer: For 3.7, we mainly focused on stability and interoperability: the implementations of the NAT-traversal (NAT-T) and dead-peer-detection (DPD) features that we shipped with 3.6 were not mature enough. These issues have been resolved for 3.7. Among many minor bugfixes, we also made sure that it is possible to run multiple instances of isakmpd on different ports again.

Ryan McBride added a nice feature for typical roadwarrior setups: in addition to IP addresses, isakmpd.conf now also understands interface names and the special keyword default, which selects the address based on the default route. Thus one does not have to change isakmpd.conf when the interface address is assigned dynamically.

Did you develop any new feature to fight spam?

Bob Beck: Yes, I've added a "greytrapping" feature to spamd. This allows a site runner, when greylisting, to make use of old unused email addresses (or any other ones that spammers like) as spamtraps for the greylist. This is done by adding the spamtrap address to spamd's database with the spamdb(8) utility. After adding:

spamdb -T -a "<bigbutts@obtuse.com>"

if a server is running spamd doing greylisting, any greylisted host that attempts to mail the destination address <bigbutts@obtuse.com> will be blacklisted for 24 hours. These machines then show up in the spamd database as TRAPPED entries for the next 24 hours; i.e.:

spamdb | grep TRAPPED
...
TRAPPED|83.144.89.209|1113143172
TRAPPED|83.152.141.13|1113148919
TRAPPED|83.193.167.254|1113159698
TRAPPED|83.23.36.214|1113158993
...

The spamtrap address has no effect on established WHITE servers; i.e., mail servers I regularly correspond with could mail to bigbutts@obtuse.com without being trapped. This only affects hosts that are candidates for greylisting, i.e., hosts I haven't exchanged any mail with at all (or recently).

Good candidates are any unused or formerly used addresses that have been harvested or appear in a public forum. (In fact, you can pretty much count on my putting bigbutts@obtuse.com into my spamtrap list before I even see this article, since spammers like to harvest anything that looks like an email address in a resolvable domain. :)

I have small servers consistently running several hundred TRAPPED entries, and large ones with TRAPPED lists fluctuating between 7,000 and 10,000 entries. These seem to pick up more hosts to blacklist than the typical SPEWS and Spamhaus lists, without any side effects.

This is the first of several new features to take advantage of the 30-minute initial delay imposed by greylisting, which spammers seem to be noticing. Noticing is good: it means both that it works, and that spammers are easier to identify if they don't like talking to spamd. Readers running sendmail can try the following in their sendmail mc files:

define(`confSMTP_LOGIN_MSG', `$j spamd IP-based SPAM blocker; $d')dnl

Reading the changelog I found a funny point: "The Great Apache Cleanup of 2004 to remove code we don't use." What is the status of the forked Apache included in OpenBSD?

Henning Brauer: Well, we're cleaning up the mess, starting by removing code we don't use. In the end, we will hopefully have readable code, and we're pretty sure we will fix security bugs in the process.

Since the Apache people decided to change their license to an un-free one, we cannot import newer versions anyway--which is not really a loss, given that they don't really put any work into the 1.3 tree any more. Our Apache has hundreds of fixes they didn't take back even before the licensing issue. Now it has even more, and they still don't have them. We're fixing more stuff on the way, most often without noticing--we are just fixing bugs.

So, technically, this could be called a fork.

This is the first release that includes X.Org. Why did you choose to import it instead of XFree86 4.5.0?

Matthieu Herrb: The primary reason is that the new revision 1.1 of the XFree86 license is less free than the old MIT license that had been used for years by XFree86. OpenBSD 3.6 already avoided shipping the final XFree86 4.4 release, which also uses the new license. Then, as many other projects moved away from XFree86 because of the license, it became obvious that most new development in the X window system now takes place in X.Org. Having said that, projects like OpenBSD have to stay vigilant that X.Org doesn't turn into a Linux-only project (that would slowly slip to a GNU General Public License).

What new features do package tools support?

Marc Espie: A lot!

The most visible new feature is probably the progress meter. If you add or remove packages, you will now get instant feedback that something is going on. A related feature is that the message system has been completely redesigned to be more useful: it's much harder to miss things now.

In general, the system is more robust, handles more fringe cases better, and is a wee little bit faster. Package tools in 3.7 consume half the memory they did in 3.6.

Shared library handling has been totally rethought. Packages will now check that libraries in the base system are present with the correct version, and also register and handle inter-package library dependencies fully. From the ports people's point of view, it's now much easier to write correct package dependencies than it ever was.

The object-oriented packing-list framework has been cleaned up, and is now used extensively through the whole package system. This is a huge improvement, because some very nice tricks are now feasible with a few lines of Perl. For instance:

  • Packing-list updates are now 99 percent automatic and correct.
  • There's a new pkg_mklocatedb tool that you can use to build a small locate(1) database of the files all packages install. It took about half an hour to write with the new tools.

Alongside shared libraries, the package system now includes smart ways to handle:

  • Fonts
  • Man pages
  • Texinfo files
  • Configuration files
  • Shells

And I've probably forgotten a few things along the way. This means that most special cases (@exec, @unexec, and INSTALL) are gone. In most cases, you just say "OK, this file is a man page" and the system will do the right thing. (Heck, make update-plist will often do the guessing correctly for you.) For instance, one minor thing you'll notice is the instant update of the apropos(1) index.

This is way more robust than the old tools. A lot of simple manual mistakes made when building ports with the old tools will just not happen with the new ones.

OK, I've kept the big one for last.

You can use pkg_add -r foo-1.1 to replace your old foo-1.0 package with the new foo-1.1 package.

That's right, we've got a real update system. pkg_add -r will (more or less):

  • Unpack the new package near its final install place.
  • Check that dependencies still match (both forward and backward).
  • Check that the deinstall of the old package will work.
  • Check that the install of the new package will work.
  • Proceed with the deinstall.
  • Finish the install.
  • Adjust dependencies for the new package.

Yes, Virginia, you no longer have to uninstall the old packages before installing the new ones. pkg_add -r will deal with dependencies correctly. For instance, assume you want to update xv. You know it depends on jpeg, png, and tiff.

So you just have to:

pkg_add -r <newjpeg-pkg> <newpng-pkg> <newtiff-pkg> <newxv-pkg>

and it will just work, by first updating jpeg, then png, then tiff, and finally xv.

It handles shared libraries correctly, keeping old libraries around until all packages that need them are updated, and is generally pretty safe.

What is the roadmap for their development?

Marc Espie: Next on my list is handling really complicated update cases, where you can't update packages individually (for instance, when files move from package to package, you have to update both packages at once).

There's also some redesign for dependencies (again) to take updates into account.

And there's a smarter tool to do updates too. If you read pkg_add -r's description, you'll notice that you have to specify all of the packages you want to add. The counterpart would be a pkg_add -u, where you just say "OK, I want to update those packages, so locate the replacements, and just do it." We're getting there.

As usual, there's some quality-assurance work to do. Even though pkg_add -r has seen a lot of testing internally, I'm certain there will be a minor quirk or two to take care of after the release.

The next very significant change will probably be in the ports tree itself. A lot of the pkg_tools replacement effort was geared at ensuring better quality of the ports tree. Well, this is now mostly done. Porting software has gotten simpler. A lot of mistakes no longer happen. So the remaining mistakes are, of course, more visible, and need to be taken care of.

One important policy you'll already see is that we're much more careful about bumping package names each time we change anything--having update capabilities means people will want updates to work. And they will mix and match packages from various points in time, even though this is not guaranteed to work. Between very precise package names and the registration of shared library versions, the package system is doing its best to "get out of the way" and ensure things will just work, or say in a very obvious way, "Hu, hu, no, sorry, you really shouldn't do that" if the user makes a stupid mistake.

I saw various updates for mac68k. This is not a modern platform, so what happened?

Martin Reindl: As you noted, pretty much.

OpenBSD on mac68k had been rotting away for quite some years. The hardware is quite slow and there is no fancy stuff like USB or wireless. Nevertheless, its exotic nature makes the hardware a valuable proving ground for the kernel APIs and OpenBSD in general. A nice example of how mac68k helped expose bugs was a fix to gas, so that CFLAGS could be set with -pipe system-wide for all architectures.

It all started in fall 2004 when I dusted off a stray NuBus network card. It was not recognized back then, but the fix was easy. Miod Vallat then contacted me, gently pushed me in the right directions, and gave me some good information on where and how to start working if mac68k support was not to be dropped. So I started merging code from NetBSD, and after two weeks I had quite a big diff covering a new interrupt system, much like the one in the hp300 port, and a much more flexible serial driver. This in turn allowed Miod to add evcount support.

Things took off, and Claudio Jeker fixed the mc(4) driver found on the fastest mac68k, the 840AV. Another long-standing bug was fixed by Miod, which made it possible to switch to bsd.rd as the preferred install method (cutting down the install time from about one day to one hour!). With this came the ability to finally use more than one disk. Various other subsystems were also improved, including the driver for the Quadra's onboard SCSI controller, resulting in much better disk performance (mac68k's weak point). During all of this time, Nick Holland, Otto Moerbeek, and many other developers provided rich testing reports and help.

So what we have now is a mac68k port that runs better than ever on Quadra machines, including new support for the AV machines. Projects for the future include improved NuBus support, wscons, and pdisk, as well as many small bugfixes and code merging.

I think people will get excited by the Zaurus port, bringing a secure SSH-capable machine to their pocket. What is the development status?

David Gwynne: It is trivial to find an OpenSSH package and install it on the Zaurus's default Linux distribution. I believe the availability of these packages is a good demonstration of the effectiveness of the work that the portable OpenSSH guys are doing.

Personally, I am more interested in having OpenBSD itself and some of its other features on a wireless device in my pocket. Being able to build a temporary access point out of a Zaurus with a USB Ethernet device and a wireless CF card and using four commands to set it up is extremely cool. Apart from my own needs, I know that people want OpenBSD on the Zaurus for the same reason they want it on i386 or sparc64 or macppc; they want to use something they trust and are comfortable with.
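Those few commands might look roughly like this sketch (the interface names, network ID, and addresses are placeholders, not the author's actual commands; DHCP service and NAT for the clients are left out):

```
# Put the wireless CF card into access-point mode
ifconfig wi0 nwid myap mediaopt hostap up
ifconfig wi0 inet 192.168.7.1 netmask 255.255.255.0
# Bring up the USB Ethernet uplink and start forwarding packets
dhclient aue0
sysctl net.inet.ip.forwarding=1
```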

The 3.7 release on the Zaurus C3000 is surprisingly usable considering the short amount of time it has actually been in-tree. It is a functional system with exactly the same software and ports tree as found on any other architecture OpenBSD runs on. As for Zaurus hardware, 3.7 ships with support for its display, keyboard, touchscreen, Compact Flash slot, and the USB controller. This means that if your USB device works on your desktop or laptop, you can expect it to work on the Zaurus. The same goes for Compact Flash devices.

Development is already progressing post-3.7. Power management is constantly being improved, mostly thanks to Uwe Stuehler. Drivers for the rest of the onboard Zaurus hardware are being written. For example, Chris Pascoe has already provided a basic driver for the sound hardware. The Zaurus has become a popular toy among the developers, so there is a lot of interest in producing good device support.

Are there any particular limitations?

David Gwynne: In my opinion the biggest limitation is the need to keep Linux on the device; it is only used as a glorified boot loader for OpenBSD itself. I would love to see OpenBSD boot up when I press the power button instead of Linux.

Uwe Stuehler: A minimal fraction of Linux must remain on internal flash memory in order to run OpenBSD's boot loader. Those who want to use the whole disk for OpenBSD can do that, assuming they can do without the applications that ship with the Zaurus.

The boot loader is mostly the same as on other OpenBSD platforms. It is conveniently installable via the same package installer that you would use to install OpenSSH on Linux. Once it is installed, OpenBSD can indeed start immediately every time you press the power button or reboot the Zaurus.

There are intrinsic limitations of the ARM processor being used in the Zaurus, such as the lack of a hardware floating-point unit. Our ports tree already contains alternative software that is better optimized for integer arithmetic (like mpg321 as an alternative to mpg123). Whatever gaps currently remain can hopefully be closed as more developers port their software to various ARM platforms.

Is there any plan to support other devices beyond the Sharp Zaurus SL-C3000?

David Gwynne: The Zaurus port originally started out on the SL-C860, but was quickly moved to the C3000 due to the better onboard hardware. The intention has always been to go back and make OpenBSD work on the C860 once we're happy with our work on the C3000. In the short term, it might be easy to support the new C1000, since it is basically a C3000 without a hard disk. Support for other Zaurus handhelds should be fairly trivial once the C860 and C3000 are both working, since the same devices and CPUs on these handhelds are used on other models, too.

Uwe Stuehler: Though there is certainly at least a little interest, I think realistically nothing much apart from recent Zaurus models has a chance of being supported anytime soon.

This release introduces gcc 3.3.5 for the common platforms i386 and macppc. What pros and cons does this version of gcc bring over the old, patched gcc 2.95?

Marc Espie: Obvious con: gcc 3.3 is about 30 percent slower than gcc 2.95.

Pros: a lot of modern C++ code now compiles. There's no way to avoid recent gcc for some architectures, like AMD64. The various language parsers now give much better diagnostics. In particular, the new C preprocessor is very good.

Do you plan to update to gcc 3.4 or 4.0 in the near future?

Marc Espie: Nope. gcc 3.4 adds a whole set of C++ problems we don't want to deal with right now (more stringent syntax, catastrophic memory increases in some cases), and gcc 4.0 is definitely not mature technology yet, and it's even slower than gcc 3.4.

In a previous interview I did with Richard Stallman, he stated that gcc will include ProPolice. I hope this is good news for you. Is there any other technology that you would like to see imported into gcc?

Marc Espie: OK, this might be the scoop you're looking for.

This is not news to me. This is definitely not good news. In this instance, Richard was all talk, and no action. There is absolutely nothing going on that indicates that ProPolice will be a part of future GCC releases.

Let me publicize that more. The GCC people are going to say they haven't had any luck collaborating with Etoh Hiroaki.

I've inquired into the problem, I've offered to help by acting as liaison between the GCC developers and Etoh Hiroaki. I got a lukewarm response at best.

Right now, the ProPolice technology is not considered stable enough for inclusion in GCC. Technically, the stuff Etoh does is a nice hack that plays interesting games with GCC internals. Those games are actually not really supported inside of GCC. (ProPolice assumes the frames have a given internal representation, which is 99 percent the case in practice, but is not part of the GCC internals' contract.)

And this is it--this is as far as ProPolice integration has gone so far. Richard asked for ProPolice to be integrated, which does not cost him anything. It's just a PR move, as far as I'm concerned. No resources have been devoted to actually pushing for ProPolice to be integrated.

The GCC people are handling other stuff that is more important to them, which I quite understand.

There is absolutely nothing going on where ProPolice integration is concerned.

Yes, there's a lot of work. Ask the OpenBSD developers how many issues we had to solve in gcc to take ProPolice from the "proof of concept" stage to "integrated compiler technology in the gcc we ship." Hundreds of hours. Sweat and blood.

Well, right now, ProPolice stops at GCC 3.4. No one has a working ProPolice for GCC 4.0. No one is devoting enough resources to ensure this will happen. And you think we will switch to GCC 4.0? Think again.

Instead, GCC 4.0 will ship with a framework called mudflap, which does about as much as "flap, flap, flap" flying through code. It catches one valid bound violation every April the 1st, and complains about valid code every other day of the year.

No ProPolice in sight.

Federico Biancuzzi is a freelance interviewer. His interviews have appeared in publications such as ONLamp.com, LinuxDevCenter.com, SecurityFocus.com, NewsForge.com, Linux.com, TheRegister.co.uk, ArsTechnica.com, the Polish print magazine BSD Magazine, and the Italian print magazine Linux&C.

