

Using Linux as a Small Business Internet Gateway

by Alexander Prohorenko

The Internet is an integral part of the world's businesses. Practically any business that uses computers has Internet access. The need for a connection is obvious: correspondence with partners, access to databases, software upgrades, and so on.

As a rule, many small businesses started with a single dialup connection from one computer. But electronic mail and network access very rapidly became necessary for multiple employees, which made the case for connecting the entire office network to the Internet. Of course, it would be impractical to buy each computer a separate modem and a separate Internet access account. What's needed is an Internet gateway: a separate computer through which everyone can share Internet access.

It still may be sufficient to use a dialup connection, and in time (if necessary) upgrade to a cable modem or DSL connection.

This article describes how to set up and configure such a gateway built on the Red Hat 9 operating system. In other Linux distributions and packages, path names, file names, and file formats can differ. The techniques are all the same, though.

Installing and Tuning The Necessary Services

More and more businesses use high-bandwidth connections through DSL or cable modems. The hardware for these connections usually provides an Ethernet interface. In that case, you can skip tuning the dialup connection.

Tuning a Dialup Connection

To create a dialup connection, you will need pppd and wvdial. pppd supports connections via the PPP protocol. wvdial actually guides your modem in connecting to your ISP. If you've built your own Linux kernel, be sure that you've enabled PPP support. Red Hat builds this in to their pre-built kernels. Let's check for the existence of the appropriate packages for these utilities:

$ rpm -qa | grep ppp
$ rpm -qa | grep wvdial

If necessary, please install these packages. Next, create a symbolic link to the actual device to which the modem is connected. For a modem connected to COM1, do the following:

# ln -s /dev/ttyS0 /dev/modem

To configure PPP, we need to append these lines to the /etc/ppp/options file:

# set remote computer as the default router
defaultroute
# work via a modem device
modem
# turn on RTS/CTS hardware flow control for the modem
crtscts
# get DNS server addresses from the remote computer after connecting
usepeerdns

Now, let's configure wvdial. It usually ships with the wvdialconf utility to generate a configuration, but sometimes that works incorrectly, so I suggest creating the configuration file manually. Edit or create the file /etc/wvdial.conf to contain:

; default 
[Dialer Defaults]
; modem init string
Init1 = ATZ
; ... up to 9 strings
Init2 = ATM1L2
; dial command (ATDT for tone, ATDP for pulse dialing)
Dial Command = ATDP
; your ISP phone number
Phone = 555-12345
; login name and password for connection
Username = internet
Password = hard_password
; reconnect after the connection drops (set to off if you don't need that)
Auto Reconnect = on
; this argument is needed for ppp with version 2.4.x
New PPPD = on
; set this argument, if you use pppd for authorisation
Stupid Mode = on

To test your connection, obtain superuser privileges (by logging in as root or through the sudo command) and type:

# wvdial

To configure wvdial for multiple ISPs or phone numbers, you need to add special sections with specific descriptions, such as:

[Dialer MyProvider]
Phone = 555-43210
Username = dialup
Password = dial_password

In this case, to call the "MyProvider" ISP, pass its name on the command line:

# wvdial MyProvider

Arguments from additional sections override the defaults. You can see the full list of wvdial's arguments with the man wvdial command.

After running wvdial and completing any further authentication with your ISP, your Linux server will be connected to the network. To break the connection, send a signal to wvdial:

# kill `pidof wvdial`

You can easily automate the process of setting up and breaking dialup connections through cron.
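For example, root's crontab (edited with crontab -e) could bring the link up each weekday morning and drop it in the evening; the schedule here is purely illustrative:

# bring the dialup link up at 8:00 on weekdays
0 8 * * 1-5   /usr/bin/wvdial >/dev/null 2>&1 &
# drop the link at 18:00
0 18 * * 1-5  kill `pidof wvdial`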

Tuning the Proxy Server

The next step is to configure a proxy server; the usual choice is Squid. It is rather large and memory-hungry, but it compensates with convenient administration, traffic savings (around 30%), faster web page access, and many other very useful features.

First, it almost goes without saying that we need to install it (if you don't have it yet):

# rpm -ihv squid-2.5.STABLE1-2.i386.rpm

Squid's configuration files live in the directory /etc/squid. It also contains a symbolic link, errors, which points to the directory storing all user error messages. You may need to modify this link to point to the appropriate language directory. For example, if you need messages displayed in the Russian language with Win-1251 encoding, use this command:

# rm -f /etc/squid/errors; ln -s /usr/lib/squid/errors/Russian-1251 /etc/squid/errors

Let's take care of common proxy configuration now. There is no need to describe every argument in detail here — that's covered in many other sources: briefly in the configuration file itself, and at length in the official documentation, FAQs, and many other articles about Squid. Instead, we will describe the minimal configuration changes necessary to run this service. Let's start by editing the file /etc/squid/squid.conf.

In the NETWORK OPTIONS section, set the argument for http_port to the IP address and port on which our proxy server will work.
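For example, assuming the gateway's internal interface has the address 192.168.1.1 (substitute your own) and keeping Squid's default port of 3128:

http_port 192.168.1.1:3128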


In the OPTIONS WHICH AFFECT THE CACHE SIZE section, the cache_mem argument defines the amount of RAM to allocate for cache objects. By default, it is 8MB. If you have very little memory (32MB or less), I suggest decreasing this value to 4MB; with a lot of memory, increase it.

The maximum_object_size argument defines the maximum size of any object to be stored in cache on disk. By default, it's 4096K. Depending on the free disk space in your /var/spool directory, you can decrease it, for example, to 1024K.
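For a machine with little RAM and modest disk space, the two cache-size settings together might read:

cache_mem 4 MB
maximum_object_size 1024 KB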

In the LOGFILE PATHNAMES AND CACHE DIRECTORIES section, the emulate_httpd_log argument defines the type and structure of the log file. By default, it is off, and the log format is rather specific — for example, timestamps are written in Unix style, so converting them into a readable form requires special utilities. When this argument is on, the log file looks the same as Apache httpd's log file. This setting may be critical when configuring your log analyzer, since different analyzers expect different formats.
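To get Apache-style logs, set:

emulate_httpd_log on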

In the OPTIONS FOR TUNING THE CACHE section, see the very interesting quick_abort_min, quick_abort_max, and quick_abort_pct arguments. They govern whether Squid should continue fetching files whose download the user aborts. By default, the first two arguments have a value of 16K and the third one, 95%. Incorrect values can create strange effects — even if users are not working with the proxy at all, it may still be downloading things into the cache. For small and slow networks, we suggest setting quick_abort_max to 1 or 2 and quick_abort_pct to 98 or 99.
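Following that suggestion, the relevant lines would read (quick_abort_min keeps its default):

quick_abort_min 16 KB
quick_abort_max 2 KB
quick_abort_pct 98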

In the TIMEOUTS section, the shutdown_lifetime argument defines how long Squid waits after receiving a shutdown request, giving open TCP user connections time to close normally. By default, it's 30 seconds. For small networks, you can set it to 15 or even 10 seconds.
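For example:

shutdown_lifetime 15 seconds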

One of the most important sections is ACCESS CONTROLS. It defines ACLs — access control lists, a powerful and very useful Squid feature. By default, Squid is configured to allow proxy access only from the local user (through the localhost interface). In the simplest case, add the following line to the acl list:

acl mynetwork src 192.168.1.0/255.255.255.0

where mynetwork is the name of this ACL, src is a keyword that defines the type of ACL (matching on source IP address — in our case a class C network), 192.168.1.0 is the network address, and 255.255.255.0 is the network mask (substitute your own network). Please remember that when you define an IP address, you always need to define the network mask, even when defining only one IP address.

In the list of rules for the http_access parameter, you need to provide this access rule:

http_access allow mynetwork

The order of http_access rules is very important. Rules are processed in order from the top down to the first match. That's why you must add the above line exactly after:

http_access allow localhost

and before:

http_access deny all

Note that this specific example is correct only for the configuration file that comes with the package. In cases where the access rules have changed, the placement of the line that grants access may vary.

One very useful feature is that ACL lists can be loaded from files. Instead of writing addresses directly, you can set a pathname. For example:

acl myuserlist src "/etc/squid/acl/myusers.lst"

states that all IP addresses for myuserlist can be found in the file /etc/squid/acl/myusers.lst. The Squid process, which runs under the unprivileged user squid, must have at least read-only access to this file. The file must list all IP addresses, each on its own line.
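A hypothetical /etc/squid/acl/myusers.lst might therefore contain (addresses are illustrative):

192.168.1.10
192.168.1.11
192.168.1.15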

Of course, ACL possibilities go far beyond matching IP addresses. Lists can match dates and times, domain names, URLs (lists and regular expressions), ports, protocols, browsers, and HTTP methods, as well as user authentication. This gives the administrator powerful control over proxy access. The Squid documentation and other articles give more details on ACLs. I will just add a few useful recipes:

  • Blocking banners and porn sites:

    acl allow_url url_regex "/etc/squid/acl/allow_url"
    acl deny_url url_regex "/etc/squid/acl/deny_url"
    http_access allow allow_url
    http_access deny deny_url

    The file deny_url contains regular expressions that define denied sites; for example (illustrative patterns only):

    banner
    sex

    The file allow_url contains exceptions to this list, for example:

    sexton

  • Support running some applications (e.g., ICQ, Odigo, etc.) over HTTPS. By default, Squid denies HTTPS requests (the CONNECT method) on any port except 443 and 563. As a result, some applications that use HTTPS over their own ports will not work. The obvious solution is to comment out the line:

    http_access deny CONNECT !SSL_ports

    We probably shouldn't do that, because it would open the proxy to any utility that uses SSL. It's better to find out which port the specific application uses. As an example, ICQ uses port 5190. We can modify the line:

    acl SSL_ports port 443 563

    adding port 5190:

    acl SSL_ports port 443 563 5190

    After restarting Squid, everything will work properly.

  • Limit access based on time. In some cases we need to allow access through the proxy only during specific times of the day. The solution, as always, is easy:

    # Define access time:
    acl worktime_am time MTWHF 8:00-11:00
    acl worktime_pm time MTWHF 12:00-16:40
    # Define access lists:
    acl time_unlimited src "/etc/squid/time_unlimited.list"
    acl time_limited src  "/etc/squid/time_limited.list"
    # Set rules:
    http_access allow time_unlimited
    http_access allow time_limited !worktime_am !worktime_pm
    http_access deny all

These examples show how easily we can solve very hard tasks with the help of Squid.

The MISCELLANEOUS section contains the very useful deny_info setting. It defines which error message to show the user when an http_access deny rule matches. Use it as:

deny_info ERROR_MESSAGE acl_name

where ERROR_MESSAGE is the name of a file that contains message text in HTML (without the tags </BODY> and </HTML>). This file should live in the /etc/squid/errors directory. The acl_name is the ACL that defines the active deny rule.
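For example, assuming a message file named ERR_WORKTIME in that directory (the name is illustrative) and the time_limited ACL from the earlier recipe:

deny_info ERR_WORKTIME time_limited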

With our proxy configured, now we can run it:

# service squid start
init_cache_dir /var/spool/squid... Starting squid:      [  ok  ]

On the first startup, Squid builds a special directory tree under /var/spool/squid. This is the disk cache in which objects will be kept; the cache_dir directive controls its location and size. This operation can take up to a few minutes (depending on your PC). Subsequent startups will be much faster.

Now, if we connect to our ISP with wvdial, our gateway is ready to serve the network. On client workstations, we need to configure browsers to use our proxy server for the HTTP, HTTPS, and FTP protocols.

After testing the service, configure it to start at boot:

# chkconfig squid on
