Linux Networking

This is part of a write-up of an attempt to use Linux as a "Small Business Server". It would be a shame to use Linux just to mimic Windows Networking, and not take advantage of the native networking features Linux has to offer. Here are some pointers.

NFS - Network File System

The concept of NFS is pretty straightforward: the server "exports" a part of its filesystem, i.e. makes a directory accessible to remote hosts (clients), and the clients include this directory in their own filesystem (directory tree), so they can access it as if it were a directory on their own hard disk. If you need a Windows comparison: it's like mapping a network drive (assigning a drive letter to a network share), but without drive letters (because Linux doesn't need those). It looks something like this:

	host01:~# mount
		/dev/hda1 on /		type ext3 (rw,errors=remount-ro)
		/dev/hdc2 on /srv 	type ext3 (rw)

		srv01:/srv/shared/ on /home/me/music 	type nfs (rw,hard,intr,addr=

This means that the directory /home/me/music on host01 is actually the directory /srv/shared on server srv01, but to programs and users on host01 it looks and behaves like a local directory: you can browse it with a file browser, copy files to and from it, and so on. This is often used to add disk space, to implement 'roaming' user profiles, or to manage storage space on multiple servers without the clients being aware of which file is on which server.

It's pretty easy to set up (see also NFS on Debian).

on the server

		# install nfs daemon and rpc
		apt-get install nfs-kernel-server nfs-common portmap

		# edit /etc/exports to create shares, e.g. share a directory to the LAN

		# make nfs daemon read exports file for new shares to become effective
		exportfs -ra	
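As a sketch of what /etc/exports might contain (the directory and network are hypothetical examples):

```
# /etc/exports -- hypothetical example:
# share /srv/shared read-write with the whole 192.168.1.0/24 LAN,
# mapping requests from root to the anonymous user (root_squash)
/srv/shared  192.168.1.0/24(rw,sync,root_squash)
```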

NFS HOWTO : server configuration

on the client

		# install client software
		apt-get install nfs-common portmap

		# mount the share
		## manually
		mount srv01:/srv/shared/ /home/me/music

		## automatically at boot : add a line to /etc/fstab
		srv01:/srv/shared/ /home/me/music	nfs 	rw,hard,intr	0	0

NFS HOWTO : client configuration

secure it

You should pay attention to security: you don't want to share your files with the world. Security is managed partly in /etc/exports (which lists the hosts/networks that may use a share), and through filesystem permissions on the shared directory (so you also need to manage user accounts). Additional network security can be implemented by using a firewall (iptables) and/or TCP wrappers (the /etc/hosts.deny and /etc/hosts.allow files) and inetd / xinetd. [ NFS HOWTO : security ]
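As a sketch of the TCP-wrappers approach, assuming the same hypothetical 192.168.1.0/24 LAN, you could deny everything by default and then allow only the RPC daemons for local hosts (the exact daemon names to list depend on your NFS setup):

```
# /etc/hosts.deny -- deny everything not explicitly allowed
ALL: ALL

# /etc/hosts.allow -- allow the portmapper and mount daemon
# for the local LAN only
portmap: 192.168.1.0/255.255.255.0
mountd: 192.168.1.0/255.255.255.0
```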

Remote Management

To remotely manage your server(s), see Remote Control and Server-based Computing. E.g. to easily manage files on a file server, you could install a lightweight file browser on the server and export its display to a workstation, so you can manage files in a GUI environment without having to install a desktop environment on the server. Here, the xfe file manager on a VMware host called 'coumpound' allows for remote file management from an Ubuntu desktop PC.

running xfe file manager on a remote server
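One way to export a single application's display like this is X11 forwarding over ssh, a sketch of which could look as follows (it assumes X11 forwarding is enabled in the server's sshd_config; the hostname is a hypothetical example):

```
# on the workstation: log in with X11 forwarding enabled and start
# the file manager; its window appears on the local display
ssh -X root@srv01 xfe
```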

For remote management from a command prompt, install openssh-server on the server and ssh into it. You can also use ssh for remote command execution, without actually having to start a remote session. Proof of concept: a directory listing on a remote server:

	john@host01:~$ ssh root@server02  ls -al

		drwxr-xr-x  5 root root  1024 2008-01-06 10:13 .	
		drwxr-xr-x 22 root root  1024 2007-12-28 14:27 ..
		-rw-------  1 root root 12279 2008-01-06 10:13 .bash_history
		-rw-r--r--  1 root root   412 2004-12-15 23:53 .bashrc

From a Windows client, you can use PuTTY and similar programs to do the same.

It's trivial to expand this to remote batch processing, i.e. executing multiple commands on a remote system.
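A sketch of what that could look like (the hostname and the script name maintenance.sh are hypothetical examples):

```
# run several commands on the remote host in one ssh session
ssh root@server02 'df -h; uptime; apt-get update'

# or feed an entire local script to a shell on the remote host
ssh root@server02 'sh -s' < maintenance.sh
```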

Copying files over a network

To simply copy some files from one server to another (or from a server to a desktop PC), you don't always need to set up file sharing or file transfer solutions such as NFS, Samba, FTP, etc. For occasional transfers (or routine, repetitive tasks such as backups), you can simply copy between two systems with tools such as scp (SSH-based secure copy) or rsync. rsync is intelligent in the sense that it can synchronize directories by transferring only the changes / differences, not the entire directory.

Note that the following examples require you to have a user account on the target system. The password prompt can be avoided if you implement ssh with public/private keys.

	## copy a file from one computer to another
	scp [options] myfile user@remotehost:/home/user/myfile
	## starting a secure FTP session (for file upload, download, ...)
	sftp user@remotehost		#defaults to /home/user at remote host
	## synchronizing directories (a trailing slash on the source means
	## "the contents of the directory", not the directory itself)
	rsync -avz /foo/bar/ user@remotehost:/foo/bar

using rsync to copy files between Windows and Linux, and running rsync as a server
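A minimal sketch of what running rsync as a server involves: an /etc/rsyncd.conf that defines a 'module' clients can address by name (the module name and path here are hypothetical examples):

```
# /etc/rsyncd.conf -- hypothetical example
[shared]
	path = /srv/shared
	read only = yes
	comment = read-only rsync module for the LAN
```

A client would then copy from it with the double-colon module syntax, e.g. `rsync -avz srv01::shared/ /local/dir`.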

Downloading files from a web server or FTP server can be done with the command-line tool 'wget', so you can use e.g. an intranet web server to store files and download them when needed, even from a script or in a CLI environment.

	## download and install a software package from an intranet web server
	cd /tmp
	wget http://intranet/software/coolapps.deb
	dpkg -i coolapps.deb
	cd -

Secure tunnels

Apart from VPN solutions (OpenVPN, PPTP, ...), it's pretty easy to set up secure tunnels using ssh. What you do is set up ssh to forward the ports of other daemons/services, and connect to them with an ssh client. Because ssh uses an encrypted tunnel, your data communication is secure and private, and the normal service ports of the services you're running need not be exposed to the network. ssh can be used with password authentication or with public/private key pairs.
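As a sketch: to reach a web service on a remote server through an encrypted tunnel, you can forward a local port over ssh (the hostname and port numbers are hypothetical examples):

```
# forward local port 8080 through the ssh connection to port 80 on srv01;
# browsing http://localhost:8080 then goes through the encrypted tunnel
ssh -L 8080:localhost:80 user@srv01
```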

Secure tunnels over untrusted networks

When using ssh, you normally authenticate with a username and password on the remote system. SSH can also be set up to connect without a password, e.g. through public key authentication, host-based authentication, certificates, and so on.
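Setting up public key authentication is essentially a matter of generating a key pair on the client and installing the public key on the remote host; a sketch (the hostname is a hypothetical example):

```
# generate a key pair on the client (press Enter for an empty
# passphrase if you want fully passwordless logins)
ssh-keygen -t rsa

# copy the public key to the remote host's authorized_keys
ssh-copy-id user@srv01

# subsequent logins use the key instead of a password
ssh user@srv01
```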

Server-based computing and Thin Clients

Whereas Windows networking grew out of standalone systems networked together, mainly centered around file sharing, Unix / Linux comes from centralized, multi-user, server-based computing. This heritage can still be put to good use. The concept is that not only user home directories, but also applications, user accounts, etc. exist only on a server. Clients run sessions on the server, but get the display on their workstation / PC / terminal. [see also Terminals, Remote Desktops and Server-based computing]. This eliminates all the trouble a Windows environment has to go through in terms of central user management, software distribution, patch management, application delivery, computer configuration and security ... you name it. When combined with network booting (as opposed to booting from a hard disk), you can even have diskless clients. LTSP - Linux Terminal Server Project and Edubuntu are two examples of this concept.

Network booting

When looking at server-based computing and thin clients, you might want to take this one step further and use diskless clients that boot their operating system from a network server - see diskless clients. The same mechanism can also be used to boot a Linux installer, so you can set up workstations over the network, without installer CDs. This is especially convenient for unattended, customized installations: just start up the PCs, and see them install your preconfigured operating system over the network. See network boot installer.
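Network booting typically combines DHCP (which tells the client which boot file to fetch) with TFTP (which serves it). With dnsmasq as the DHCP/TFTP server, a sketch of the relevant configuration could look like this (the address range and paths are hypothetical examples):

```
# /etc/dnsmasq.conf -- hypothetical PXE boot fragment
# hand out addresses on the LAN
dhcp-range=192.168.1.100,192.168.1.150,12h
# tell PXE clients which boot loader to fetch
dhcp-boot=pxelinux.0
# serve boot files over TFTP from this directory
enable-tftp
tftp-root=/srv/tftp
```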

Koen Noens
January 2008