Virtualization is technology that allows you to run multiple computer systems (the "guests", often called 'virtual machines') on one physical machine (the host). The virtual machines exist as nothing more than a collection of files on the host, but when running, they function as real computers, capable of running software, networking, and so on.
Virtualization is often used to run e.g. a Windows system (plus Windows programs) on a Linux computer, or as a test environment. In enterprise environments, virtualization is also used to support legacy systems (run a virtual Windows NT4 Server for that one application you just can't replace yet, without worrying about the hardware degrading over time), in recovery scenarios (keep a copy of the virtual machines in case one goes down irreparably), in fail-over scenarios (move a virtual machine to a new physical host without stopping it), or to reduce hardware maintenance and costs (multiple virtual machines on one physical machine).
There are many virtualization implementations, ranging from "dosbox", a DOS emulator for Linux (great for DOS games), over desktop solutions such as VirtualBox (Ubuntu's preferred virtual machine solution), VMware Workstation, VMware Player and Microsoft Virtual PC, to large-scale virtualization servers and systems that let you manage multiple virtual machines in virtual networks - all on one computer (Microsoft Virtual Server, VMware, Xen, ...).
I want a server solution : a dedicated server running a virtualization layer with multiple virtual machines. The server will live in the basement (my "data center"), so I need remote access both for managing the host system and the virtualization layer and for using the virtual machines : desktop solutions are not an option. Because I'm mainly interested in test environments to play with multiple Linux releases and network configurations, and in a production environment with possibly some Windows guest systems for tasks my Linux system can't handle properly, I only consider Xen and VMware. Microsoft Virtual Server is out because it only runs on a Windows host, and apart from the licensing, a Windows server has just too much overhead to be considered as a virtualization platform.
So, Xen or VMware ?
Xen uses so-called "para-virtualization" (http://en.wikipedia.org/wiki/Paravirtualization) and therefore requires modified operating systems as guests, or a processor with hypervisor support. In return, it promises better performance (i.e. less virtualization overhead). Porting operating systems to Xen is not (yet) my particular brand of vodka, and I don't have any hardware with built-in virtualization support, so Xen is not an option at the moment.
Conclusion : let's have a look at VMware Server and how to install and use it on a Debian Linux host.
Get a suitable machine and install Debian. A suitable machine has enough disk space for your virtual machines (remember that e.g. the hard disks of your virtual machines will be files on the host system, so one machine will occupy several gigabytes of disk space). You also need lots of RAM, because part of the host's RAM will be used by the virtual machines - if a guest OS needs 512 MB and you also have to run the host OS plus the virtualization software, you'll understand that 1 GB is a minimum. Likewise for your CPU : running an OS plus virtualization, plus the processing power required by the guest operating systems and the applications they run, adds up. Look up some specs in the VMware manual : Chapter 1 Introduction and System Requirements : Host System Requirements.
For the host operating system, we do a minimal Debian install, and add openssh-server to have a remote shell. You might want to review some other remote server management options. For the host system network configuration : preferably give it a fixed IP address (and add it to your DNS or /etc/hosts).
During the host OS setup, pay special attention to disk space for data (virtual machines), swap, and /tmp. Swap is used when either the host or the guest systems run out of memory, so you need it. The recommended size is 2x the system RAM. If you expect to add RAM in the future, set the swap space at 2x the future size or an arbitrary larger size - that saves you the trouble of having to add swap space (and possibly repartition the drives) later on. VMware also makes intensive use of the /tmp folder; disk space in /tmp on Linux hosts should be equivalent to 1.5 times the amount of memory on the host. Again, plan ahead for adding RAM in the future. If you mount a separate partition for /tmp, make sure it's large enough. If you leave /tmp on the same partition as /, make sure there's enough free space left. A separate /tmp partition avoids it filling up with other data (e.g. /var, /home, ...).
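As a worked example of those rules of thumb, take a host that has 1 GB of RAM now but may grow to 2 GB (the numbers are illustrative, not a recommendation):

```
swap : 2 x 2 GB   = 4 GB   (2x the future RAM)
/tmp : 1.5 x 2 GB = 3 GB   (1.5x the future RAM)
```

Sizing against the future RAM rather than the current 1 GB is what avoids repartitioning later.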
Here's an example partitioning layout with 2 disks (on separate IDE channels)
Except for the separate swap and /tmp partitions, the operating system is entirely on one partition, except for /root, which is root's home directory, and /srv. There is no /home partition, because we don't expect any user homes here, but we want to keep root's home on a separate partition for easy recovery (re-install). /srv is where we'll keep the virtual machines and possibly some other server-related data.
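Put concretely, a layout matching that description could look like this (devices and sizes are illustrative - two IDE disks, hda and hdb):

```
/dev/hda1   /       8 GB    operating system (incl. /var, /usr, ...)
/dev/hda2   swap    4 GB    2x (future) RAM
/dev/hda3   /tmp    3 GB    VMware scratch space, 1.5x (future) RAM
/dev/hda5   /root   1 GB    root's home, survives a re-install
/dev/hdb1   /srv    rest    virtual machines and other server data
```

Keeping /srv on its own disk also keeps virtual machine I/O away from the system disk.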
Do have a look at VMware Server Manual : Chapter 1 Introduction and System Requirements : Host System Requirements : Server Host Hardware : Host Hard Disk
and at VMware Selfservice help.
VMware Server is not in the Debian repositories. You install it from tarballs you get at the VMware website. The tarballs include an install script that compiles VMware for you. You therefore have to install the following packages first :
# required packages for VMware installation
apt-get install build-essential
apt-get install linux-headers-$(uname -r)
# packages required for VMware Server to run
apt-get install psmisc xinetd    # e.g. the "killall" command; xinetd for the remote console
apt-get install libX11-dev libxt6 libxtst6 libxrender1    # required libraries
The linux-headers command installs C header files matching your kernel version, which are required for the compilation that is part of the VMware setup procedure. After a kernel upgrade, you'll have to rerun that command to get the corresponding header files, or future updates of VMware will fail.
Go to the VMware website where you can download the VMware software. You will need to accept the license agreement, and register to get a registration code that you need to get VMware installed and running. You will probably download 2 or 3 tar archives. Save them somewhere accessible to your server : e.g. the /tmp directory of your server, or a web server or file server on your LAN.
You need the VMware Server package and one or both of the client packages (web interface and/or console).
To install VMware Server, simply unpack that tar archive and run the install script.
cd /tmp
# get tarballs from intranet web server
wget http://intranet/tarballs/VMware-server-1.0.1-29996.tar.gz
tar xvfz VMware-server-*.tar.gz
cd vmware-server-distrib/
./vmware-install.pl
# verify it's working
/etc/init.d/vmware restart
The install script also calls the configuration script, /usr/bin/vmware-config.pl. You can run this script separately later to reconfigure the VMware Server. There is also an uninstall script : /usr/bin/vmware-uninstall.pl. For the installation and initial configuration, you can accept the proposed default values - but mind the following :
There's quite a good setup guide at http://www.howtoforge.com/debian_etch_vmware_server_howto, but it assumes VMware Server and the VMware Management Console will be on the same machine. Don't follow it blindly !
See also http://pubs.vmware.com/server1/wwhelp/wwhimpl/js/html/wwhelp.htm chapter 7 : Networking
There are 3 ways to network-enable your future virtual machines. If you intend to use them as 'real' machines connected to a real LAN, you choose BRIDGED networking. The virtual NIC (i.e. the virtual machine's network adapter) uses the physical NIC of the host to connect to the physical network. The virtual machine gets its own IP address etc. in your network, and the guest systems appear as (physical) hosts on the LAN. Network infrastructure (e.g. DHCP, ...) must be provided by the physical network (but this can be accomplished by virtual machines that have access to the physical network). This is the preferred solution if you have a LAN and want to use virtualization to create additional hosts without additional hardware.
Alternatively, you can choose to set up a virtual network. The hypervisor (VMware Server) creates a virtual switch to which your virtual machines are connected to form a virtual LAN. This is a 'host-only' virtual network - the only physical machine that has access to it is the host system (a virtual NIC is created on the host to connect it to the virtual LAN). VMware Server provides network services such as DHCP for the guest systems. You can add virtual switches and create additional virtual networks. The host and the guest(s) are networked, so you can communicate with the guests (e.g. set up file sharing, ssh, ...). If you set up the host system to do routing and NAT for the virtual LAN, it can also connect to networks outside the host.
Thirdly, you can also set up a virtual network and let VMware do routing and NAT to connect it to the outside world.
Unless you have reasons to decide otherwise, BRIDGED seems the sane choice : virtual machines participating in a physical network, so you can use standard networking tools to work with them : ssh sessions, NFS or Samba file sharing, rsync, FTP, ...
Reasons not to use bridged networking may include :
For bridged networking, during the VMware Server configuration you'll choose : Networking: yes ; NAT: no ; host-only: no
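In the vmware-config.pl dialogue, the networking questions then get answers along these lines (the prompt wording is from memory and may differ slightly in your version):

```
Do you want networking for your virtual machines? (yes/no/help) yes
Do you want to be able to use NAT networking in your virtual machines? (yes/no) no
Do you want to be able to use host-only networking in your virtual machines? (yes/no) no
```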
If you decide to use host-only or NAT networking in combination with bridged networking, check that the virtual network infrastructure, especially the VMware dhcp service, doesn't interfere with your physical network ! (see /etc/network/interfaces and /etc/vmware/vmnet1/dhcp/dhcp.conf on the host system)
Your VMware Server is now up and running. To be able to use it, you might want to try vmware-cmd (see the vmware command-line utilities listed below), but to really get some use out of it, you need to install either the client software (on your desktop PC) or the web interface (installed on the server, used with a browser from your desktop PC). A third alternative is to install the management console on the server, but export its output to a desktop PC. We won't explore that option (but see server-based computing if you're interested). You could also install a lightweight file manager on the server and export its screen to your desktop PC. This can be useful for managing the VM files when you're moving virtual disks and machines around (see further).
# /usr/bin/vm*
vmnet-bridge     vmstat              vmware-loop
vmnet-dhcpd      vm-support          vmware-mount.pl
vmnet-natd       vmware              vmware-ping
vmnet-netifup    vmware-authtrusted  vmware-uninstall.pl
vmnet-sniffer    vmware-cmd          vmware-vdiskmanager
vmrun            vmware-config.pl
This is quite similar to the server installation : make sure you have all required packages installed, download the VMware tarball, unpack it, and run the install script. The following example is for a default Ubuntu 6.06 Desktop system; presumably, other Ubuntu and Debian desktops will be similar. Refer to the VMware website for Windows client installers.
cd /tmp
wget http://intranet/tarballs/VMware-server-linux-client-1.0.4-56528.zip
unzip VMware-server-linux-client-1.0.4-56528.zip
tar xvfz VMware-server-console-*.tar.gz
cd vmware-server-console-distrib
./vmware-install.pl
Unfortunately, the install and config scripts seem buggy. I got this working by running the scripts several times (!?). If that fails, you can also install the server tarball, which includes the console. If you install the console separately, you may have to fix missing files. Also, the server config script creates menu items etc. for the console, while the console config script does not. You can make your own menu items or desktop launchers (executable : /usr/bin/vmware-server-console), or install the server, then disable it by removing or crippling /etc/init.d/vmware. Or you just run the console on the server and export its interface to your desktop, e.g. with ssh -X. That has the added advantage that your data goes through an encrypted tunnel.
To connect to the VMware Server from the VMware Console, you select "connect to a remote host", give the hostname or IP address of the server, and give the username and password of a user that is allowed to access the VMware Server. In our scenario, that's root.
TODO: figure out how to run VMware server and access it with a user account other than root.
As before : unpack VMware-mui-1.0.1-*.tar.gz and run the install and config scripts. I didn't try those - I prefer using the console.
Installing a guest operating system is not different from installing on physical hardware, although there are some details to look out for. See some special cases and workarounds at http://pubs.vmware.com/guestnotes/wwhelp/wwhimpl/js/html/wwhelp.htm.
When working with a remote console, you cannot install from a physical CD drive on the client system, because it can only be connected to the guest after the guest OS has loaded. You can install from a CD drive on the host server, or - even better - from an .iso CD image.
To set up an operating system, you may need to know what "hardware" you're using, in case the OS's hardware detection doesn't recognize all components. Check the VMware Server Online Library : Virtual Machine Guide : Chapter 1 - Introduction and System Requirements : Virtual Machine Specification.
Note: To use SCSI disks in a Windows XP or Windows Server 2003 virtual machine, you need a special SCSI driver available from the download section of the VMware Web site at www.vmware.com/download. Follow the instructions on the Web site to use the driver with a fresh installation of Windows XP or Windows Server 2003.
Each VM starts with 1 NIC by default. You can add more, up to 4 NICs per guest, allowing for multi-homed guest systems and other advanced networking configurations. As noted earlier, bridged networking is the sane default unless you have reasons to decide otherwise : it gets you virtual machines participating in a physical network, workable with standard networking tools. For other networking solutions, see the previous notes on networking.
Virtual Machines use files on the host as "disks", but can also be made to use physical disks on the host system.
It is possible to move the virtual disks (vmdk files) around and attach them to virtual machines at will (just as you would swap physical disks on a real PC). Because all virtual machines have the same hardware, it's also possible to move disks with operating system files from one virtual machine to another. (This is explored further here.)
Therefore, we can create standardized, baseline Operating system disks, and, when we need them, make a copy and attach them to a newly created Virtual Machine. We could also create baseline virtual machines, and copy those.
It's possible to e.g. use virtual disks (files) for the operating system, and use real disks / partitions for data storage so that the data is accessible from outside VMware.
When using snapshots : snapshots can be turned off, or turned on only for 'risky changes' : http://www.vmware.com/support/gsx3/doc/preserve_snapshot_redo_gsx.html.
Since virtual machines (guest systems) and the disks they use exist as files on the host system, you can use basic file tools such as cp and mv (copy, move, rename, ...) to manage them. One obvious thing to do is to create 'template' virtual machines : cleanly installed and baseline-configured systems that you never change. If you need to create a new virtual machine, copy one of these virgins to a new location, rename it, and choose 'open an existing virtual machine' to run it or to modify its settings. You can also attach additional virtual disks. So you might want to set up the virgin systems with a relatively small system disk, and add extra disks later to provide space for data, programs, or whatever.
To implement this, you need to organize the locations of your virtual machines and virtual disks. Using naming conventions will also help to keep things working while moving and renaming files. You could possibly organize your vmware files like this :
/srv/vm/
    virgins/
        machines/
        disks/
    test/
        machines/
        disks/
    production/
        machines/
        disks/
    iso/
Creating a new virtual machine, eg an Ubuntu Desktop for testing purposes, becomes as simple as
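A sketch of those steps - the machine names and layout are illustrative, and a throwaway directory stands in for /srv/vm so you can try this safely anywhere:

```shell
# Throwaway directory standing in for /srv/vm (a real setup would use /srv/vm itself);
# create a dummy 'virgin' template as a stand-in for a real clean install
VMROOT=$(mktemp -d)
mkdir -p "$VMROOT/virgins/machines/ubuntu-desktop" "$VMROOT/test/machines"
touch "$VMROOT/virgins/machines/ubuntu-desktop/ubuntu-desktop.vmx"
touch "$VMROOT/virgins/machines/ubuntu-desktop/ubuntu-desktop.vmdk"

# 1. copy the virgin machine to the test branch under a new name
cp -r "$VMROOT/virgins/machines/ubuntu-desktop" "$VMROOT/test/machines/ubuntu01"

# 2. rename its files to match the new machine name
cd "$VMROOT/test/machines/ubuntu01"
for f in ubuntu-desktop.*; do mv "$f" "ubuntu01.${f##*.}"; done

ls
# -> ubuntu01.vmdk  ubuntu01.vmx
```

After a real clone you'd also open the new .vmx (via the console's 'open an existing virtual machine', or an editor) and make sure its disk reference matches the renamed .vmdk.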
Alternatively, you can get rid of the actual virtual machines after you've set up the operating system on the guest, and just keep the .vmdk disk files. (A copy of) these files can then be attached to new virtual machines to create new, identical systems. Slight changes in e.g. the amount of RAM or peripheral hardware (CD drive, USB ports, ...) will most likely be taken care of automatically by the operating system present on the disk.
You may find that copying virtual machines or virtual disks this way causes the network interface device name (e.g. eth0) to change (to eth1, eth2, ...). What I suspect is happening : the copied disk (/machine) already has a udev rule for eth0, with the MAC address of the original NIC. The new machine has a NIC with a different MAC, so the udev rule generator adds a new line with eth1 for the new MAC, and consequently the NIC comes up as eth1, even though "eth0" (the original MAC) is no longer present in the system. If my hunch is correct (I haven't checked), this may be considered a bug in udev - the 'old' eth0 rule should be deleted when the NIC with the old MAC is no longer present in the system. Or it may be considered a bug in VMware, in that it should provide a way to make clones unique and clean up such things (maybe VMware does have tools for this, but I don't know).
I guess it's possible to work around it by doing a find and replace in the udev rules, something like
# remove the obsolete eth0 line
sed -i /eth0/d /etc/udev/...rules
# replace eth1 with the new eth0
sed -i s/eth1/eth0/g /etc/udev/...rules
You might also just (quick and dirty) edit /etc/network/interfaces to use the new device name.
When you're cloning/copying systems like this, there are probably a few other things you'll want to check because they need to be unique on a network (hostnames, statically configured ip addresses, ...) so you can just add this to your checklist.
The 'disks/' subdirectory can be used for loose vmdk files that you want to attach to or detach from virtual machines at will (e.g. data disks), or to hold operating system disks that you've 'taken out' of a virtual machine.
The iso directory is meant to hold a collection of CD iso images for use as virtual CD's (eg setup media for operating systems, ...). It could be a mount point for a remote network share, or a local directory.
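If you go the remote-share route, an /etc/fstab entry along these lines could do it (the server name and export path are hypothetical):

```
# hypothetical NFS export holding the iso collection, mounted read-only on demand
fileserver:/export/isos   /srv/vm/iso   nfs   ro,noauto   0   0
```

Read-only is a sensible default here, since the guests only need to read the setup media.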
Virtual machines use relative paths. The path names for all files associated with a (GSX Server 3 / VMware Server) virtual machine are relative, meaning the path to each file is relative to the currently active directory. For example, if you are in the virtual machine's directory, the relative path to the virtual disk file is
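By way of illustration, a machine's .vmx configuration file typically references its disk by bare filename, i.e. relative to the .vmx's own directory (the machine name below is hypothetical):

```
# excerpt from ubuntu01.vmx - the disk is referenced relative to the .vmx directory
scsi0:0.present = "TRUE"
scsi0:0.fileName = "ubuntu01.vmdk"
```

This is what makes a machine directory movable as a unit: as long as the .vmx and .vmdk files travel together, the references keep working.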
In the setup discussed in this write-up, the virtualization layer is an application that runs on the host operating system. VMware (and other vendors) also have high-end solutions where the VMware server *is* the host operating system - a so-called Type 1 (or native, or bare-metal) hypervisor : VMware ESX Server. As with Xen, this reduces the virtualization overhead and thus the performance loss.
There's also hardware virtualization : a feature in certain processors that implements part of the virtualization layer at CPU level, to better support virtualization software / hypervisors - e.g. AMD processors with Direct Connect Architecture and AMD Virtualization (AMD-V) technology, or Intel Virtualization Technology (Intel VT). With these processors, Xen is also capable of running unmodified guest operating systems.