Quick-start Guide to Linux
for Windows Power Users
and would-be system administrators

or how to take a walk in the park and not get lost or hurt yourself


This article complements A Walk in the Park : migration from Windows to Linux.


Advanced Windows users (power users, sysadmins) may be at a loss the first time they encounter Linux, because they try to "translate" what they know from Windows to the Linux environment (and sometimes fail). In order to apply your (Windows) knowledge of system and network administration to Linux, you need to be aware of some major conceptual differences between the two operating systems / environments. In this article, I will try to tackle some of the hurdles you'll have to clear when you try to use Linux with what you remember from using Windows.

This is not a complete migration guide or Linux-to-Windows comparison : not every problem is tackled, and not every feature of Linux is covered. I'm only giving some highlights to help you clear the first hurdles and get used to things - ranging from common Power User stuff to basic and intermediate system administration.

Design principles

In Windows, "everything" is integrated with all the rest, and everything is GUI-oriented. Linux is much more modular, and less GUI oriented. You think of an operating system, and you think Windows because that's what you've been using so far. In Windows, the GUI (Graphical User Interface) is seen (by most people) as part of the operating system. It is not (or it shouldn't have to be. In Linux, it is not). The operating system is software that lets the applications and some peripheral hardware communicate with the hardware in your system (hard disks, RAM, CPU, ...). The operating system busies itself with processing and memory management, rather than showing colored backgrounds and blinking icons.

For Linux, the GUI is just a desktop environment or window manager : yet another application meant to draw windows and buttons on your screen. As a consequence, you have a choice of desktop environments, or you can run the operating system completely without a GUI (very common on servers).

Because there is more than one GUI for Linux, and the people who created those have their own priorities and their own ways of dealing with certain issues, you're likely to be confused : you will not find "The Control Panel", but you may find several things that more or less look like they could be control panels. And some others that don't look like control panels at all, but still let you 'control' parts of the operating system. Hm.

Similar functionality is not necessarily implemented identically

Windows knows concepts such as files, folders, users, networking, etc. As does Linux. Although often they look the same or very similar in functionality, the underlying design and implementation can be completely different. Knowing your way around Windows File Sharing may help you grasp the idea behind Network File System. However, simply trying to reproduce "Windows File Sharing" in an NFS (Network File System) configuration may not work, and might not even be a good idea as it could make you neglect features of NFS (or other file sharing solutions) that are not present in its Windows counterpart.

Microsoft presents its solutions as standards : the "One Microsoft Way"

... but in fact, either they've invented their own proprietary solution in contrast with common standards, or they take accepted standards and modify them so that they only work well in a Microsoft environment, making the others look "broken", "inadequate", or "primitive". Still, you consider the Microsoft way "the one true way" because you've never seen any other way of doing it. You'll have to get over that if you want to be successful with any other OS, such as Linux. Examples include the file system, networking solutions, user management, and many others.

The filesystem

Windows filesystems always start with a drive letter. This is a legacy DOS thing : a file resides on a disk, so paths always have the form C:\file.ext or D:\folder\file.ext.

Linux filesystems (and others) use a unified directory tree that is independent of the underlying storage devices : the filesystem starts at "/", and subdirectories look like "/etc", "/var/log" or "/srv/www/mywebsite". The directory /var/log can be just a directory on a partition, a partition of its own, or a completely separate hard disk. The advantage is clear : file and folder paths always stay the same, even when you change the underlying storage (e.g. by adding drives or partitions). Downside : partitions need to be "mounted" into the filesystem - but there are tools for automatic mounting. The mounting mechanism also allows you to create complex configurations, e.g. mount the same partition in multiple locations, incorporate parts of a remote filesystem (i.e. on another computer) into your local filesystem, and so on.
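
For example, here's a minimal sketch of mounting a partition into the directory tree - the device name and mount point are assumptions, yours will differ :

	mount /dev/sdb1 /var/log    # make the filesystem on sdb1 appear as /var/log (run as root)
	df -h /var/log              # show which device now holds /var/log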

Note : as of Windows 2000 (Server ?), Windows also knows the concept of mount points ("mount a partition to a folder"), but to my knowledge it is not widely used and is often only presented as a workaround for when you run out of drive letters.

Some notable facts about files :

A file can exist in more than 1 place

In Windows you have "links" or "shortcuts to files" (typical extension : .lnk). Windows clearly distinguishes between the file itself and the shortcut that refers to it. In a Linux (Unix) filesystem, the distinction is less absolute. You can create "hard links" to a file, and the hard link will behave as if it were the file itself. It is completely transparent, to the extent that you can consider it "the same file existing in 2 places simultaneously". There are also "symbolic links" (symlinks or soft links), where this effect is less pronounced, but still stronger than with a Windows ".lnk".

It is very common to have a file in a given place and link to it from several other places : it reduces redundancy (e.g. you don't need to update multiple copies), it lets you separate the physical location (file on disk) from the logical location (path to the file as used in a script or configuration file), and it accommodates programs that expect the same file in a different location.
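
Creating links is a one-liner; the paths here are just for illustration :

	ln /etc/myapp.conf /srv/myapp/myapp.conf        # hard link : both paths now are "the same file"
	                                                # (hard links must stay within one filesystem)
	ln -s /etc/myapp.conf /home/jdoe/myapp.conf     # symbolic link : a pointer to the original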

Extensions don't matter

Although it is possible (in desktop environments) to create associations between file extensions and the applications to open them with, there is no real need to use extensions in file names. You can choose to use no extensions, or extensions of your own choosing, with no limit on their length. In fact, as it is legal to use a . anywhere in a file name, it does not really make sense to call the part after the last dot "the extension". It's just part of the file name.

Here's an example of how a backup copy of a file is created by simply adding a suffix to the file name. It's common to do something similar before modifying a configuration file, a source file, or any other important file :

	cp /etc/hosts /etc/hosts.bu
	

Linux filesystems support long filenames, with characters such as - _ . $. You can use spaces in file names, but it is not advisable : spaces in filenames interfere with some commands and with scripting, and make it necessary to use escape characters or quotes around filenames. The same is true in Windows, but you'll turn to scripting more readily on a Linux system, so it's more of an issue there than in Windows.

Hidden files

Files whose names start with a . are hidden. This is not to be considered a security measure (hidden files can be made visible quite easily), but it is helpful to keep directory listings clean. A user's home directory, for one, also contains preferences and configuration files, but usually these are in hidden directories so that they don't clutter a directory listing (or a file browser window) when the user is only interested in viewing his 'data' files (documents, movies, pictures, ...).
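
To see them anyway :

	ls ~       # lists the visible files in your home directory
	ls -a ~    # also lists hidden files and directories, such as .bashrc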

case sensitive

Linux is case sensitive. This is true for commands, but also for filenames : MyFirstLetter.doc is not the same file as myfirstletter.doc. It is common to use lower case in general and reserve upper case for special cases, although this is not a fixed rule.
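
You can see this for yourself :

	touch MyFirstLetter.doc myfirstletter.doc    # creates two different (empty) files
	ls *.doc                                     # both show up in the listing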

Shared files

You're accustomed to Windows File Sharing. This is based on a proprietary Microsoft protocol (SMB), originally used in combination with proprietary network protocols (NetBIOS, LAN Manager) but now also running over standard TCP/IP. The typical way of reaching shared files or folders is via a "UNC" path (\\server\share\...), but these can be made "transparent" to users and applications by mapping a drive letter to a shared folder (creating a so-called network drive). That is a suitable workaround for older applications that do not understand UNC paths.

To allow Linux users to access Windows shares, Samba was created, in a process that its author describes as "learning to speak a foreign language just by going to a foreign country and hearing native speakers talk to each other" (How Samba was written).
Apart from allowing Linux users to access Windows shares, Samba also works as a server (so that Windows systems can access files shared on a Linux machine), and offers related functionality such as a WINS service (for resolution of NetBIOS names, as opposed to DNS). Samba can function as an NT4 domain controller (for centralized user account management and such), but not (yet ?) as an Active Directory domain controller.

One thing to keep in mind is that Samba runs on Linux/Unix, so access to files is governed both by Samba permissions (via a Samba user account) and by filesystem permissions (via a Unix account). Something similar happens with shared folders on a Windows NTFS filesystem, where granted or denied file access is the result of a combination of "share permissions" and "NTFS security", but this is often hidden behind the "simple file sharing" feature (as of Windows XP). FAT32 does not have any filesystem security, so Windows systems with FAT32 are only secured by "share permissions".

Apart from Samba, which was originally only there for compatibility with Windows, Linux incorporates the traditional Unix NFS (Network File System). This works much like "shared folders" but takes advantage of the way the filesystem works : the NFS server exports (makes accessible) part of its filesystem, which other computers can mount into their own filesystem : the 'shared folders' then become part of the local filesystem (a bit like "network drives", but without drive letters). The Samba account vs Unix account problem obviously does not apply to NFS shares. Furthermore, there are plenty of other network protocols that can be used to work with remote files (ftp, ssh, smb, ...), and for most of them there are tools that allow mounting of remote directories (smbmount, ftpmount, ...) - as with NFS.
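
A minimal sketch, with made-up paths and addresses : one line in /etc/exports on the server, one mount command on the client :

	# on the NFS server, in /etc/exports :
	/srv/shared   192.168.1.0/24(ro)

	# on a client :
	mount -t nfs server:/srv/shared /mnt/shared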

Finally, instead of accessing remote files and processing them locally (on your own computer), you can also open a session on a remote server and process the files on the server itself (either on the command line, with secure shell (ssh), or with one of several remote desktop solutions).

For simply copying files between computers, you don't necessarily need shares to copy to : you can just "remote copy" to a remote filesystem with tools such as rcp (remote copy), scp (secure copy over SSH), rsync (remote synchronization), ...
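
For instance (the host name and user are made up) :

	scp report.txt jdoe@fileserver:/home/jdoe/            # copy a file to a remote machine over SSH
	rsync -av ./photos/ jdoe@fileserver:backup/photos/    # synchronize a whole directory tree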

Shared Printers, Network Printers

As with file sharing, you can mimic Windows network printing by using Samba as a print server (which can serve both Windows clients and Linux clients), or use a printer that is shared on a Windows machine. These are suitable if you're in a mixed (Linux + Windows) environment.
However, Unix also has network printing systems of its own, and these are also implemented in Linux : the older lpd solutions, the more recent CUPS (Common Unix Printing System), and others. TCP/IP networking is sufficient - you shouldn't really need any other networking protocol.

Linux configuration is done in text files.

There is no "Registry". Most configuration is done in text files (comparable in concept with .inf and .ini files, but sometimes very elaborate), and sometimes scripts. The advantages are that these are easily readable (by humans and by programs) and easy to modify without any need for designated tools (a text editor is enough). Automated configuration is often accomplished by scripts that simply write to those text files. Most applications provide also "user-friendly" front ends to modify a configuration (dialogs, menus, ...).

More recently, XML files are also used to store configuration parameters for applications, desktop environments, etc. They're still human-readable and editable as text, but can become quite complex.

"Everything is a file"

This is a classic Unix adage. You shouldn't worry about it too much, because unless you're a programmer, you'll hardly notice anything of it. I mainly included it as an example of "different design principles".
"Everything is a file" roughly means that Unix (Linux) has "file handles" or "file descriptors" for things that are not really files (devices, processes, network connections (aka sockets), ...), but the file handle allows you to approach them as files anyway. You can see this in the /proc directory, where you find (file entries that represent) running processes. The advantage of this approach is that you can "see" processes by browsing the file system and "write" to a process the same way you'd write to a file, either from programs, or by output redirection in scripts. This is sometimes used to manipulate a running process without the need to reconfigure and restart it.

users, groups, and file system security

Both Linux and Windows understand the concept of users, groups, and file access or other privileges granted to users or based on group membership. But the way they handle it is completely different.
Microsoft roughly handles it this way : you create user accounts and groups, and make users members of groups.

Then, for each file (or aggregated by directories and subdirectories, or inherited from parent directories) you indicate which users and groups have access - access being "read", "modify", "execute", or a combination of "special access" elements : delete file, modify file, modify directory, modify files in this directory, traverse directory (to allow access to underlying files or directories without any other rights to the directory or the files in it), change ownership of files, etc. etc. etc.

So if you manage your user accounts and groups well, you can set access rights in great detail - but it can become a pain in the lower part of the back to maintain a consistent access policy. Often, people resort to "Everyone Full Control" to make stuff work ("Everyone Full Control" was the default setting in Windows 2000 Professional and Windows 2000 Server, just to make sure things would work 'out of the box').

Linux handles things this way :

There are only 3 permissions : read, write, and execute. For a file, execute means that it can be executed (e.g. a script is an executable text file; the file extension doesn't matter). For a directory, read means you can pull a directory listing (show the contents of the directory), and execute means you can enter (traverse) it. Permissions are not cumulative : "write" does not implicitly mean "read and write". Permissions are expressed as follows :

A file is always owned by a user (by default : the account that created the file) and by a group (by default : the primary group of the creator). File permissions are assigned to 'user' (owner), 'group', and 'others' - thus a file's permissions can be described as rwxr-xr--, meaning : the owner can read, write and execute (rwx), group members can read and execute (r-x), all others can read (r--). Or r-xr----- : the owner can read and execute (r-x), group members can read (r--), and all others have no access (---).
The same goes for directories, but they have a "d" prepended, e.g. drwxrwxrwx.

Now this sounds complicated when I try to explain it in words (it is actually quite simple in bits), and it takes some getting used to when approaching it with the Windows way in mind. Once you get your head around it, it is actually quite simple : look at a file and you immediately know who can do what. E.g. the hosts file :

	bibi@nix:/etc$ ls -al /etc/hosts
	-rw-r--r-- 1 root root 260 2007-01-04 19:06 /etc/hosts
	

it's owned by user 'root' and group 'root'. User root can read and modify it. Members of group 'root' can read it (but not modify it). All others can read it too. That's all there is to it.

Files and directories are given default permissions upon creation and are owned by their creator, but ownership (user as well as group) and permissions can be modified - recursively if so desired (you can set/add/remove permissions for "this directory and all subdirectories in it, and their subdirectories, and so on"). But in practice, you will hardly ever have to change filesystem settings : ordinary users by default can not write anywhere outside their home directory (and still everything works !), files are owned by their creator and given adequate but secure permissions by default, programs are given a user account so they can use 'their' files (configuration, executables, data they need to read, ...), and access to system files is limited to root.

A common situation where you do change file permissions is making a text file executable (to create a script). Example : you (user jdoe) have created a script, but you want anyone and everyone to be able to use it : simply add execute for owner, group, and others. You're the only one who should use it ? Add execute for the owner, and leave the rest as it is. No one else should even read it ? Remove read (and write) for group and others. A selected group of other people should be able to use it as well ? Slightly more complicated : create (or choose) a group that contains your selected users, assign the file to that group, and set permissions for the group.
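
In commands (the script and group names are made up) :

	chmod a+x myscript.sh       # everyone may execute it
	chmod u+x myscript.sh       # only the owner may execute it
	chmod go-rw myscript.sh     # nobody else can even read (or modify) it
	chgrp devteam myscript.sh   # assign it to group 'devteam'
	chmod g+rx myscript.sh      # let members of that group read and execute it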

While the Linux method seems rather limited compared to Microsoft's elaborate users, groups and access control options, it remains possible to handle just about any situation you can imagine. Because of its simplicity and a clever implementation (just a couple of bytes per file), it is also very fast and robust. OK, so you can not assign a file to more than 1 group, so you might need users belonging to multiple groups if you have an elaborate file sharing setup - that only means you have to pay attention to how you manage users and groups to support your permission scheme. You probably had to do that on a Windows file server as well, and there you'd have to combine it with an elaborate and complex ACL scheme. And on Linux, you can perform these actions with 1 simple command - e.g. chmod a+rwx myshare adds read, write and execute for "all" (user, group, others) to myshare. Compare that with adding users or groups and ticking check boxes for permissions on a Windows directory, or with the corresponding CACLS or XCACLS statements.

Since just about everything is handled by file permissions in Linux, there is no need to worry about security on registry keys (file system security on config files handles that), the right to run certain applications or scripts (again, simply setting adequate filesystem security suffices), and so on.

Small detail : it's common to assign admin rights (such as execute rights on system administration scripts, or write permissions on system configuration files) to the user account root, but not necessarily to the group 'root'. Therefore, just making your user account a member of the group 'root' does not give administrator privileges to that account (as you might expect from Windows, where you just add accounts to the Administrators group). There are other ways to do that, such as sudo and su (comparable to the Windows imitations "Run As" and "Switch User", but in Linux, it actually works). This also leaves the "group" permissions available for a group other than "root", should that be required.
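
In practice :

	su -                        # open a shell as root (asks for root's password)
	sudo cat /var/log/syslog    # run a single command with root privileges, if
	                            # /etc/sudoers allows it (log path varies per distribution)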

Shells, Scripts, and Command Line

In Windows you have cmd.exe (and command.com), aka "a DOS box" or "a command prompt", that typically runs in a small window on your desktop. Linux offers a choice of 'shells' (as if you had a choice of different cmd.exe's) - the most commonly used is bash. Commands can be run in a console or in a "terminal window". The first will remind you of an old DOS computer (nothing but text on your screen), while the latter is comparable to the DOS window, but a Linux desktop usually offers a choice of 'terminals', each with their own characteristics that you'll come to appreciate. Effortless cut and paste of text in a terminal window, to name just one. Color-coded output is also nice.

Because of the importance of text editing and command lines in Linux, it comes with a variety of text editors with advanced features suitable for programming and scripting. Most shells also support autocompletion (tab) and history (scroll back to previous commands to repeat them). You'll find that these work better than the cmd.exe equivalents.

Similar to .bat files, you can create shell scripts, and you'll find that the control structures etc. offered by a Linux shell are much more powerful than those in the DOS/Windows batch language. Ever tried to use a case statement, random numbers, or formatted date and time strings in batch ? Not to mention regular expressions ("wildcards on steroids"), easy access to the file system, and advanced text manipulation tools (find, replace, modify, ...). No more messing with extensive open - read - manipulate - write - close contraptions in Visual Basic script or whatever it was you used for such tasks.
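
A small taste of bash, just to illustrate (save it as a file, make it executable) :

	#!/bin/bash
	# things that are painful in a .bat file :
	today=$(date +%Y-%m-%d)           # a formatted date string
	roll=$((RANDOM % 6 + 1))          # a random number from 1 to 6
	case "$1" in
	    start) echo "started on $today" ;;
	    stop)  echo "stopped (rolled a $roll)" ;;
	    *)     echo "usage : $0 {start|stop}" ;;
	esac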

You can probably imagine how powerful scripting can be in combination with text-based configuration files. If you're into automating menial sysadmin tasks, Linux is just for you - if you take the time to learn. And then we haven't even mentioned the plethora of more advanced scripting languages (perl, python, php, ...) that you can use to replace your Visual Basic scripts, should you find shell scripts too limited after all.

Do one thing and do it well - and cooperate

Traditional Unix tools (all of which exist in Linux / GNU) are designed to do 1 thing, and do it well. E.g. to write a program, you'd use your preferred text editor to write code (the text editor will offer advanced tools specific to programming), a compiler to compile it, a linker to include libraries, and so on. Mechanisms such as pipes, "tool chains", config files and make files are used to make them cooperate in a streamlined manner. Apart from being less resource-hungry, this allows developers to build specialized tool chains (e.g. for automated nightly builds of their latest versions), but if you're coming from an integrated development environment (Visual Studio ?), the look and feel of it may be quite a shock. But you'll find some IDEs for Linux as well.
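
Pipes are the classic illustration of this cooperation : small tools, each doing one thing, chained together. For instance, to list the 10 most frequent words in a text file (report.txt is just an example) :

	tr -s ' ' '\n' < report.txt | sort | uniq -c | sort -rn | head -10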

"Do one thing and do it well" does not only apply to compile and link tools. Sooner or later you'll come across things where your windows offered a fully integrated application that did everything remotely related to the task at hand, while in linux, you'd have to "collect the tools" yourself. Sometimes, this is a nuisance, sometimes you'll be glad you had a choice.

Run levels

This is something you don't know from Windows. A Unix/Linux system can be running in one of multiple runlevels (traditionally numbered 0 through 6). Each runlevel describes a configuration : with or without networking, with or without GUI, or with/without anything you specify. The use of runlevels differs between Linux distributions, but you may find for instance :

	runlevel 0 : halt (shut the system down)
	runlevel 1 : single user mode (for maintenance)
	runlevel 3 : multi-user, with networking, no GUI
	runlevel 5 : multi-user, with networking and GUI
	runlevel 6 : reboot

The system boots to a runlevel (the default runlevel) and executes a number of scripts that are associated with that runlevel (e.g. activate networking, set up the firewall, start the GUI) to establish a predefined configuration. You can also deliberately choose to boot to, or switch to, any given runlevel. It is quite simple to modify existing runlevels (e.g. let runlevel 3 have a GUI as well) or define specific purposes for other runlevels (e.g. design a configuration optimized for, say, online gaming in runlevel 4, so that the changes don't interfere with your normal working config in runlevel 5. Or whatever.)
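
Checking and switching runlevels is straightforward (SysV-style commands, run as root) :

	runlevel     # show the previous and the current runlevel
	telinit 3    # switch to runlevel 3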

Functionally (though not technically), you can compare this to what Windows does by means of the registry 'Run', 'RunOnce', 'RunOnceEx' and 'RunServices' keys, the old autoexec.bat and config.sys, and startup or shutdown scripts - but then 5 different sets of those, each of which configures the system in its own specific way.

multiple consoles

Another thing you don't know from Windows. Being originally conceived as a multi-user system, a Linux system runs 6 or more "virtual terminals" : 6 (or more) users can be logged in and using the system at the same time. Obviously, on a PC the use of those terminals is limited, as you have only one keyboard and monitor, but it does allow you to leave one session open, log on at another console (you switch with Alt + function key, i.e. Alt+F1, F2, F3, ...) and do something else, like fixing something that went wrong in the original console (kill a process, ...).

Of course, Linux also offers remote sessions (either to a shell or to a desktop). This is often used for remote administration, but you could also have multiple users working on the same computer, without any limit on the number of users (as opposed to Windows XP) other than the processing power of the computer.

Background processes

Yet another thing Windows doesn't really use. Sure, you have "services" running without user interaction, and you can do several things at the same time : a GUI, with multiple windows, allows for that. This is called multi-tasking. Linux can also do that in a console (a text environment, like a DOS computer, without windows). You enter a command at the prompt and put it in the background, so you can execute other commands while the background process keeps running. Usually, you'd achieve the same by running multiple terminal windows in a GUI, but backgrounding is useful in situations where you don't have a GUI, or when you need multiple processes in the same session so they can interact with each other.
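
In bash, a trailing ampersand puts a command in the background :

	sleep 300 &     # start a (long-running) process in the background
	jobs            # list the background jobs of this shell
	fg %1           # bring job 1 back to the foreground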

"documents and settings"

The home directory (/home/username on a Linux system) typically holds any and all files a user should ever need to access. This includes "documents" as well as personalized configuration settings, system and application preferences, etc. On recent Windows systems, something similar exists in "Documents and Settings".
If the user installs software on his own (instead of the system administrator), this will either fail or be installed to (a subdirectory of) the user's home directory, as the user typically can not modify any other directory.

What's with the daemons ?

A daemon is a process that runs (usually in the background) and does nothing except listen until someone (some other program) requests something. Daemons can be understood as "services". dhcpd, the DHCP daemon, just sits there listening until some computer requests a DHCP lease (an IP address and other network configuration parameters).

Daemons usually run with a limited, specific user account that only allows them access to the files they need to do their job. Compare this with Windows, where services often run under the Administrator account, SYSTEM, or a service account that is often granted admin privileges. Practical consequences during daily use : none. The way Linux handles this, you just have to worry less about exploited services (daemons are not admins, so it's harder to make them do damage to the system), and you can change the root password without having to think about services (whereas Windows services that run as Administrator might fail when the admin password is changed).
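
You can see this at work with ps - note how many daemons run under their own dedicated account :

	ps -eo user,comm | sort -u | head -20    # list processes with the account they run under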

Active Directory, Group policy objects, etc

If you are (were ?) a Windows Administrator, you're probably wondering where the Active Directory is in Linux.
There is no such thing for Linux. This is partially because, in true Microsoft tradition, Active Directory is an agglomerate of functionality and services, all intertwined, interdependent, and integrated into 1 "can be used for anything and everything" application. This goes very much against the Unix way of "do one thing and do it well". Also, the concept of a "domain" as implemented in Active Directory does not exist in a Unix / Linux environment. Whereas Unix/Linux is very good at networking, multi-user operation, multi-tasking, etc., it has always been much more server-based. The need to incorporate stand-alone personal computers into a manageable networked environment, which could be considered the driving force behind Windows networking and the domain concept, was apparently less of an issue in the Unix environment from which Linux originated.

Here are some more thoughts on the subject

Advanced Networking tools

Linux comes with lots of free / built-in networking tools (nmap, ethereal, iptables, ...) and they can all easily use "raw sockets", so their features are generally more advanced and more powerful than their Windows equivalents. Reading and analyzing packets as they appear "on the wire" or "in the air" is easy. Modifying and manipulating them is do-able. A network administrator's dream.
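
Two quick examples (the address range is just an illustration; run these as root) :

	nmap -sS 192.168.1.0/24    # TCP SYN scan : which hosts are up, which ports are open
	iptables -L -n             # list the current firewall rules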

not everything works "out of the box"

Microsoft is a commercial firm : they want their customers to have a pleasant experience, and they want as many customers as possible. Therefore, Windows is preconfigured to work "out of the box", i.e. without additional post-installation configuration. On the rare occasion that post-installation configuration is required, the user is guided through the process by "wizards", and defaults are set in such a way that always accepting the proposed answer will result in a working system. Anyone who can move a mouse and click the "OK" buttons can set up a working Windows system, even a server. The result will be a system that works, but often less than optimally (in terms of resources used) and, more importantly, less securely than is desirable. The typical example is Internet Information Server (IIS), which used to be installed, configured and running by default on any Windows server operating system. So even on a dedicated database server, you'd have a web server running. For no reason. On top of that, IIS is also by default configured to "just work" - offering features and services that you, in your specific situation, may not need, but that are activated anyway, offering opportunities to be exploited by anyone so inclined.

Linux (and lots of other OSes) approach things differently. Usually you (the sysadmin) will have to decide what services your server or computer should run, install the software, and configure it to make it accessible to the computers / users you see fit. So it doesn't work "out of the box" : you have to decide how you need it to work, and make it so.

There is a tendency towards more of a "just works" approach, especially for desktop-oriented systems, but even then some additional configuration may be needed. This is certainly the case for server applications and security-sensitive configurations. Typical example : the Squid proxy server installs by default with a "deny all" configuration, and you need to allow users or computers access to the proxy by editing squid.conf.
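
A simplified sketch of what that edit looks like (the network range is an assumption) :

	# in squid.conf : define the local network, allow it, deny everything else
	acl localnet src 192.168.1.0/24
	http_access allow localnet
	http_access deny all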

mixed feelings

Linux will sometimes look like a strange mix : old, arcane commands on one hand, flashy graphics on the other, and advanced network applications next to almost-but-not-quite-right productivity applications in the middle. That is because Linux contains, at the same time, native Unix stuff (command line tools originating from the 1970s and 1980s), native Linux stuff ranging from the 1990s till today, and ports / reverse-engineered clones / imitations of applications and services that come from the Microsoft realm but are developed for Linux, to offer compatibility with or an alternative to Windows applications.

You might find the Unix and Linux stuff strange at first, but extremely powerful, simple and logical in their own special way once you get accustomed to it. On the other hand, you'll find that, especially in the early stages of their development, Linux imitations of Windows applications lag somewhat behind their Windows counterparts. No wonder, as the pace is set by Microsoft, and the Linux counterparts content themselves with imitation. It might be a good idea to gradually gravitate towards native Linux applications and only rely on "imitations" if you need them for compatibility with a Windows environment or with other Windows applications. On the other hand, you will find that some of these "alternative" applications, such as OpenOffice.org (originally conceived as an MS Office clone), develop momentum and become valuable in their own right rather than mere replacements for Microsoft / Windows apps. Firefox is a splendid example.



While there are many more subjects that could be covered in an article such as this, I hope this is sufficient to help you overcome most of the hurdles in your transition to Linux. It's not just a matter of "finding alternatives" or "doing the same thing with different means" - sometimes it is (also) a question of "looking at things differently" or just "doing it differently". The examples I've given here will hopefully inspire you to develop your own solutions.

Further reading

If you prefer to read some more instead of just having a go at it :


Koen Noens
January 2007