Syslog Server

If you have several servers, going through each server's logs to detect errors, problems or security breaches quickly becomes a pain. If you can collect, filter and sort these logs on a central server, you have a starting point to automate this and make your life easier. With syslog (or rsyslog), the Linux system log daemon, you can do just that. You can even collect events from Windows event logs.

Concept : What are we talking about here ?

Every operating system has provisions for logging, for system logs as well as application logs. Unix/Linux has syslog, Windows has its Event Logs, mostly known by the program used to view them : eventvwr, the Event Viewer. Logs help you to detect problems, or to find the cause of a specific problem, or to generally keep track of what's going on on your systems.

That's all good and well if you have 1 or 2 (maybe 3) servers. More than that and it quickly becomes a headache to review each individual server's logs. That Windows allows you to view another system's logs from any eventvwr helps, but not really : too much clicking and scrolling and sorting and clicking again before you get something useful. The Linux syslog (a traditional unix service) looks somewhat less sophisticated but -- typically, it's Linux -- can be manipulated with shell commands and standard text tools (less, grep, sort, cat, cut, ...), so once you get the hang of that you don't need to acquire any specialised knowledge or skills : you already know all you need to know to easily track events and detect present or imminent problems.

As it happens, syslog can also talk to a syslog on another system, and that's where it becomes really interesting : you can collect (copies of) events on a central "logging server" and apply your tools and tricks and skills for log analysis there, all in one place. That alone is a good reason to look into central logging. But there is more ... (see further).

Will this also work for Windows servers and software ? Yes. There are fairly easy ways to transfer Event Log entries to a syslog server, in real time, over the network.

Proof of Concept : remote logging


On the clients

This is simple : edit /etc/syslog.conf (or /etc/rsyslog.conf) and add something like

*.*		@syslogserver

This means : forward anything that gets logged to this syslog server to syslogserver. What will happen on syslogserver is not yet our problem. It's interesting to know that the other rules in this (local, client) syslog.conf still apply : messages will still get logged to local log files (typically in /var/log) as well.

Note that *.* means any_facility.any_severity. You can limit what gets sent out to the syslog server by specifying the usual syslog selectors such as auth.* or *.err. This choice depends on what you want to accomplish exactly (more on that later). But be aware that your syslog server will only see what the clients send it ; it won't pull in data by itself. If you want your central logging system to do something fancy with info-level messages but it only receives messages of level error and higher, that ain't going to work without you reconfiguring all the clients (again). So for testing and getting the hang of things, you might want to just send *.*, unless your network is so large and your systems so noisy that your central logging server can't handle all that data.
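The selector syntax also lets you exclude a noisy facility while still sending everything else -- a sketch, assuming your central server is reachable as syslogserver :

```
# forward everything except mail logs to the central server
*.*;mail.none		@syslogserver
```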

On the server

On the server, we only need to configure rsyslog to listen for remote logging. This is done by adding the following to /etc/rsyslog.conf and then restarting (or reloading) the service.

# provides UDP syslog reception
$ModLoad imudp

# start listeners on 514/udp 
$UDPServerRun 514

With this setup, your remote log entries will show up in this server's logs according to the rules already present in the rsyslog conf file. That is a good thing: it means you don't need extra settings to include the syslogserver itself in your "enterprise" logs. As you probably know, syslog records the originating system's hostname in the log entries, so you'll know which is which.
If you do however want your enterprise-wide logs separate from the syslog server's local logs, you can easily send them to separate files by adding rules like

auth,authpriv.*         	/var/log/enterprise/auth.log
*.err				/var/log/enterprise/syslog

Rationale : Why would you want this

We mentioned one reason already : to make your life easier. If you can pull together your logs in one place, reviewing or analysing them gets easier. Even more so if you're using tools for log monitoring and analysis, or if you're thinking of using something like Nagios to read your logs and, say, notify you of a passive service status change if it sees a particular error or warning showing up in the logs. Or any other sort of notification system that alerts you of important log entries (e.g. rsyslog's own 'mail' action). It's obviously way better to set this up once, on a central machine, rather than on each server separately.

Lots of devices and appliances use Linux "under the hood" and/or offer syslog-compatible logging capabilities. Lots of storage (NetApp), hypervisors (VMware ESX), enterprise-grade routers and switches, network printers, ... offer syslog(-compatible) logging. So now you can easily collect and review those logs without having to log on to each device.

Security is also a good motivation. When a system is compromised, an attacker will almost always try to cover his tracks by removing log entries, or deleting logs altogether. Having his actions logged in real time on a separate system offers some protection here.
When a system crashes beyond repair, a copy of its logs might also be one of those things you wished you had, if only to have a clue as to what the f*** happened.

Audit logging, keeping track of who logs on where, failed and successful sudo attempts, etc are also excellent candidates for central (and redundant) logging.


If all of the above isn't enough, think one step further. Sure, you can have copies of all those logs on one server. And you might be thinking : how do I create separate logs for each (client) host and device ? In fact, rsyslogd has powerful mechanisms to let you create log files based on hosts' names or IP addresses, so yes, you can do that.

I prefer a slightly different approach.

You can reconsider what benefits remote logging has to offer, and then set up your central logging to reinforce that.
Say you need audit trails of who logs on where, and what your sudoers are doing. Why not let your central syslog daemon collect all that info, across all hosts, in 1 file (possibly rotated on a daily basis, so one file per day). That gives you a good overview of the entire network, and you can single out specific hosts, or specific accounts, or specific activities whenever you need to (eg with grep).
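To illustrate that grep workflow on such a consolidated file -- a sketch with fabricated sample entries (the hostnames and accounts are made up) :

```shell
# build a tiny fake consolidated auth log to play with
cat > /tmp/auth.sample <<'EOF'
Jun 21 10:01:02 web01 sudo:    alice : TTY=pts/0 ; PWD=/home/alice ; USER=root ; COMMAND=/bin/ls
Jun 21 10:05:17 db01 sshd[1234]: Failed password for root from 10.0.0.5 port 4242 ssh2
Jun 21 10:06:44 web02 sudo:      bob : TTY=pts/1 ; PWD=/tmp ; USER=root ; COMMAND=/usr/bin/apt-get update
EOF

# network-wide : who ran sudo commands, and on which host ?
grep ' sudo: ' /tmp/auth.sample

# single out one account
grep ' sudo: ' /tmp/auth.sample | grep 'alice'
```

The same one-liners work unchanged whether the file covers one host or a hundred.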

Or collect all errors, critical errors, alerts and emergencies from any host on your network(s), dump them in a file, and use that as a starting point to detect problems, configurations that need tweaking, or to create TODO lists for your team. And if all urgent stuff is dealt with, do the same with all warnings, to detect issues proactively and fix them before they become problems. Sure, you can also do that by reviewing logs on each host separately, but having it together in one file on one server makes automating this task way easier. A consolidated view also helps to find out if a problem or a specific class of problems occurs on more than one host. If the file is large, just use grep and such to narrow it down.

Or you can do any combination of the above, as it is possible to log 1 event to multiple logs simultaneously.

Yet another step further : log all that stuff in a database and simply query that for the sort of info you need -- ad hoc or with predefined reports. Yes, you can do that, too.
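As a sketch of the database option : rsyslog has an output module for MySQL (on Debian, the rsyslog-mysql package, which also sets up the default 'Syslog' schema). The host name, user and password below are placeholders :

```
# load the MySQL output module and write everything to the database
$ModLoad ommysql
*.*	:ommysql:dbhost,Syslog,rsysloguser,password
```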

And further

Once you get the hang of this remote logging stuff, you can easily envisage fail-safe redundancy, load balancing, etc. by logging to multiple servers at once, or by having 'proxy' syslog servers accept messages from clients, possibly do some processing (filtering, triage, ..) and submit them to another syslog server, or by having your multiple "central" syslog servers sync each other's logs (yes, without creating an endless loop).

Solution Design

Syslog is a client/server solution : the "clients" submit their messages to the syslog server over the network (typically UDP port 514). That's TCP/IP, so the underlying operating systems don't matter : a syslog client on a Windows system can log to a syslog server on Linux, no sweat.

So, we'll have a Linux server running syslog, accepting logging from the clients : other Linux systems, Windows servers, and any devices that can speak syslog.

The server will in fact be running rsyslogd, an extension of the traditional linux syslog daemon. It is now standard on Debian (6) but supports traditional syslog clients as well. This is a Good Thing because your older linuxes, your devices and appliances with embedded linux or proprietary syslog solutions, your Windows clients, ... may be operating a standard syslog - you can still use these to log to rsyslogd. Using rsyslog as the log server is a requirement for many of the more advanced features I mentioned earlier. It also supports logging over TCP, which has transmission control that protects against the loss of data that would occur with UDP if the receiving udp buffer fills up. Rsyslog also has its own Reliable Logging Protocol (RELP) that offers even more failsafe data transmission. I'll stick with TCP for the time being, and UDP to support the older (syslog) clients. Yes, you can do both TCP and UDP together on one server.

Setting up the syslog server

Start from a minimal Debian system. It will already have some (local) logging set up out of the box, so the only thing we'll have to do is extend that to receive and process log entries from other machines as well.

A recent Debian (Debian 6 or the corresponding Ubuntu Server version, ...) will use rsyslogd. This is an enhanced syslog server with more filtering and processing features, but compatible with the traditional syslogd. If your system happens to still use the traditional syslog, you can still set it up as a central syslog server, but you might run into some limitations.

Server conf for listening on 514/udp and 10514/tcp, with logging to files in a dedicated /var/log/enterprise/ directory : auth.log collects all "authentication" events, syslog gets any event of severity 'err' and higher, i.e. anything that has a syslog severity of .err, .crit, .alert or .emerg.

# provides UDP syslog reception
$ModLoad imudp

# provides TCP syslog reception
$ModLoad imtcp

# start listeners on 514/udp AND 10514/tcp
$InputTCPServerRun 10514
$UDPServerRun 514

auth,authpriv.*         	/var/log/enterprise/auth.log
*.err				/var/log/enterprise/syslog

Log Rotation

As you'll be logging a lot more data than before, you might want to review your logrotate configuration. If you've configured syslog to use new, previously non-existing log files, they probably will not be included in your current logrotate config. So you need to configure that.
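A logrotate snippet for the new files could look like this -- a sketch ; the postrotate command mirrors what Debian's stock /etc/logrotate.d/rsyslog does, so check yours and reuse the same command :

```
/var/log/enterprise/*.log /var/log/enterprise/syslog {
	daily
	rotate 30
	compress
	delaycompress
	missingok
	notifempty
	postrotate
		invoke-rc.d rsyslog reload > /dev/null
	endscript
}
```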

Configuring Linux clients

Here's a sample config that sends a selection of log entries to the central server. Note that you could also just send everything (as shown in the proof of concept).

authpriv,auth.*		@syslogserver		# all of authentication facility, this includes sudo
*.warn			@syslogserver		# everything with severity 'warning' or higher

If the client is an rsyslogd, you can make it use TCP by doubling the @. You can also specify a custom port (supported on both UDP and TCP) with :portnumber, eg to talk to a server using port 10514/tcp. Note that the server needs to be configured to listen on that port, duh.

authpriv,auth.*		@@syslogserver:10514		# all of authentication facility, this includes sudo
*.warn			@@syslogserver:10514		# everything with severity 'warning' or higher


logger is a program that functions as an interface to the syslogd. It sends arbitrary messages with a chosen severity to a syslog facility of your choice. The message will be received and handled by the local syslogd, and if that syslog is configured to forward messages to a remote logging system, they will obviously also end up there.

This is very handy to test your configuration : you use logger to generate a fake test log entry, and you see if it ends up in the logs where you expect it. eg
sillyserver:~# logger -p syslog.crit "test 123"
shows up in syslogserver's syslog as
Jun 21 15:19:52 sillyserver root: test 123

usage : logger [-is] [-f file] [-p pri] [-t tag] [-u socket] [ message ... ] -- see man logger for details


Some applications have custom logging arrangements that might need tweaking, e.g. apache access logs.

If you have cron jobs and such that run important tasks, you can also use logger to log their results, errors, warnings, ... to syslog.
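For instance, a crontab fragment along these lines (backup.sh and the 'backup' tag are hypothetical) logs the outcome of a nightly job to syslog, from where it travels to your central server like any other message :

```
# /etc/cron.d/backup -- log success or failure of a nightly job via logger
30 2 * * * root /usr/local/bin/backup.sh && logger -t backup -p local0.info "backup OK" || logger -t backup -p local0.err "backup FAILED"
```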

Configuring Windows clients

or: how to transfer Windows Event log entries to a central syslog server.

Event to Syslog is a small executable that takes whatever is sent to Windows Event Logs, and forwards it to a syslog server of your choice; events will be visible both in Windows Event Viewer and your syslog files. It runs as a Windows Service, works on recent Windows systems (eg Windows 2008), and exists in 32 and 64 bit versions.

It's trivial to set up :

  1. download the zip archive from the project website (on Google Code)
  2. copy evtsys.exe and evtsys.dll into %windir%\system32\ on the servers you want to log from
  3. install by running the exe with -i (install) + options : evtsys -i -h syslogsrv -l 2
  4. start the service : net start evtsys

During the service installation procedure, an (empty) %windir%\system32\evtsys.cfg is created; this can be used as an exclude file for events you want to discard.

The actual configuration is stored in registry keys. Note that this service will store the IP address of your syslog server, not the FQDN or hostname. Something to remember when you move your logging to another server or mess with your network numbering.

Windows Events will show up in your syslog files something like this :

Jun 20 12:23:44 WINSRV01 Service_Control_Manager: 7001: The WinHTTP Web Proxy Auto-Discovery Service
service depends on the DHCP Client service which failed to start because of the following error: The 
service cannot be started, either because it is disabled or because it has no enabled devices associ
ated with it.

(Yay, grep-able text !)

Filtering, templates, ...

Contrary to a stock syslog, the rsyslogd on your server, where you centralize your logs, offers extensive features for filtering and otherwise processing the incoming log entries. There are roughly 4 distinct ways of filtering messages.

  1. "selectors" - this is the standard syslog technique. The 'selector' is the facility.severity tag of the messages
  2. 'blocks' - this is a traditional BSD style. It's not common on linux, but an rsyslogd server has (partial) support for them
  3. 'property-based filters'. With these, you filter on 1 (and only one) property of the log entry. There are quite a few properties available, among others the logging host's hostname, the logging program's name, the actual message text, ..., and you can use several operators (equal to, starts with, contains, ...)
  4. 'expression-based filters' : by far the most powerful filters, but with a performance penalty. These consist of if <expression> then <action> statements where <expression> can be complex, i.e. an (AND/OR) combination of several conditions

Here are some examples to give you an idea of where you can take this central logging thing. The rsyslogd online manual explains all of this in more detail. We focus on some applications related to the rationale we set forward earlier. Your take-away : there are plenty of filtering mechanisms available ; you need to know what it is you're trying to achieve before you can make a decision on how to filter.


Triage : separate logs for separate purposes

By triage, I mean : making a distinction between categories of log entries, and sending them to separate log files.

Say you want all authentication auditing separate from (other) errors and warnings - the purpose of the first is to have records in case you get audited. The other is something you want to actively work with, to fix occurring problems and anticipate imminent ones.

# simple triage, by facility.severity selector. This is standard syslog stuff.

auth,authpriv.*         /var/log/enterprise/auth.log   # all linux authentication, including sudo commands
*.warn	                /var/log/enterprise/main.log   # everything of loglevel 'warning' and higher, from any host

Note that a critical authentication issue (auth.crit) matches both statements, and will accordingly be logged twice, once in each file. This is normal ; see further how you can change this standard behaviour by means of the 'discard' operator.

Turns out you're also getting Security Auditing Events from Windows servers. You could send them to the auth.log, or you could choose to give them a separate log. Here we use a separate log. The filter is an expression-based filter :

# Microsoft security auditing gets dedicated log
if $programname == 'Security-Auditing' then /var/log/enterprise/ms-secaudit.log

A different type of triage : by hostname. Say you have a couple of network devices or storage appliances that you want to have logs from on your Linux server, because you've got better tools there to read, search, monitor or analyse them. Here you could use a property-based filter to create a (fast) filter based on just the 'hostname' property :

#log entries from 'bigrouter' get their own log file          # this is a property-based filter

:hostname, isequal, "bigrouter"     /var/log/enterprise/bigrouter.log

"programname" can, in some cases, be an interesting property as well. Say you're running arpwatch on several hosts spread out across multiple subnets, and you want to consolidate the logs on 1 server. You now want a separate arpwatch.log (for all reporting hosts) so you can monitor the MAC/IP layer of your network in its entirety.
This property-based filter will send all arpwatch log entries to the same file :

:programname, isequal, "arpwatch"     /var/log/enterprise/arpwatch.log

Note that you could also use an expression-based filter here, as in the Microsoft 'Security-Auditing' example. But property-based filters are more performant.

Templates example : Modify the log format

The customary syslog message format (at least on the linuxes I've seen) does not mention the facility or severity of the log entry. For local logging this is usually not a grave issue, because most Linux distributions will have a default rsyslog conf that separates facilities and severities over dedicated files (have a look at the default /etc/rsyslog.conf on your system).

With remote logging, especially when you are consolidating the logs, having an indication of what facility generated the message and how severe it is becomes interesting. Rsyslogd provides a mechanism for adding additional property values to the output string. The clever way to do this is with a template. (Programmers may think of rsyslogd templates as "macros".)

This is an example of an rsyslog template definition to insert (%syslogfacility-text%.%syslogseverity-text%) into the log entries :

$template formatAddPriority,"%TIMESTAMP% %HOSTNAME% (%syslogfacility-text%.%syslogseverity-text%) %syslogtag:1:32%%msg:::sp-if-no-1st-sp%%msg:::drop-last-lf%\n"

This is how you use the template to change the output on the given selector rules

auth,authpriv.*         /var/log/enterprise/auth.log;formatAddPriority
*.warn	                /var/log/enterprise/main.log;formatAddPriority

You can also just set a default template for all rules.
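Setting the default for all file actions is a one-liner ; the directive must appear before the rules it should apply to :

```
# use our custom format as the default for all file outputs
$ActionFileDefaultTemplate formatAddPriority
```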

Templates example : 'on the fly' dynamic log file names

Templates can be used as shorthand for all kinds of strings that include property values. E.g. you could generate dynamic file names based on the value of the "hostname" or 'ipaddress' of the client that submits the message. Or any other property. An example :

template definition

# template for dynamic file name by hostname

$template DynFile,"/var/log/%hostname%.log"


## split all logging in files by originating host
*.*                      ?DynFile

This, without any other filters or whatnot, will generate a separate log file for each host. If you're only doing central logging to create copies of your logs, this might be just what you need, and it would be a far better solution than the hostname filter we showed earlier. You can also use other properties, such as 'programname', in which case these dynamic filenames are an alternative to the (property- or expression-based) 'programname' rules we applied earlier.
The difference is that you'll match *every* host and not only the "bigrouter" host, or *every* program and not just arpwatch, so it's really a whole different approach.
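A sketch of the 'programname' variant -- this splits everything by the name of the logging program instead of by originating host :

```
# template for dynamic file name by program name
$template ProgFile,"/var/log/enterprise/%programname%.log"

## one log file per logging program, for all hosts combined
*.*                      ?ProgFile
```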

Still, it can be an interesting technique for some use cases.

If you want both consolidated logs and per-host logs with dynamic log file names, you can also try to work around this problem by being selective on the client side (which also has its downsides ...), or by using separate servers, or something along those lines.

Discard selectively, by message property or contents

It's not a bad idea at all to be as nonrestrictive as possible on the clients (just send everything) and dump what you don't need on the server. If you've been too selective on the clients and later find out that in some cases those .info and .notice messages from some programs or some hosts are actually interesting, you have to go back and reconfigure those clients. You probably prefer to only have to deal with the server configuration. I know I do ...

Another reason you'll want to discard messages is simply that syslog rules "fall through", i.e. the filtering/processing doesn't stop at the first match ; all matching rules are applied unless you take action to prevent that. Discarding messages after they've met all the rules you want applied to them is such an action.

An example of dumping "noise" messages. Say we're filtering on severity 'error and higher', and some stuff we really don't care about still gets through. We can discard those (using any of the selection mechanisms we've discussed before) :

# the nsca "End of connection" messages seem to have too high a priority, discard them
:msg, startswith, " End of connection..."  ~

#this is a bug - the event is bogus
if $programname == 'Security-Auditing' and $msg startswith ' 5038' and $msg contains 'tcpip.sys'  then  ~ 

## anything discarded by the previous filters will never reach the following actions

auth,authpriv.*         /var/log/enterprise/auth.log;formatAddPriority
*.warn	                /var/log/enterprise/main.log;formatAddPriority

When you have a discard action ( ~ ), matching messages will be discarded and never reach other actions defined further down in the config. This means the order of your filters matters.

In the following example, we match, log, then discard. Look back at the triage example. The 'Security-Auditing' events are generated by Windows with a severity of 2 : Error. That means they also match the *.warn rule (everything with severity 'warning' or higher) and will (also) be entered in enterprise/main.log. We don't want that, so we do this :

if $programname == 'Security-Auditing' then /var/log/enterprise/ms-secaudit.log
& ~ 

Note the & ~. The ~ is a "discard" action ; the & signifies : apply to the result of the preceding selection. The effect of this construct is that these log entries will be logged to /var/log/enterprise/ms-secaudit.log, and then be dropped so they won't pass into other filters or reach any other logging actions further down.

Likewise, you probably want a rule that prevents "remote" logging from cluttering up the local logs of the central log server. Typically, you'll start with all the "central logging" rules, and then filter by hostname to discard anything that isn't generated by the central log server itself, so that only "local" messages reach the section beneath it, where your local rules are.

## before going to local log rules, drop remote logging, it's been
## processed in the "central logging" section
:hostname, !isequal, "biglogserver"       ~

assuming your central logging server's hostname is 'biglogserver', of course.

Newer versions of rsyslogd have alternative methods to deal with this, but the version in Debian 6 doesn't, yet.

Putting it all together

As you apply more and more action rules to your logging, and then define exceptions to deal with special cases, you'll find that the rsyslog.conf file becomes cluttered. In that case, you can split the configuration over several separate files. This requires the following directive in your conf file :

$IncludeConfig /etc/rsyslog.d/*.conf

This will include all files that are in /etc/rsyslog.d/ and have a name ending in .conf in the configuration. $IncludeConfig appears pretty early in the config file. This matters, because that's where the included statements will be read in, and we've seen that the order of actions sometimes matters, in particular when you discard messages.

So, when you split up your configuration, you probably want to group the statements so that they interfere with each other as little as possible. Furthermore, 'include' files are processed alphabetically. This means you can steer the order in which the selector lines are applied by cleverly choosing the file names of the include files. Having their names start with numerals is probably a good idea (Ubuntu does that).
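A layout along these lines (the file names are just an illustration) keeps the processing order explicit :

```
# /etc/rsyslog.d/ -- processed in alphabetical order
# 10-listeners.conf       modules and TCP/UDP listeners
# 20-central-triage.conf  the "enterprise" triage rules
# 30-discards.conf        discards, including the drop-remote rule
# 90-local.conf           local rules for the log server itself
```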

Barely scratched the surface

It's obvious that we've barely scratched the surface here. These mechanisms of templates, property replacers, selectors, filters and exclusions, ... can be combined to suit almost any need, with just a few lines of configuration. Templates especially make it easy to reapply complex statements throughout a file, or throughout multiple configuration files (which in turn helps to keep your config organized).

Rsyslog is also actively developed, and new features for even more advanced logging will be forthcoming. Multiple rule sets (allowing you to define separate chains of log handling where now you'd use combinations of excludes and discards) are one of the things I'm looking forward to.

Note that, on clients that run rsyslogd (eg recent Linux systems), you can also apply all this mangling and filtering to the local logging, or to the log entries before they get sent to the central syslog server. But if your goal is ease of system administration and consolidation and all that, it might be a better idea to leave most or all of that local logging alone, send everything (of interest) to the central log server(s), and manage your tweaks and modifications centrally.

Want to know more ?

Koen Noens
June 2012