Making a Free Log Server With Elasticsearch and Kibana

If you run a network of any size, it quickly becomes obvious that having a place to aggregate all your logs is a necessity.  There are a number of options out there, some free, others not.  You could always spin up a Linux host running rsyslog and send all logs to a single file or to multiple files.  Unfortunately, this leaves your operations staff with the unpleasant task of using grep and other tools to parse these messages in order to understand what is happening.  You could add a pretty web front-end like Adiscon’s LogAnalyzer, but that starts getting slow after a few weeks as log messages build up.  Another option is Splunk, which is a great log aggregation and analysis platform with many bells and whistles, but the licensing may be too expensive for your organization.

In this article we will present the steps to make your own free log server with many of the bells and whistles that you get from Splunk, and a pretty web front-end for your operations staff.  For our server, we will use a combination of Elasticsearch, rsyslog, and Kibana3.  While much of the information in this article was gleaned from Sematext’s excellent article Recipe: rsyslog + Elasticsearch + Kibana by Radu Gheorghe, we decided to detail the exact configuration here, and add to it some information about periodic maintenance of your log data.

In this article, we are assuming you have a CentOS 6 Linux host with the following installed:

  • httpd
  • unzip
  • wget

Of course this will work on other Linux platforms, but you may need to make some simple alterations.
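If any of these are missing, they can be installed from the standard CentOS repositories. A quick sketch (package names assume a stock CentOS 6 install):

```shell
# Install the prerequisites from the base repositories
yum install -y httpd unzip wget

# Start Apache now, and enable it at boot, so Kibana3 can be served later
service httpd start
chkconfig httpd on
```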

Step 1: Install rsyslog version 7

We need to use the latest stable version of rsyslog in order to make use of the Elasticsearch output plugin.  CentOS 6 comes with rsyslog already, but it’s an older version.  You can get the latest stable rsyslog version (7) from Adiscon’s YUM repository, but first you must install this repo into yum:

cd /etc/yum.repos.d
wget “”
yum update

At this point, your rsyslog should be updated to version 7.  Now we just need to install the Elasticsearch plugin:

yum install rsyslog-elasticsearch
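To confirm that the upgrade and the plugin install worked, you can check the running version and look for the output module. (The module path below is an assumption based on a 64-bit CentOS 6 install; it may differ on your system.)

```shell
# Print the installed rsyslog version; it should report 7.x
rsyslogd -v | head -1

# The Elasticsearch output module should now be present on disk
ls /lib64/rsyslog/omelasticsearch.so
```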

Step 2: Install Elasticsearch

Elasticsearch requires Java, so we’ll install it now:

yum install java-1.6.0-openjdk

Now we can download the Elasticsearch RPM and install it:

wget “”
rpm -ivh elasticsearch-1.1.0.noarch.rpm
/sbin/chkconfig --add elasticsearch
service elasticsearch start

Hopefully at this point, Elasticsearch is up and running.
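A quick way to check is to query Elasticsearch’s HTTP API on port 9200; a healthy node answers with a small JSON document describing itself:

```shell
# Query the local node; a running Elasticsearch instance responds with JSON
# containing the node name and version number
curl -s http://localhost:9200/
```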

Step 3: Install Kibana3

Kibana3 is the pretty web front-end that allows your operations staff to easily see what is going on in the network.  It’s a very nice interface that simplifies searching, so you can easily see only what you are interested in.  Download Kibana3 into your web server’s document root:

cd /var/www/html
wget “”
mv kibana-latest kibana

You should edit the included config.js file and make sure that this line is there:

elasticsearch: "http://"+window.location.hostname+":9200"

If this is instead set to localhost:9200, you’ll want to change that to the line above.

Also, if you want a nice syslog-style dashboard, you can replace the kibana/app/dashboards/default.json file with this one.  Make sure to remove the .txt extension.

Step 4: Configure rsyslog to send logs to Elasticsearch

Kibana3 is expecting to see logs sent from Logstash, so we need to configure rsyslog to store logs in the same fashion.  In order to make this process as simple as possible, you may download this file to replace your existing rsyslog.conf.  Make sure to remove the .txt extension.
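If you would rather edit your existing rsyslog.conf than replace it wholesale, the key pieces are a Logstash-style JSON template and an omelasticsearch action, along the lines of the Sematext recipe mentioned above. A sketch of the relevant portion (template and index names are illustrative):

```
# Load the Elasticsearch output module
module(load="omelasticsearch")

# Format each message as a Logstash-compatible JSON document
template(name="plain-syslog" type="list") {
    constant(value="{")
      constant(value="\"@timestamp\":\"")   property(name="timereported" dateFormat="rfc3339")
      constant(value="\",\"host\":\"")      property(name="hostname")
      constant(value="\",\"severity\":\"")  property(name="syslogseverity-text")
      constant(value="\",\"facility\":\"")  property(name="syslogfacility-text")
      constant(value="\",\"tag\":\"")       property(name="syslogtag" format="json")
      constant(value="\",\"message\":\"")   property(name="msg" format="json")
    constant(value="\"}")
}

# Build a daily index name of the form logstash-YYYY.MM.DD, as Kibana3 expects
template(name="logstash-index" type="list") {
    constant(value="logstash-")
    property(name="timereported" dateFormat="rfc3339" position.from="1" position.to="4")
    constant(value=".")
    property(name="timereported" dateFormat="rfc3339" position.from="6" position.to="7")
    constant(value=".")
    property(name="timereported" dateFormat="rfc3339" position.from="9" position.to="10")
}

# Send all messages to the local Elasticsearch instance
*.* action(type="omelasticsearch"
           template="plain-syslog"
           searchIndex="logstash-index"
           dynSearchIndex="on")
```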

After you’ve replaced your rsyslog.conf file, you’ll need to restart rsyslog:

service rsyslog restart

At this point you should be able to receive syslog messages from your remote equipment.  Of course you’ll have to configure your equipment to send logs to this host.  You should also be able to point your web browser to http://yourserver/kibana and see your new log server!
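On a remote Linux client, for example, forwarding everything to the new server is a one-line rsyslog rule (replace logserver.example.com with your server’s name; a single @ selects UDP, while @@ would select TCP). This assumes the log server’s rsyslog is listening for remote messages via the imudp or imtcp modules:

```
# In /etc/rsyslog.conf on a client machine: forward all messages via UDP/514
*.* @logserver.example.com:514
```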

Step 5: Install and configure Curator

After a month or so, you will notice that the drive space on your log server is filling up.  You’ll need some way to tell Elasticsearch to delete old indices and free up storage space.  In order to simplify this process, we’ll use Curator.  Curator is a Python script, and is installed with pip.  You can get pip from the EPEL repository:

wget “”
rpm -ihv epel-release-6-8.noarch.rpm
yum install python-pip

Now we’ll use pip to install Curator:

pip install elasticsearch-curator

Finally, we need to run Curator nightly (at 4 AM in this example) to delete old indices.  This can be done with cron.  In this example, we will close indices older than 5 days, and delete indices older than 30 days:

0 4 * * * /usr/bin/curator --host localhost --prefix logstash- -c 5 -d 30 --timeout 3600 > /dev/null 2>&1

Step 6: Security considerations

Elasticsearch can be queried directly by connecting to port 9200 of your log server.  It would be a good idea to put this log server behind a firewall of some kind and restrict access.  Also, Kibana3 has no authentication mechanism of its own, so you might consider using Apache basic authentication, or some other method, to control who can reach it.
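As a minimal sketch, assuming iptables and a stock Apache install, you could restrict port 9200 to a trusted subnet and password-protect the Kibana directory like this (the 10.0.0.0/24 subnet, file paths, and username are illustrative):

```shell
# Allow Elasticsearch (9200) only from a trusted management subnet.
# Note that Kibana3 queries Elasticsearch directly from the user's browser,
# so your operations staff must be able to reach port 9200 as well.
iptables -A INPUT -p tcp --dport 9200 -s 10.0.0.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 9200 -j DROP
service iptables save

# Create a password file with one user for Kibana (you will be prompted)
htpasswd -c /etc/httpd/kibana-htpasswd opsuser

# Protect the Kibana directory with Apache basic authentication
cat > /etc/httpd/conf.d/kibana-auth.conf <<'EOF'
<Directory "/var/www/html/kibana">
    AuthType Basic
    AuthName "Kibana"
    AuthUserFile /etc/httpd/kibana-htpasswd
    Require valid-user
</Directory>
EOF
service httpd reload
```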

A central log server is just one tool in your network monitoring and management toolbox.  We can help you customize and secure your new log server, and all the other equipment in your network.  Let HavenSys Technologies design a network monitoring and management solution to meet your growing needs!

The Elasticsearch+Kibana syslog server is also included in the HavenSys Technologies A.L.A.R.M. system.