Making a Free Log Server With Elasticsearch and Kibana

If you run a network of any size, it quickly becomes obvious that having a place to aggregate all of your logs is a necessity.  There are a number of options out there, some free and some not.  You could always spin up a Linux host running rsyslog and send all logs to a single file or to multiple files.  Unfortunately, that leaves your operations staff with the unpleasant task of using grep and other tools to parse these messages in order to understand what is happening.  You could add a pretty web front-end like Adiscon’s LogAnalyzer, but that starts to get slow after a few weeks as log messages build up.  Another option is Splunk, which is a great log aggregation and analysis platform with many bells and whistles, but the licensing may be too expensive for your organization.

In this article we will present the steps to make your own free log server with many of the bells and whistles that you get from Splunk, and a pretty web front-end for your operations staff.  For our server, we will use a combination of Elasticsearch, rsyslog, and Kibana3.  While much of the information in this article was gleaned from Sematext’s excellent article Recipe: rsyslog + Elasticsearch + Kibana by Radu Gheorghe, we decided to detail the exact configuration here, and add to it some information about periodic maintenance of your log data.

In this article, we are assuming you have a CentOS 6 Linux host with the following installed:

  • httpd
  • unzip
  • wget

Of course this will work on other Linux platforms, but you may need to make some simple alterations.

Step 1: Install rsyslog version 7

We need the latest stable version of rsyslog in order to make use of the Elasticsearch output plugin.  CentOS 6 comes with rsyslog already, but it’s an older version.  You can get the latest stable rsyslog version (7) from Adiscon’s YUM repository, but first you must add this repo to yum:

cd /etc/yum.repos.d
wget "http://rpms.adiscon.com/v7-stable/rsyslog.repo"
yum update

At this point, your rsyslog should be updated to version 7.  Now we just need to install the Elasticsearch plugin:

yum install rsyslog-elasticsearch
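
To confirm that the upgrade worked, you can check the installed packages and ask rsyslog for its version (it should report a 7.x release):

rpm -q rsyslog rsyslog-elasticsearch
rsyslogd -v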

Step 2: Install Elasticsearch

Elasticsearch requires Java, so we’ll install it now:

yum install java-1.6.0-openjdk

Now we can download the Elasticsearch RPM and install it:

wget "https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.1.0.noarch.rpm"
rpm -ivh elasticsearch-1.1.0.noarch.rpm
/sbin/chkconfig --add elasticsearch
service elasticsearch start

Hopefully at this point, Elasticsearch is up and running.
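
You can verify this by querying the Elasticsearch REST API on port 9200; a healthy node answers with a small JSON document describing itself, and the cluster health API gives a quick status summary:

curl http://localhost:9200
curl "http://localhost:9200/_cluster/health?pretty"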

Step 3: Install Kibana3

Kibana3 is the pretty web front-end that allows your operations staff to easily see what is going on in the network.  It’s a very nice interface that simplifies searching, so you can easily see only what you are interested in.  Download Kibana3 into your web server’s document root:

cd /var/www/html
wget "http://download.elasticsearch.org/kibana/kibana/kibana-latest.zip"
unzip kibana-latest.zip
mv kibana-latest kibana

You should edit the included config.js file and make sure that this line is there:

elasticsearch: "http://"+window.location.hostname+":9200"

If this is instead set to localhost:9200, you’ll want to change that to the line above.
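
For reference, that setting sits near the top of config.js.  The surrounding lines look roughly like the following (other settings are omitted here, and the exact defaults vary between Kibana3 releases):

define(['settings'],
function (Settings) {
  return new Settings({
    elasticsearch: "http://"+window.location.hostname+":9200",
    default_route: '/dashboard/file/default.json',
    kibana_index: "kibana-int"
  });
});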

Also, if you want a nice syslog-style dashboard, you can replace the kibana/app/dashboards/default.json file with this one.  Make sure to remove the .txt extension.

Step 4: Configure rsyslog to send logs to Elasticsearch

Kibana3 is expecting to see logs sent from Logstash, so we need to configure rsyslog to store logs in the same fashion.  In order to make this process as simple as possible, you may download this file to replace your existing rsyslog.conf.  Make sure to remove the .txt extension.
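
If you would rather build the configuration by hand, the approach from the Sematext recipe mentioned above is to accept syslog from the network, format each message as Logstash-style JSON, and write it to a daily logstash-YYYY.MM.DD index with the omelasticsearch plugin.  A minimal sketch follows (listener ports and any local file logging are up to you, and the downloadable file may differ in its details):

# accept syslog from the network over UDP and TCP on port 514
module(load="imudp")
input(type="imudp" port="514")
module(load="imtcp")
input(type="imtcp" port="514")

# load the Elasticsearch output plugin installed in Step 1
module(load="omelasticsearch")

# index name: logstash-YYYY.MM.DD, built from each message's timestamp
template(name="logstash-index" type="list") {
    constant(value="logstash-")
    property(name="timereported" dateFormat="rfc3339" position.from="1" position.to="4")
    constant(value=".")
    property(name="timereported" dateFormat="rfc3339" position.from="6" position.to="7")
    constant(value=".")
    property(name="timereported" dateFormat="rfc3339" position.from="9" position.to="10")
}

# Logstash-style JSON document for each message
template(name="plain-syslog" type="list") {
    constant(value="{")
    constant(value="\"@timestamp\":\"")  property(name="timereported" dateFormat="rfc3339")
    constant(value="\",\"host\":\"")     property(name="hostname")
    constant(value="\",\"severity\":\"") property(name="syslogseverity-text")
    constant(value="\",\"facility\":\"") property(name="syslogfacility-text")
    constant(value="\",\"tag\":\"")      property(name="syslogtag" format="json")
    constant(value="\",\"message\":\"")  property(name="msg" format="json")
    constant(value="\"}")
}

# send everything to the local Elasticsearch node
*.* action(type="omelasticsearch"
           template="plain-syslog"
           searchIndex="logstash-index"
           dynSearchIndex="on")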

After you’ve replaced your rsyslog.conf file, you’ll need to restart rsyslog:

service rsyslog restart

At this point you should be able to receive syslog messages from your remote equipment.  Of course you’ll have to configure your equipment to send logs to this host.  You should also be able to point your web browser to http://yourserver/kibana and see your new log server!
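
For a remote Linux host running rsyslog, forwarding is a single line in its /etc/rsyslog.conf (logserver.example.com is just a placeholder for your log server’s name or address; use @@ instead of @ to forward over TCP rather than UDP):

*.* @logserver.example.com:514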

Step 5: Install and configure Curator

After a month or so, you will notice that the drive space on your log server is filling up.  You’ll need some way to tell Elasticsearch to delete old indices and free up storage space.  In order to simplify this process, we’ll use Curator.  Curator is a Python script and is installed with pip.  You can get pip from the EPEL repository:

wget "http://mirror-fpt-telecom.fpt.net/fedora/epel/6/i386/epel-release-6-8.noarch.rpm"
rpm -ihv epel-release-6-8.noarch.rpm
yum install python-pip

Now we’ll use pip to install Curator:

pip install elasticsearch-curator
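
You can confirm that the install worked and review the available options before scheduling anything:

pip show elasticsearch-curator
curator --help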

Finally, we need to run Curator nightly (4 AM in this example) to delete old indices.  This can be done with cron.  In this example, we will close indices older than 5 days and delete indices older than 30 days:

0 4 * * * /usr/bin/curator --host localhost --prefix logstash- -c 5 -d 30 --timeout 3600 > /dev/null 2>&1
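
One way to schedule this is to drop the job into a file under /etc/cron.d, which also lets you specify the user it runs as (the filename here is arbitrary, and root is assumed):

# /etc/cron.d/curator
0 4 * * * root /usr/bin/curator --host localhost --prefix logstash- -c 5 -d 30 --timeout 3600 > /dev/null 2>&1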

Step 6: Security considerations

Elasticsearch can be queried directly by connecting to port 9200 of your log server.  It would be a good idea to put this log server behind a firewall of some kind and restrict access.  Also, Kibana3 has no authentication mechanism, so you might consider using Apache basic authentication or some other authentication method.
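
As a sketch of both ideas, iptables can restrict who may talk to Elasticsearch directly, and Apache basic authentication can protect the Kibana3 pages (the subnet, username, and file paths below are only examples; adjust them to your environment):

# keep loopback open so rsyslog can reach Elasticsearch locally
iptables -A INPUT -i lo -j ACCEPT

# allow direct Elasticsearch access only from a trusted management subnet
iptables -A INPUT -p tcp --dport 9200 -s 192.168.10.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 9200 -j DROP

# create a password file and a user for Apache basic authentication
htpasswd -c /etc/httpd/kibana-users opsuser

# then add something like this to your Apache configuration and reload httpd
<Directory "/var/www/html/kibana">
    AuthType Basic
    AuthName "Kibana"
    AuthUserFile /etc/httpd/kibana-users
    Require valid-user
</Directory>

Keep in mind that Kibana3 runs in the browser and queries port 9200 directly, so your operations staff still need to be able to reach Elasticsearch from wherever they use Kibana.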

A central log server is just one tool in your network monitoring and management toolbox.  We can help you customize and secure your new log server, and all the other equipment in your network.  Let HavenSys Technologies design a network monitoring and management solution to meet your growing needs!

The Elasticsearch+Kibana syslog server is also included in the HavenSys Technologies A.L.A.R.M. system.

Bridge the Gap Between Home and Office

We’ve come a long way in the past 50 years.  It used to be that a business required its workers onsite in order to operate machinery or monitor business systems.  With the push for mobility, today’s technology allows us to operate the same machinery, produce deliverables, and even have meetings from anywhere.  Nowadays it’s common, and even preferable, to have your workforce in remote/virtual offices or even working from home full time.  It’s easy to see why this paradigm would be attractive for business when we consider the benefits:

  • Less physical office space is needed
  • Fewer desks, chairs, office equipment to keep up with
  • Less infrastructure is required to serve a smaller subset of employees
  • Greater agility of employees to be able to serve customer needs from anywhere at any time

This option is also attractive to employees because it gives them:

  • More home time with families
  • A relaxed working environment
  • Shorter commutes to the “office”

Unfortunately, most businesses have a hard time implementing this kind of practice.  The biggest challenge is finding a solution that gives these remote employees access to the corporate infrastructure they need to do their jobs.  If the domain controllers, Exchange servers, and other systems are located at the corporate office, how can your branch offices function?  The answer is the Branch Office VPN (BOVPN).  This technology allows you to create a kind of virtual tunnel between your main corporate office and your remote locations.  For all intents and purposes, your remote employees will exist on your local network, regardless of their physical location!

Typical Branch Office VPN

HavenSys Technologies uses open-source products to provide a robust and highly scalable BOVPN service, allowing your company to take advantage of these new capabilities and save money that could be better spent elsewhere.  Contact us to see how easily this solution could work for your company.