This entry is part 1 of 4 in the series A newbies guide to ELK

There are many ways to get an ELK (Elasticsearch, Logstash, Kibana) stack up and running – there are a ton of pre-built appliances, docker images, vagrant images, etc.  For this go around, however, I decided to install it piece by piece, as I wanted to test some integration with other visualization products such as Graylog and Grafana.  If you have deployed an ELK stack before you know it isn’t that hard – however, I figured I’d document my process here as it’s the first time I’ve run through it.  I chose to use an Ubuntu 16.04 server build for this, but keep in mind that Elastic provides rpm packages and Windows installers as well – and while the process of installing them is a bit different, there are many similarities in regards to the configuration – so with that said, let’s rock!

With an OOB Ubuntu server install we really only have one prerequisite to take care of before jumping into the ELK portions – and that’s getting Java installed.  The following bit of script will get a complete, fully configured Java instance on the machine.

sudo add-apt-repository -y ppa:webupd8team/java
sudo apt-get update
sudo apt-get -y install oracle-java8-installer

Step 1 – Elasticsearch

With Java installed we can now move on to getting Elasticsearch up and running.  Elasticsearch is a distributed, JSON-based analytics engine and is basically the heart of our ELK stack.  This is where all of the logs we receive and transform with Logstash will be stored.  As mentioned before, we have a number of options for installing Elasticsearch, including simply running an apt-get install elasticsearch, as it’s included in the default Ubuntu repositories.  That said, the included version is a little old, so I decided to pull down the package myself.  I chose to go with version 5.6.2 as I read that some bugs had yet to be addressed in the current 5.6.4 version.  You can find all of the current and old releases of every piece of software within the ELK stack here – choose a version and stay consistent across the remaining pieces of software.  The syntax to pull down v5.6.2 and install it is as follows…

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.6.2.deb
sudo dpkg -i elasticsearch-5.6.2.deb

This completes the installation of Elasticsearch – easy, right?  In fact, every component is pretty much that simple to install.  There is, however, some configuration that needs to be done within Elasticsearch before we can “make it work”.  All of the configuration is done through the elasticsearch.yml file located in /etc/elasticsearch.  So go ahead and load that in your favorite editor (vim – is there any other 🙂 ).  Now I’ve only deployed ES a handful of times, but from what I’ve found the best options to edit right off the hop are as follows.  To change these from their defaults, simply uncomment the line in the file and assign it a value.

  • cluster.name – I usually try and specify a cluster name, even if I only have one node.
  • node.name – I always change this to better reflect the ES node I’m on.  Having consistent and descriptive node names will help you down the road when you are looking at clustering ES instances.
  • path.data – I only change this if I have a second drive attached to the instance where I would like to store data – otherwise, leave it as the default.
  • network.host – this will default to localhost, however, I’ve found it doesn’t always work the greatest when left at the default.  I usually set this to the IP of the server ES is running on.
  • http.port – defaults to 9200, however, I always uncomment this and set it explicitly to 9200.
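Putting those together, a minimal elasticsearch.yml might look something like the snippet below – the cluster name, node name, paths, and IP are example values; substitute your own.

```yaml
# /etc/elasticsearch/elasticsearch.yml -- example values only
cluster.name: lab-elk
node.name: elk-node-01
# path.data: /mnt/esdata     # only if you have a dedicated data drive
network.host: 192.168.1.50   # IP of this server
http.port: 9200
```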

After an /etc/init.d/elasticsearch restart we are pretty much done with everything we need to do in order to make our ES instance work.  We will get into some of the zen host settings in another post once we get to setting up an ES cluster, etc.  But for now, we are ready to move on.  With that said, if you want to do a quick test of your ES setup you can do so by executing ‘curl http://IP_OF_ES:9200/’ from the bash shell.  You should get something similar to what is shown below…
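A successful response looks roughly like the trimmed JSON below – your name, cluster_name, and build details will differ.

```json
{
  "name" : "elk-node-01",
  "cluster_name" : "lab-elk",
  "version" : {
    "number" : "5.6.2"
  },
  "tagline" : "You Know, for Search"
}
```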

Step 2 – logstash

With Elasticsearch all set up let’s move on to the next letter in the stack – L, or Logstash.  Just as with Elasticsearch, we simply choose our version of Logstash, download it, and install it with our package manager.  Now I’m not positive, but I’m assuming that sticking to the same version throughout the stack is a good thing, so I’ve again chosen to install version 5.6.2, this time of Logstash, with the below syntax…

wget https://artifacts.elastic.co/downloads/logstash/logstash-5.6.2.deb
sudo dpkg -i logstash-5.6.2.deb

Again we have very little configuration to do – my common changes to the /etc/logstash/logstash.yml file are as follows…

  • node.name – you can change this if you want.  It will, however, default to the hostname of the server if you don’t.
  • http.host – defaults to "" – however, just as with ES, I find setting this to the actual IP of the server works best.
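As with Elasticsearch, those two changes make for a very short file – something along these lines, with your own values substituted in:

```yaml
# /etc/logstash/logstash.yml -- example values only
node.name: elk-node-01
http.host: "192.168.1.50"   # IP of this server
```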

The updates to the logstash .yml file apply to the service as a whole – before we can actually parse any logs we will also need a configuration file within /etc/logstash/conf.d/ which describes a certain service and any conditionals we want to apply to that service.  Now I will go over these in a bit more detail in a later post as there is quite a bit to them – but to get us started with Logstash let’s just create a configuration file to handle syslog traffic on port 5514.  To do this I’ve created the following file (syslog-5514.conf) inside of /etc/logstash/conf.d/ with the following syntax…

input {
  tcp {
    port => 5514
    type => syslog
  }
  udp {
    port => 5514
    type => syslog
  }
}
filter {
}
output {
  elasticsearch { hosts => [""] }
}

Working through the file we can see we have three main sections: input (defines our inputs, in our case tcp/udp 5514), filter (empty for now, but this is where we could do groks and custom extractions from the logs), and output (in our case, the IP/port of our ES instance).  So in essence, we are saying anything coming to this machine on port 5514 will be indexed and forwarded to our Elasticsearch instance.  I get it if this is getting confusing – this is where I had to spend most of my time – however, I’ll be sure to circle around in another post and talk about the different types of input configurations we can have.  But for now, let’s restart our service…

Now the Logstash Debian package does not come pre-built with an init script – meaning we can’t simply do an init.d restart on it.  Instead, we can handle this package with systemctl.  In order to enable the logstash service and start it, run the following bash commands…

systemctl enable logstash
service logstash stop
service logstash start
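With the service running, you can fire a quick test message at the new listener to make sure logs actually flow through to Elasticsearch.  The snippet below is a minimal sketch that hand-builds an RFC 3164-style syslog line and sends it over UDP – the IP in the example is a placeholder for your own Logstash server.

```python
import socket

def send_syslog(host, port, message, facility=1, severity=6):
    """Send a minimal RFC 3164-style syslog message over UDP."""
    # PRI = facility * 8 + severity; 1/6 = user-level, informational
    pri = facility * 8 + severity
    payload = "<{0}>{1}".format(pri, message).encode("utf-8")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(payload, (host, port))
    finally:
        sock.close()
    return payload

# Example (placeholder IP - use your logstash server):
# send_syslog("192.168.1.50", 5514, "hello from the elk stack")
```

After sending a message or two, the log line should show up in Elasticsearch under the logstash-* index.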

Step 3 – Kibana

Finally we move on to the last package we need to download and install – Kibana.  Kibana provides some pretty nifty visualizations around our data and gives us visibility into the lightning fast results that Elasticsearch brings.  You might notice a pattern here, but to install Kibana use the following syntax…

wget https://artifacts.elastic.co/downloads/kibana/kibana-5.6.2-amd64.deb
sudo dpkg -i kibana-5.6.2-amd64.deb

As with the other packages, Kibana has a few configuration changes that need to be made to its config file located at /etc/kibana/kibana.yml.  I normally change the following…

  • server.port – again, I like to have these explicitly declared – even if it’s still using the default 5601
  • server.host – as with Elasticsearch and Logstash, I explicitly set this to the IP of my server.
  • server.name – this is only used as a display name in Kibana, however, I always change this
  • elasticsearch.url – again, explicitly set this to the IP and port of your ES instance (i.e. http://IP_OF_ES:9200)
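All together, a kibana.yml along these lines should do it – again, the IPs and name are example values to swap for your own:

```yaml
# /etc/kibana/kibana.yml -- example values only
server.port: 5601
server.host: "192.168.1.50"               # IP of this server
server.name: "lab-kibana"                 # display name only
elasticsearch.url: "http://192.168.1.50:9200"
```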

Once done, restart the kibana service using /etc/init.d/kibana restart and point your browser at your new instance (http://kibana_ip:5601).

The first thing we need to do is configure our index pattern – these are the indexes that live inside of Elasticsearch.  The default (logstash-*) is exactly what we need, as Logstash creates this index for us (note: if you haven’t sent any syslog data to your Logstash instance yet then this will fail, as the index isn’t created until the first bit of data gets processed).  Go ahead and click create, then on the following screen click the ‘star’ icon in the top right in order to make this our default index.

Clicking on the ‘Discover’ section should allow us to actually see any of the indexed logs we currently have, as well as new ones as they come in.

As you can see I have a couple of log entries that I’ve shipped over from the auth.log on the local server.  Although we can’t do much with the data yet, it does illustrate that we have successfully gotten everything up and running…  Looking for a good example of a server to send logs from?  Why not send over your ESXi hosts’ syslog information?
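If you do want to point an ESXi host at the stack, something along these lines (run on the host, with the placeholder IP swapped for your Logstash server) should do it:

```shell
# Point the ESXi syslog daemon at the logstash listener (placeholder IP)
esxcli system syslog config set --loghost='udp://192.168.1.50:5514'
# Reload the syslog daemon so the change takes effect
esxcli system syslog reload
# Make sure the outbound syslog firewall ruleset is enabled
esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true
```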

So that does it for part 1!  Congrats!  You have your ELK stack up and waiting for some data!  In the next few parts of this series we will take a look at how we can start transforming and manipulating log files based on their content (i.e. breaking out information from Apache access logs, parsing Windows event logs, logging a simple text file, etc.) through our Logstash configurations – as well as a few of the log shippers within ELK (i.e. Filebeat).

Keep Reading!

Part 1 – Deployment <— you are here 🙂

Part 2 – Forwarding Logs
