In part 1 of this series we took a look at how to get all of the components of the ELK stack up and running, configured, and talking to each other. Those are important and necessary steps, but having an ELK stack up is not even a quarter of the work required – and, quite honestly, it's useless without any servers actually forwarding it their logs. So with that said, let's take a look at a few different ways we can forward logs off to logstash.
Syslog
First up is the ever-so-familiar syslog! This is perhaps the easiest method of getting our logs over to logstash, as most Linux distributions already use something like rsyslog to handle their logging – which means we simply need to add a line to that configuration. There are some gotchas, though. First, because of privileged ports, logstash can't listen on port 514, the standard syslog port, by default. Normally I just use a higher port, typically 5514, but sometimes that's not a possibility – if that's the case for you, then check out this workaround on how to bind logstash to 514. For the rest of this example, we will deal with sending logs over 5514.
First, we need to place a syslog configuration file (say, syslog.conf) within our logstash conf.d directory (typically /etc/logstash/conf.d/). I did show an example of this in part 1, but for the sake of saving you clicks let's look at it again…
input {
  tcp {
    port => 5514
    type => syslog
  }
  udp {
    port => 5514
    type => syslog
  }
}
filter {
}
output {
  elasticsearch {
    hosts => ["10.0.0.3:9200"]
  }
}
Again, there are three sections in our configuration: the input, where we define how we are getting the data (we will explore different types of input below); the filters (another part in the series altogether); and the output – in this case, elasticsearch. As far as logstash goes, this is all the configuration we need. Now we can simply point our clients at our logstash IP on port 5514 and we should be receiving logs!
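On an rsyslog-based client, forwarding everything to logstash is a one-line rule. Here's a minimal sketch, assuming our logstash server lives at 10.0.0.3 – the drop-in file name is just an example, so use whatever fits your layout:

# /etc/rsyslog.d/90-logstash.conf (example file name)
# Forward all facilities and severities to logstash over UDP on port 5514
*.* @10.0.0.3:5514
# Or use a double @@ to forward over TCP instead
# *.* @@10.0.0.3:5514

Restart rsyslog afterwards (service rsyslog restart or systemctl restart rsyslog) and the logs should start flowing.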
FileBeat
FileBeat is another way of getting our logs over to logstash. It's basically a lightweight, aware application that can tail our log files and send them to logstash or directly into elasticsearch. What I mean by aware is that FileBeat will actually slow down or halt itself if it detects that its target is being overwhelmed. FileBeat will also remember where it was if it ever fails or faults – thus it can pick up right where it left off. Just as we created a configuration file for syslog (/etc/logstash/conf.d/syslog.conf), we need to create one for FileBeat. Below you can see my configuration (/etc/logstash/conf.d/beats.conf).
input {
  beats {
    port => 5044
  }
}
filter {
}
output {
  elasticsearch {
    hosts => ["10.0.0.3:9200"]
  }
}
As you can see, the output remains the same – we want everything to eventually get indexed in elasticsearch – and again, we will cover the filters in the next post in this series. It's the input which differs. All we need to do here is let logstash know that we want to use the beats input (this is built into logstash) and what port we want to listen on. Now beats isn't exactly an industry standard, so we do have a little work to do on our client.
First, we need to install FileBeat – if you recall, I am currently using v5.6.2 of logstash, so I'm going to go ahead and get the same version of FileBeat. The installers for all versions are available here. I've gone ahead and run the following to install FileBeat on my web server running Apache.
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.6.2-amd64.deb
dpkg -i filebeat-5.6.2-amd64.deb
Once we have the package installed we need to make a few changes to its configuration file (/etc/filebeat/filebeat.yml):
- input_type – this can be either stdin or log. In most cases you'll want 'log'.
- paths – if you look at the file you can see that you need to specify the path to the log files. In my case, since I just want Apache access logs from this server, I will set it to /var/log/apache2/access.log.
- tags – allows us to add different tags to different paths. We can then use these with our logstash conditionals (covered in Part 3).
- Be sure to properly comment out and uncomment what you want for outputs – you can see a snippet of my configuration below. As we want to go to logstash first and not directly to elasticsearch, I've uncommented the logstash portion and set up my hosts, while commenting out the elasticsearch output.
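To give you an idea of what that ends up looking like, here's a trimmed-down sketch of a Filebeat 5.x filebeat.yml along those lines – the tag name is just an example I made up, and 10.0.0.3:5044 is my lab's logstash host and beats port, so adjust for your environment:

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/apache2/access.log
  tags: ["apache-access"]

# Elasticsearch output commented out since we want to ship through logstash
#output.elasticsearch:
#  hosts: ["10.0.0.3:9200"]

output.logstash:
  hosts: ["10.0.0.3:5044"]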
Simply restarting our clients' FileBeat service (/etc/init.d/filebeat restart) will begin the flow of logs into logstash. The log files will come in as is, meaning logstash will extract certain items like the timestamp, but the core of each Apache message will still be bundled up in the message field. In the next post in this series, we will take a look at logstash filters, which will allow us to break all of this information out into searchable, individual fields.
WinLogBeat
Let's have a look at one more client package that we can use – this time to send our Windows event logs over to logstash for processing. WinLogBeat comes from the same family as FileBeat (the Beats package). There is actually much, much more available as well: there are packages to monitor metrics, packages to monitor network packets, and even audit and ping packages. You can find them all here.
Now if you have been following along and you have already set up a beats.conf file to handle our FileBeat inputs in logstash, then we don't need to do anything further to allow logstash to accept WinLogBeat traffic – we can simply use that same configuration file and accept beats traffic on the same port. If you haven't, then take a look at the beats.conf file in the FileBeat section above, as you will need it present in your logstash conf.d folder in order to receive the logs…
As for the client, WinLogBeat comes as a zip file in both 32 and 64-bit architectures. The first thing I always do is extract this and store it – normally somewhere within c:\program files. From there, getting it configured is quite similar to FileBeat, as we simply need to edit the included winlogbeat.yml file. As for the syntax in the configuration file, once again we see the similarities; however, instead of pointing at actual log files we are specifying the names of the Windows event logs to read. Below you can see that I've simply specified some event logs, added a tag, commented out the elasticsearch output, and uncommented and configured the logstash output… Easy peasy!
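For reference, a stripped-down winlogbeat.yml in that spirit might look something like the following sketch – the event log names and tag here are just an example selection, and 10.0.0.3:5044 is again my lab's logstash endpoint:

winlogbeat.event_logs:
  - name: Application
  - name: Security
  - name: System

# Tag the events so we can pick them out with logstash conditionals later
tags: ["windows-eventlog"]

# Elasticsearch output commented out since we are shipping through logstash
#output.elasticsearch:
#  hosts: ["10.0.0.3:9200"]

output.logstash:
  hosts: ["10.0.0.3:5044"]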
Now at this point we could go ahead and run our WinLogBeat client by simply executing c:\program files\winlogbeat\winlogbeat.exe, but who wants to do that every time – instead, we can easily have WinLogBeat run as a Windows service. To do so, there is a script called install-service-winlogbeat.ps1 located within that same folder. Drop to a PowerShell console as administrator and execute it – you should now see a new service called winlogbeat within your services list – go ahead and start that!
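If it helps, the whole dance from an elevated PowerShell prompt looks roughly like this (assuming you extracted the zip to C:\Program Files\winlogbeat – adjust the path to wherever you put it):

cd 'C:\Program Files\winlogbeat'
# The execution policy override is only needed if script execution is restricted on the box
PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service-winlogbeat.ps1
# Then start the newly created service
Start-Service winlogbeat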
As soon as that service has started, if we check out our Kibana Discover tab or the logstash log file, we will see that events from the Windows box are immediately being sent and indexed into elasticsearch!
I'll leave this here, as most of the other packages included within the Beats family are quite similar to set up and configure. Remember, we aren't just limited to things like syslog, FileBeat, and WinLogBeat – we can also pull down packages to ship metrics, ping results, and audits from our infrastructure into logstash – all of which are well documented on the elastic site. Thus far we have gone through part 1, getting ELK installed and configured, and this part, where we simply forwarded some logs to logstash. The real magic of the ELK stack starts in the next part of this series, where we start filtering with grok and using grok patterns to extract information out of the logs, storing it in pretty much whatever format we want! Thanks for reading!
Keep Reading!
Part 2 – Forwarding Logs <— you are here 🙂