Splunk

Splunk is a flexible data aggregation system. In layman's terms, Splunk is a system that combs through log files (and anything else containing structured information that you want to throw at it) and presents the results in a summarized format. It is really a pretty neat thing. See the Splunk website.

Splunk at UNH

We are now (June 2009) running the free Splunk 3.4.x on our systems Pumpkin, Endeavour, etc. Splunk is resource-hungry, but no longer too bad if configured as a forwarder.

Our setup:

  • Splunk runs on our servers, with Pumpkin as the master node.
  • On Pumpkin, it is installed in /data1/splunk, with a link to /opt/splunk.
  • Pumpkin mounts the /var/log directories from roentgen so that they can be accessed by splunk for aggregation, without the need to run a splunk copy on roentgen (which is a virtual machine).
  • Splunk runs on Endeavour as a full server, and on Einstein, Taro, Pepper and Improv as a forwarder (a minimal sketch of the forwarding configuration follows this list).
  • The free version of splunk does not allow for logins, so we restrict access to the splunk console with iptables. Use an ssh tunnel to access the splunk web portal (see below).
  • This can be extended to do many different tasks!
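
For reference, the forwarding setup lives in outputs.conf on each forwarder. A minimal sketch, assuming the same bundle layout we use for inputs.conf, the conventional Splunk receiving port 9997, and pumpkin's farm address (all of which should be checked against the actual install and the 3.4.x docs):

# /opt/splunk/etc/bundles/local/outputs.conf on a forwarder (e.g. taro)
# Hypothetical sketch; verify the stanza syntax for our version.
[tcpout]
server = pumpkin.farm.physics.unh.edu:9997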

Connecting to Splunk

Pumpkin blocks all port 80 and port 8000 connections in iptables, so it is not possible to reach the Splunk interface by simply opening a web browser and going to the appropriate port. This is a security measure, so it is not going to change. There is a fairly simple workaround, an ssh tunnel:

  1. ssh -L8001:localhost:8000 pumpkin. The local port does not have to be 8001; any available non-privileged port on your machine will do.
  2. Open a web browser with good Javascript support (and optionally Flash, for some of the fancier graphing features) and go to localhost:8001 (or whatever port you chose). On Linux and OS X only Firefox is known to work; on Windows IE works as well.
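
If you do this a lot, the tunnel can run in the background so it does not tie up a terminal; this is plain OpenSSH usage, nothing Splunk-specific:

# -N: run no remote command, -f: go to background after authenticating
ssh -N -f -L8001:localhost:8000 pumpkin
# then browse to http://localhost:8001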

Sophisticated stuff for Splunk

You can use the admin button on the splunk web interface to do administration: add user accounts (licensed version only), add new input streams, and so on. This is pretty simple. More sophisticated use is documented at Splunk.com: go to Documentation and click on the version we run.
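
If you prefer config files over the web interface, input streams can also be added by hand. A hypothetical sketch using the bundle layout from the section below; the path is made up for illustration, and the exact stanza syntax (tail vs. monitor) should be verified in the docs for our 3.4.x version:

# /opt/splunk/etc/bundles/local/inputs.conf
# Watch an extra log directory (illustrative path, not a real one).
[tail:///data1/extra-logs]
disabled = false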


Filtering the input files

See Splunk File whitelist/blacklist.

We usually just let splunk loose on an entire directory (/var/log) of several machines (einstein, roentgen, pumpkin...). There are files splunk will skip automatically (mostly binaries). Others can be filtered out by editing /opt/splunk/etc/bundles/local/inputs.conf and adding a line like:

# Ignore the audit files, which you should read with aureport anyhow.
_blacklist = audit\.log|\.[12345]
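
The pattern above matches anything with audit.log in the name, plus rotated files ending in .1 through .5, so only the current generation of each log gets indexed.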

You can see which files will be splunked with:

. /opt/splunk/bin/setSplunkEnv
/opt/splunk/bin/listtails

Splunk getting sysinfo from other nodes

This is discontinued: the many ssh connections cause lots of entries in the log files, which is no good since it obscures what is actually happening, and the ssh logs are important!

To get sysinfo (cpu load, users logged in, memory usage) from other nodes, without running splunk everywhere and without creating huge log files with this info everywhere, I made a "pipe" for splunk. This is a script in $SPLUNKHOME/etc/bundles/sysinfo on the splunk host that will ssh over to each monitored node and execute the command /root/splunk_ssh_info_pipe.
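
The actual pipe script is not reproduced here, but something in this spirit, run on a node, would do the job: one line of key=value pairs per invocation that splunk can parse (the fields shown are illustrative, not the real script):

#!/bin/sh
# Hypothetical sketch of /root/splunk_ssh_info_pipe on a monitored node:
# emit one timestamped line of key=value pairs for splunk to index.
echo "$(date '+%Y-%m-%d %H:%M:%S') host=$(hostname -s) \
load=$(cut -d' ' -f1 /proc/loadavg) \
users=$(who | wc -l) \
mem_free_kb=$(awk '/MemFree/ {print $2}' /proc/meminfo)"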

To make this whole thing secure, I did the following:

  • Modify /root/.ssh/authorized_keys to have an entry that will only execute one command when jalapeno tries to connect to the node (pepper, taro, ...) over a passwordless ssh connection. This command is our pipe script:
no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty,from="jalapeno.farm.physics.unh.edu",command="/root/splunk_ssh_info_pipe" ssh-rsa verylongsshkeyishere root@jalapeno.unh.edu
  • This will only work if root is allowed to connect like this, so I modified /etc/security/access.conf to allow a root login from jalapeno.
  • The script, when run on the node, creates output that is then parsed by splunk (a sketch of the collecting side follows this list).
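
On the splunk side, the bundle script then only needs to collect one line from each node, roughly like this (a sketch; the real script and node list live in $SPLUNKHOME/etc/bundles/sysinfo):

#!/bin/sh
# Hypothetical sketch of the collecting script run by splunk on jalapeno.
# The forced command in authorized_keys runs the pipe no matter what
# command is given here, but we name it explicitly for clarity.
for node in pepper taro improv; do
    ssh -o BatchMode=yes root@$node /root/splunk_ssh_info_pipe
done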

This is fairly secure. I could have created a user "splunk" on all machines and set it up so that that user can only execute one command. Perhaps I'll switch to that at some point.