I have a virtual instance of pfSense running on my ESXi box that serves as a firewall/gateway for my VMs. One of these VMs happens to be running Splunk. I’m going to fumble my way through getting some of the great logs out of pfSense and into Splunk to see just what is actually going on within my network.
Please note I’m not an expert in either pfSense or Splunk and would gladly welcome any feedback! Following this guide is no guarantee of setting things up correctly. Please reach out if you find errors or know of better ways of accomplishing things.
I’m currently running the following systems. If there is interest I can do a quick write-up on deploying them.
- pfSense - 2.3.2-RELEASE-p1
- Splunk 6.5.1, installed on Ubuntu 16.04.1 LTS
Fresh off of a new Splunk install, log into Splunk at http://[Splunk]:8000, navigate to Settings, and click the big Add Data button.
We’re ultimately going to set up a UDP input. Start by clicking the big eye icon titled Monitor and then select TCP / UDP as your source. I want this to be a generic syslog listener for my whole network, so I’m going to configure the following settings:
- TCP / UDP - Select UDP
- Port - Enter 514
- Source name override - Leave blank
- Only accept connection from - Leave blank so all of my hosts can forward their logs
Click the Next > button at the top of the screen.
- Source type - Click Select and find syslog in the drop-down
- App context - Keep the default, Search & Reporting
- Host - I’m using DNS, as my home network has decent dynamic records set up for each hostname
- Index - Leaving this as the default
Click the Review > button at the top of the screen.
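For reference, the same input can be expressed directly in Splunk’s inputs.conf. This is a sketch of what the UI writes on your behalf (the file path assumes the default Search & Reporting app context, and Splunk needs a restart after manual edits):

```ini
# $SPLUNK_HOME/etc/apps/search/local/inputs.conf
# Listen for syslog on UDP 514; resolve the sending host via DNS
[udp://514]
sourcetype = syslog
connection_host = dns
```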
Splunk will show you all the settings you just set. Here’s a redundant list of my settings:
- Input Type - UDP Port
- Port Number - 514
- Source name override - N/A
- Restrict to Host - N/A
- Source Type - syslog
- App Context - search
- Host - (DNS entry of the remote server)
Click the Submit > button at the top of the screen. Splunk is now listening on that port for logs. You can verify this by SSH’ing to your Splunk host and running netstat. The output below is truncated to show only the port 514 listener I just opened.
$ sudo netstat -lnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
udp        0      0 0.0.0.0:514             0.0.0.0:*                           15629/splunkd
Note: sudo lets you see the PID/program name; running without it would just show the port.
I’m a fan of logging everything, so we start this adventure by logging into pfSense and going to Status / System Logs / Settings.
General Logging Options
My recommendation is to check all of the Log firewall default blocks options, but you may want to consider impacts to performance and disk space (more checks = more logs).
Remote Logging Options
This is where we tell pfSense to remotely log to our Splunk instance. Checking
Enable Remote Logging will expand the options screen to show all of the fun settings. My recommendations/interpretations:
- Source Address - I prefer not to select Default (any) and instead select the interface that is either directly connected to the Splunk VM’s network or an internal/LAN interface. In my case I’m selecting the SERVERS interface because that’s where Splunk lives. Assuming you don’t go moving your Splunk server around all that often this won’t make a difference, but I’d rather be specific for these types of settings.
- IP Protocol - Select IPv4 unless you’re specifically doing IPv6
- Remote log servers - This is the IP and port of your Splunk server, in IP:port form
- Remote Syslog Contents - Did I mention that I like logging everything?
Click Save to enable logging!
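Under the hood, pfSense renders these choices into a FreeBSD-style syslog configuration; with everything selected, the remote entries it generates look roughly like this (an illustrative sketch, where 192.168.1.10 stands in for your Splunk server’s address):

```
# @IP:port forwards matching messages over UDP to the remote collector
*.*    @192.168.1.10:514
```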
Go back to your Splunk instance and select the Search & Reporting app on the left sidebar. Under What to Search you should see events being indexed. Clicking Data Summary and going through the three tabs will also show you a, drumroll please, summary of the data. I just enabled logging a few minutes ago:
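If you’d rather search than browse the summary, a quick sanity-check query looks like this (it assumes the syslog sourcetype set earlier; the pfsense* host pattern is my guess at your firewall’s hostname, so adjust to taste):

```
sourcetype=syslog host=pfsense* | head 20
```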
Voilà, we have logs. Next up, parsing!