
SQL for the SOC Part 1: Standard SIEM Rules

Background

Writing rules for your SIEM is a complex process. You need to start with prototyping, determine the impact on the SOC, test the rule and finally push it to production. Today, I am going to show you how we do that in JASK using Zeppelin notebooks and Structured Query Language (SQL). The beautiful thing about doing this in Zeppelin is that we can push out templates to get new users up to speed quickly, and our customers can adjust those templates to fit their own processes. After that, prototyping, measuring impact, testing and pushing to production are as simple as running a few queries.

 

Data Ingest and Parsing Validation

To validate that our data is being ingested and parsed properly, we need to know three things:

  1. Which log types are coming in, broken down by vendor, product and event_id
  2. Whether the logs are being parsed
  3. What the parsed logs look like, so we can confirm they contain the information we need

 

To get this information in JASK, it is just a simple SQL query like so…
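Something along these lines does the job; the table and column names here are placeholders rather than JASK’s exact schema, so swap in whatever your parsed records actually use.

  -- Placeholder table/column names: adjust to your own parsed record schema.
  -- Shows which vendor/product/event_id combinations are arriving and in what volume.
  SELECT
    metadata_vendor,
    metadata_product,
    event_id,
    COUNT(*) AS record_count
  FROM records
  WHERE day >= '2019-01-01'          -- keep the scan to a recent window
  GROUP BY metadata_vendor, metadata_product, event_id
  ORDER BY record_count DESC

Spot-checking a handful of raw rows for each combination (a quick SELECT * with a LIMIT) covers the second and third checks: the logs are parsing, and the fields you care about are actually populated.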

 

Now, we can write rules against anything that is parsed. On to the fun part!

 

Creating and Testing the Rule

Your next step will be sorting through the parsed data. In JASK, we make this easy with a simple search tool that finds your log type and auto-generates a SQL statement to pull all of the data.

 

Sample mapping:
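Purely as an illustration (the parsed field names here are assumptions, not JASK’s actual schema), a mapping pairs the raw vendor fields with the normalized record fields you will query:

  src       ->  source_ip
  dst       ->  destination_ip
  severity  ->  extra.severity
  action    ->  action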

 

Lastly, you can click Generate SQL to get a query you can run, like this one:
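The generated query is essentially a broad pull of the parsed records for that log type; the table and field names below are placeholders, not the exact generated output.

  -- Placeholder names: pull a sample of parsed records for the chosen log type.
  SELECT *
  FROM records
  WHERE metadata_vendor = 'Palo Alto Networks'
    AND metadata_product = 'PAN-OS'
    AND event_id = 'threat'
    AND day >= '2019-01-01'
  LIMIT 1000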

 

Now that we have pulled some data, we need to determine which fields to key off of. While prototyping a rule, we generally look for any information related to the event type, threat type, threat classification and severity. A good rule of thumb is to avoid highly specific values such as URLs, hostnames and IPs, because they are not as important while working on a broader rule. They will be quite important later, though, when you are tuning the rule.

 

So, that brings the large query from above down to this:
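(The field names below are placeholders standing in for the parsed Palo Alto threat fields, not JASK’s exact schema.)

  -- Keep only the broad classification fields we will key off of; names are placeholders.
  SELECT
    event_id,
    threat_name,
    threat_category,
    extra.severity AS severity,
    action
  FROM records
  WHERE metadata_vendor = 'Palo Alto Networks'
    AND event_id = 'threat'
    AND day >= '2019-01-01'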

 

Now, we will want to determine whether any of these fields have common values that we do not care about. For example, we are not going to care about an event that Palo Alto blocked. That event is blocked and poses zero risk to us, so we will add that exclusion to our logic.
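In the query, that exclusion is just one more predicate. The action values below are assumptions about how the parser normalizes Palo Alto’s action field, so check your own data first.

  -- Blocked traffic is zero risk, so filter it out up front; the action values are assumptions.
  SELECT
    event_id,
    threat_name,
    threat_category,
    extra.severity AS severity,
    action
  FROM records
  WHERE metadata_vendor = 'Palo Alto Networks'
    AND event_id = 'threat'
    AND day >= '2019-01-01'
    AND action NOT IN ('block', 'drop', 'deny')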

 

So, we have an idea for a rule, but the next step is to actually test what the impact will be on the system. To do this, we need to make a few changes to our query. What we will do is turn our SQL statement into a GROUP BY that includes the DAY value. This will give us all of the combinations of events we would see each day.

 

This will give you something like this:
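(As before, the table and field names are placeholders.)

  -- Daily volume for every combination of the fields the rule will key off of.
  SELECT
    day,
    threat_name,
    threat_category,
    extra.severity AS severity,
    COUNT(*) AS hits
  FROM records
  WHERE metadata_vendor = 'Palo Alto Networks'
    AND event_id = 'threat'
    AND action NOT IN ('block', 'drop', 'deny')
    AND day >= '2019-01-01'
  GROUP BY day, threat_name, threat_category, extra.severity
  ORDER BY day, hits DESC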

 

So, once this runs, we will get the daily volume of each of the different event type combinations. Another useful query simply gets the overall daily count of the events, with something like this:
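(Same placeholder caveat on names.)

  -- Overall daily volume for the rule, regardless of event type combination.
  SELECT
    day,
    COUNT(*) AS total_events
  FROM records
  WHERE metadata_vendor = 'Palo Alto Networks'
    AND event_id = 'threat'
    AND action NOT IN ('block', 'drop', 'deny')
    AND day >= '2019-01-01'
  GROUP BY day
  ORDER BY day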

 

Both of these queries are useful when trying to determine severity. The higher the volume of the events, the lower the severity you will want to give them, and vice versa. The nice thing about JASK is that you can also set dynamic severities, which allow the severity to change based on a value in the log. A good example here would be the extra.severity field in Palo Alto logs. We will learn more about that in the next section.
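In the meantime, here is a rough sketch of that idea in plain SQL rather than JASK’s rule syntax: previewing how a dynamic severity would distribute by bucketing extra.severity in the query itself. The severity labels and buckets are assumptions, so adjust them to whatever your Palo Alto logs actually contain.

  -- Preview of a dynamic severity mapping; the CASE buckets are assumptions, not JASK rule syntax.
  SELECT
    CASE extra.severity
      WHEN 'critical' THEN 5
      WHEN 'high'     THEN 4
      WHEN 'medium'   THEN 3
      ELSE 1
    END AS signal_severity,
    COUNT(*) AS hits
  FROM records
  WHERE metadata_vendor = 'Palo Alto Networks'
    AND event_id = 'threat'
    AND action NOT IN ('block', 'drop', 'deny')
    AND day >= '2019-01-01'
  GROUP BY extra.severity
  ORDER BY signal_severity DESC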

 

Adding the Rule to your Ruleset

From here, you will add the rule to your ruleset. In JASK, this means that when we see those events, we will create a Signal associated with the entity from the log. Rather than alerting the analyst every time one of these Signals fires, we take the group of Signals associated with each entity and analyze them as new Signals fire. Once an entity’s Signals hit a certain threshold, an Insight will fire, displaying all of the Signals associated with that entity.

 

In part 2 of this blog series, we will look at how you can create your own anomaly detections in JASK. Stay tuned!

 

 

About the Author

Tyler Diderich was the first member of the customer success team at JASK. His focus is on continuous improvement at the personal level for the team along with overall workflow and process improvement for the organization.

LinkedIn: https://www.linkedin.com/in/tylerdiderich 

 

 
