Imagine this scenario: You’re running a security operations center, and your team is processing a bunch of alerts coming out of Splunk Enterprise Security. As you add new detections and log sources, the volume of alerts begins to rise, your analysts start to get overwhelmed, and that familiar alert fatigue starts to kick in. Your security engineering team jumps in with yet another cycle of tuning to get the alert volumes back down to manageable levels. Then the cycle starts all over again...
Hurricane Labs lives this reality every day, as a managed services provider that is 100% focused on Splunk and Phantom. The company manages these platforms for customers, providing 24x7 monitoring services supported by a team of Splunk ninjas to handle the heavy lifting.
Recently the Hurricane team found a way to leverage GreyNoise Intelligence data to identify the noise in Splunk and Phantom alert traffic, reducing the load on their analysts by 25%.
We had the good fortune to chat with Hurricane Labs Director of Managed Services, Steve McMaster, to learn how the company had done it.
STEVE MCMASTER: Sure - to set the stage, we manage Splunk and Phantom deployments for our customers, running the gamut from infrastructure management, to search and SPL creation, to security analysis and alerting, to rapid incident response. As part of these services, we provide 24x7 security monitoring, and I oversee the two teams that deliver this service. I’ve been with Hurricane almost 14 years at this point - it's actually the only job I've ever had. I started here at age 17 right out of high school, and along the way I've done a little bit of everything.
We process a high volume of alerts on behalf of our customers, and as part of this traffic, we often see a lot of noisy alerts. This includes everything from known benign scanners (such as Shodan) to what we call "internet background noise," scanners that are constantly scanning the internet as a whole looking for things to poke at.
The alert volumes we see are a constant ebb and flow, and we spend a lot of time tuning our customers' environments to turn down the noise to a manageable level. But then, as we add new customers and more detections, the alert volumes start to creep back up. That's when we see our analysts getting overwhelmed and feeling alert fatigue. It's a normal cycle for our business: alert volumes go up, and then we need to dig in and tune our implementations to bring the volumes back down.
We were starting a new cycle of tuning to get alert volumes to come back down when I stumbled on Andrew [Morris, GreyNoise founder]. I saw my boss and Andrew exchanging views on Twitter and I thought, “Who is this guy, and why does the stuff he’s saying make so much sense?” So I took a look at the GreyNoise product.
I have to say, I've always been a big skeptic of threat intelligence as an industry, because once enough people know about a piece of intelligence to publish it for general consumption, it's already burned infrastructure. Yes, you might find some threats with it, but it's not worth the $150K per year that vendors charge for it.
But GreyNoise was different. I always refer to GreyNoise now as “anti-threat intelligence.” It’s specifically the opposite of traditional threat intelligence. Instead of saying “pay attention to this,” GreyNoise says “STOP paying attention to this, go spend your time on other things.”
So this approach fit into an argument I was having with our customers a lot at the time - my point was that the things that you’re detecting are not actionable. They’re not targeted at you, they’re not somebody trying to exploit you, they’re JUST scanning the internet. So you shouldn’t worry about them or spend time on them.
Shodan is a really good example of this, because Shodan is scanning everybody and showing up in a lot of alerts. But this is not something YOU need to care about, Mr. Customer, and therefore it's not something WE need to care about as part of our monitoring service. The problem is that this is really hard to quantify. You can show people that alerts from Shodan should be ignored, that's easy, but identifying broader internet background noise was a lot harder to do. Saying "Okay, we've seen this thing across our 40 customers, so let's ignore it" doesn't work, because 40 customers is not enough scale to draw that conclusion definitively.
And then along came this guy who had a product that did exactly that. So GreyNoise let us expand our service in a way that customers kind of already expected from us. Now we could say "Look, this is opportunistic, this is generalized, this is global, this is not targeted." And we could say this definitively because of the scale that GreyNoise operates at.
And it really just happened to fit at a time where that was a problem we were trying to tackle with our customers.
We’re using GreyNoise in two ways - we use the data inside Splunk to eliminate alerts before they ever get to us, and then we do enrichment of alerts in Phantom once they reach us. We integrate the GreyNoise data into these environments using pre-built integrations that we initially developed, although we have turned these over to GreyNoise to maintain and extend for all their customers.
Our goal with these integrations is that, if the IP address is in GreyNoise, then we eliminate it from alerting. GreyNoise actually returns three basic classifications for this "noise": benign (known scanners such as Shodan), malicious, and unknown.
The reality is that some customers are not totally comfortable with the accuracy of these classifications (at least at first), so for some customers we still raise a low-priority alert so they can have a human look at it. I think this comes down to trust, and based on OUR experience, if GreyNoise is confident the IP address is a scanner, then we are happy to ignore it. I would say about two thirds of our customers are there today.
I’m also really excited about something Andrew mentioned at the last Open Forum conference GreyNoise held, the new RIOT service [which stands for “Rule It OuT”]. Instead of identifying internet scanner “noise,” RIOT looks at the flip side of this by identifying the “known good,” common business services that everyone uses, like Slack or Office365 or Google IP addresses. So now we can easily tell the difference between a legitimate service like a Microsoft update and a "not legitimate" service like a malware download.
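Putting these signals together, a triage decision along the lines Steve describes might look like the following sketch. The field names (`noise`, `riot`, `classification`) loosely follow GreyNoise's API conventions, but the thresholds and actions here are illustrative assumptions, not Hurricane Labs' actual rules:

```python
# Hypothetical triage helper: maps a GreyNoise-style lookup result to an
# action. Field names loosely follow GreyNoise's API (noise, riot,
# classification); the decision logic itself is illustrative.

def triage_action(gn: dict, trust_greynoise: bool = True) -> str:
    """Decide what to do with an alert based on GreyNoise enrichment."""
    if gn.get("riot"):
        # RIOT: known-good common business service (Slack, Office365, etc.)
        return "suppress"
    if gn.get("noise"):
        if gn.get("classification") == "benign" or trust_greynoise:
            # Opportunistic internet background noise, not targeted at us.
            return "suppress"
        # Customers who don't fully trust the verdict yet still get a
        # low-priority alert so a human can take a quick look.
        return "low_priority"
    # Not in GreyNoise at all: potentially targeted, investigate normally.
    return "investigate"

print(triage_action({"noise": True, "classification": "benign"}))  # suppress
print(triage_action({"noise": True, "classification": "unknown"},
                    trust_greynoise=False))                        # low_priority
print(triage_action({"riot": True}))                               # suppress
print(triage_action({}))                                           # investigate
```

The `trust_greynoise` flag captures the per-customer trust decision: roughly two thirds of customers suppress noise outright, while the rest keep the low-priority human check.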
Sure, so when we use GreyNoise to filter noisy alerts out of Splunk, we use the GreyNoise-Splunk integration. We install that in our customer’s Splunk environment and add it to the search language for various detections. The logic is straightforward: if something in the search results matches GreyNoise, it gets excluded.
More specifically, we apply GreyNoise to any correlation search or detection inside of Splunk that we think is relevant. The big ones are IDS events and Splunk’s vulnerability scanner detection rules; those are the main ones that we use it for.
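The actual exclusion happens inside Splunk via the GreyNoise add-on's lookup in SPL; as a language-agnostic illustration of just the filtering logic, here is a small Python sketch that drops alert events whose source IP is known internet background noise (the field name `src_ip` and the pre-fetched noise set are assumptions for the example):

```python
# Illustrative only: the real integration does this inside Splunk with an
# SPL lookup against GreyNoise data. Here the same exclusion logic is
# sketched in Python against a pre-fetched set of known-noise IPs.

def filter_noise(events, noisy_ips):
    """Drop alert events whose source IP is known internet background noise."""
    return [e for e in events if e.get("src_ip") not in noisy_ips]

events = [
    {"src_ip": "198.51.100.7", "signature": "SSH brute force"},  # a mass scanner
    {"src_ip": "203.0.113.9",  "signature": "SQL injection"},    # not in GreyNoise
]
noisy_ips = {"198.51.100.7"}  # e.g. Shodan or other opportunistic scanners

for e in filter_noise(events, noisy_ips):
    print(e["src_ip"], e["signature"])  # only the non-noise event survives
```

Because the exclusion runs in the correlation search itself, suppressed events never become alerts at all, which is what keeps them off the analysts' queue.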
For Phantom on the other hand, we use GreyNoise as an enrichment. The way our monitoring service works, we bring all of our customers’ alerts into a single Phantom instance and we have the GreyNoise-Phantom integration installed. For our analysts looking at the alerts, one of the first things they do is look at the GreyNoise data. If the verdict is “noise,” then we know we don't need to do a lot more digging beyond that. This ends up saving us a lot of time, because rather than investigating whether this is a targeted attack and figuring out whether anything was exploited, we can just short circuit the process, note it as a scanner and send it over. Today we’re doing this as a manual process, but it's on our list to automate this into our incoming Phantom playbooks.
When I first told Andrew that we'd be interested in subscribing, I told him I have no concept of how much something like this would be worth. I didn’t know if we were going to turn this on and see matches with 10 alerts per day across our customers or 100. But impact on alerts is really the only thing I had to help quantify the value of GreyNoise.
So Andrew gave us a two week trial and I added it to our triage tool (this was before we stood up Phantom). And I just counted up how many alerts over a week would have been something we could totally ignore with GreyNoise versus what we would not have been able to. We were somewhere in the vicinity of saving the equivalent of one to one and a half analysts per day, out of 22 total analysts at the time, based on the alerts that we could safely eliminate with GreyNoise.
Today, after implementation and with more customers in hand, GreyNoise helps my teams eliminate background noise and focus on the most actionable and relevant alerts for our customers. Rather than presenting our analysts with even more data to investigate, GreyNoise has allowed us to reduce the volume of alerts that are triggered by 25%, which makes for a happier and more effective SOC team.
And for us as an MSSP this is even more significant than it might be for an enterprise. When you're in a normal enterprise business, you can make a decision to spend money on a person or spend money on a product that eliminates alerts; you kind of have to pick your poison. But for us the calculus is a bit different. When we invest in a person, that analyst can do, say, 20 alerts per day. But if we invest in a product, that product can triage a certain number of alerts at every one of our customers. And for every new customer, it can repeat that work. So even if GreyNoise could only eliminate 8 alerts per day, across 40 different customers, that's 320 alerts. As we add more customers, GreyNoise scales in a way a person wouldn't.