August 9, 2023
Dan Ortega

How GPT and Security Analytics Accelerate Cybercrime Investigations

Cybercrime has unfortunately become deeply embedded in the current technology landscape. Every single day another enterprise or government entity executes a very public faceplant, and in spite of an endless series of cautionary tales, the cycle just keeps repeating. It is driven by a broad-based acceleration of attacks, paired with an alarming increase in the sophistication of threat actors.

This dynamic hits the CISO most directly, as they are ultimately the ones responsible for cybersecurity. When corporate boards start asking sharp questions, they're the ones who have to answer; when regulators show up and run security audits, they're held accountable; and when budget considerations surface, it's always a good news/bad news scenario. On top of all this, there is the issue of an overwhelmed support staff, required to do more with less at ever-faster speeds. This sector of the market not only suffers from burnout, it is also massively understaffed (3.5 million unfilled positions globally, over 700,000 in the US alone).

What is going on?

While there are a lot of reasons this market is “dynamic”, a few core variables seem to be driving the environment.

First off, a “level” playing field. Adversaries have access to the same technology as the good guys: bad intentions with no skills are suddenly enabled by GPT, while bad intentions with programming skills just became significantly more dangerous. Couple this new capability with (in many instances) deep pockets and no constraints around rules of engagement, and the field isn’t actually level; it's tilted in the wrong direction.

Second, enterprises and government agencies rely on cybersecurity solutions to protect themselves, but these solutions are often the result of organic infrastructure growth (very similar to the overall IT market). Purchases are often made based on tactical needs, with very little consideration given to a longer-term view of requirements. This happens regularly within departments, and once you start talking about cross-divisional requirements, it gets far less coordinated. The net result is siloed solutions that don’t exchange information well (if at all), leading to a complete lack of actionable visibility across the enterprise. When the fan gets hit, no one is in a position to know with precision and speed what’s going on, and therefore no one is in a position to do anything meaningful about it.

Third, this organic model relies heavily on manual (that is, humans in a SOC) analysis of data. This leads to several issues: the volume of incoming data is well past what even an experienced analyst can keep up with, the signal gets buried in the noise, burnout and associated mental health issues are a real and growing concern, and the threats analysts are tasked with identifying are harder to spot and arriving at a higher rate. When a potential threat is identified, there is no context for understanding its potential severity: has this happened before, in what context, how recently, and what was done about it? This lack of contextual awareness of threats is a huge gap in the market, creating enormous risk exposure at the worst possible time.
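To make that context gap concrete, here is a minimal Python sketch (with made-up field names and records, not any real product's data model) of the questions an analyst needs answered the moment an alert fires: has this indicator been seen before, how recently, and what was done about it?

```python
# Hypothetical sketch: answer the basic context questions for a new alert
# from a list of historical incident records. Field names are illustrative.
def alert_context(indicator: str, history: list[dict]) -> dict:
    """Summarize prior sightings of an indicator from historical incidents."""
    prior = [h for h in history if h["indicator"] == indicator]
    if not prior:
        return {"indicator": indicator, "seen_before": False}
    # ISO 8601 timestamps sort lexicographically, so max() finds the latest.
    latest = max(prior, key=lambda h: h["timestamp"])
    return {
        "indicator": indicator,
        "seen_before": True,
        "times_seen": len(prior),
        "last_seen": latest["timestamp"],
        "last_response": latest["response"],
    }

history = [
    {"indicator": "198.51.100.23", "timestamp": "2023-02-14T08:00:00Z",
     "response": "blocked at firewall"},
    {"indicator": "198.51.100.23", "timestamp": "2021-06-30T17:45:00Z",
     "response": "ticket closed as benign"},
]
print(alert_context("198.51.100.23", history))
```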

And why is there a lack of context? Short answer: lack of timely access to historical information that can be immediately correlated with external threat data. Anything that happens anywhere on a network generates a log entry, which means log data can accumulate at a rate of millions of events per second. These events need to be available for quick access, but that bucket fills up in a hurry, so after (on average) ninety days they get archived. Going back further than 90 days then means pulling from long-term storage, which is far slower and more expensive. That critical bit of information could tell an analyst the context of an attack, but taking days or weeks to get an answer leaves attackers plenty of time to do damage or hide little snippets of code that can lie unnoticed until needed. Knowing what you don’t know about your telemetry data, how it correlates with your external threat landscape, and how well your security stack is positioned to deal with it is a critical workflow most organizations lack. This is now front and center when the CISO is speaking to their board, and having complete, actionable visibility driven by integrated GPT-enabled security analytics is the difference between getting hit in the head by a pitch and knocking it out of the park.
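As a rough illustration of that retention gap, the sketch below routes a lookup to hot or archived storage based on event age. The 90-day window, storage structures, and latency labels are assumptions drawn from the paragraph above, not a description of any specific SIEM or data lake.

```python
# Hypothetical sketch of tiered log retention: recent events are queryable in
# seconds, older events must be rehydrated from long-term storage first.
from datetime import datetime, timedelta, timezone

HOT_RETENTION = timedelta(days=90)  # the typical hot-storage window cited above

def query_logs(event_time: datetime, hot_store: dict, archive: dict) -> dict:
    """Route a lookup to hot or archived storage based on event age."""
    age = datetime.now(timezone.utc) - event_time
    key = event_time.date().isoformat()
    if age <= HOT_RETENTION:
        # Fast path: indexed and searchable in near real time.
        return {"tier": "hot", "expected_latency": "seconds",
                "events": hot_store.get(key, [])}
    # Slow path: data must be restored from the archive tier before analysis.
    return {"tier": "archive", "expected_latency": "hours to days",
            "events": archive.get(key, [])}

now = datetime.now(timezone.utc)
hot = {now.date().isoformat(): [{"src_ip": "203.0.113.7", "action": "login_failed"}]}
cold = {(now - timedelta(days=200)).date().isoformat(): [{"src_ip": "203.0.113.7"}]}
print(query_logs(now, hot, cold))                        # hot tier, fast
print(query_logs(now - timedelta(days=200), hot, cold))  # archive tier, slow
```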

How do you get out of this?

The way forward is not particularly difficult, but it is definitely complicated. There are a couple of key moves that can help, starting with correlation. Two functional areas of your security stack need to be fully aware of each other: your internal telemetry (all the information from your internal systems captured as logs), which should be accessible going back years (not weeks) and queryable immediately through AI (specifically GPT) enabled Security Analytics, and current data from your external environment, tracked through a massive and properly (GPT) curated threat repository. Bringing those two pieces together results in continuity of vision: you can see what is going on, where, and why, how it affects you, and ideally identify threats quickly enough to stop attackers in their tracks.
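A minimal sketch of that correlation step, with the feed contents, field names, and events all invented for illustration, might look like this: match indicators from an external threat feed against internal telemetry, including events well outside the 90-day hot window.

```python
# Hypothetical sketch: intersect internal log observables with a set of
# externally sourced threat indicators to surface matches with context.
from typing import Iterable

def correlate(log_events: Iterable[dict], threat_indicators: set[str]) -> list[dict]:
    """Return internal events whose observables match a known indicator."""
    hits = []
    for event in log_events:
        observables = {event.get("src_ip"), event.get("dst_ip"), event.get("domain")}
        matched = observables & threat_indicators
        if matched:
            hits.append({**event, "matched_indicators": sorted(matched)})
    return hits

# One historical event (well past the hot-storage window) matches a current
# indicator, which is exactly the context an analyst needs to see immediately.
feed = {"198.51.100.23", "evil-domain.example"}
logs = [
    {"ts": "2022-11-02T09:14:00Z", "src_ip": "198.51.100.23", "action": "beacon"},
    {"ts": "2023-08-01T12:00:00Z", "src_ip": "192.0.2.10", "action": "login"},
]
print(correlate(logs, feed))
```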

The other helpful capability is being able to translate dense technical information into executive-level information, which lets your SOC push operational data up to the strategic level of the organization, that is, the folks who decide your budget. Being able to take a massive amount of information and quickly reduce and frame it in a human-meaningful way is where GPT adds real value. The concern with GPT is that everyone has access to some form of this technology, including cyber adversaries, who have quickly learned to poison the data sets used to feed open-source GPT solutions. The trick, therefore, is to have a curated, vetted threat repository that protects your GPT-enabled threat intelligence from being corrupted by malicious data.
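As a rough sketch of that translation step, assuming the OpenAI Python client (v1+), the example below condenses raw alerts into a short, board-ready brief. The model name, prompt wording, and alert fields are illustrative only; the same pattern applies to any GPT-style API sitting behind a curated intelligence layer.

```python
# Hedged sketch: summarize technical SOC alerts for a non-technical executive
# audience using a GPT-style chat completion API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def executive_summary(alerts: list[dict]) -> str:
    """Condense raw alerts into a three-sentence executive brief."""
    alert_text = "\n".join(
        f"- {a['severity']} | {a['indicator']} | {a['description']}" for a in alerts
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Summarize these security alerts for a non-technical "
                        "executive audience in three sentences: business impact, "
                        "current status, and recommended decision."},
            {"role": "user", "content": alert_text},
        ],
    )
    return response.choices[0].message.content

alerts = [
    {"severity": "high", "indicator": "198.51.100.23",
     "description": "Beaconing from finance subnet to known C2 infrastructure"},
]
print(executive_summary(alerts))
```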

A thoroughly curated intelligence repository (like ThreatStream), driven at GPT speeds by robust Security Analytics (like Match), can provide context, do it when it's needed (which is immediately), and do it in a fully automated, exec-friendly fashion (like Lens+GPT). This is, in fact, how you knock it out of the park and into the next county. If you’d like more information on how Anomali is keeping some of the world’s largest companies and government entities one step ahead of their adversaries, please contact us here.

