Intelligence Powered Vulnerability Management: Detect ‘19 Series
After you have watched this Webinar, please feel free to contact us with any questions you may have at firstname.lastname@example.org.
GREG MATHES: I just want to introduce myself and the talk.
This is around how we can combine threat intelligence, what we have, with vulnerability management.
Vulnerability management is a big problem in all of our organizations.
Nobody likes to tackle it.
But if you put a little threat intelligence behind it, it actually would make our jobs a lot easier.
And if you actually suggest that, you may end up running vulnerability management as well, as I'll describe.
A little about me: I graduated from the University of Arkansas about eight years ago.
I've been at Arvest Bank ever since.
Recently moved into a management role covering threat and vulnerability management and infrastructure security.
Began my career focusing on perimeter defenses and SIEM technologies.
I researched a lot about the threats we were encountering.
About 2016 is when, using that knowledge, started building out our threat intelligence capabilities beyond what was traditional, beyond the, oh, we're just going to throw a couple feeds on the SIEM and stuff like that, but actually built a real threat intelligence program.
And with that, been maturing that program over the last few years, as of recently, adding in vulnerability management into that program.
So a little bit about what vulnerability management is.
I had to kind of dive into it myself when I was tasked with it, after suggesting it.
So we'll kind of step way back and actually just kind of talk about what a vulnerability is.
A lot of people think it has to be a published CVE, a known bug in the software code that an attacker can exploit.
But it can be everything from bad procedures, to no controls, to an implementation or configuration that allows an attacker to exploit, or it could be an actual bug in the software that an attacker can exploit.
So basically, any weakness in any system or procedure that an attacker can exploit is what we're going to classify as a vulnerability.
So in simpler terms, it's just an opportunity for an actor to do harm to a system or a company.
This was one of my favorite memes I've seen for a few years.
The other one is the rug with the key underneath with all the holes underneath it.
On the flip side, I want to cover what threat intelligence is.
You have vulnerabilities, but then your real threat and risk is knowing what threat actors have the capability to use those against you.
We'll talk a little bit about the vulnerability lifecycle in vulnerability management.
It's kind of similar to the threat intelligence lifecycle that I was used to from the threat intelligence side.
You're gathering what you want to do, you're discovering what's out there, you're analyzing it and reporting on it, and then you're prioritizing how you'll remediate it.
We're going to add a few steps to that if you throw threat intelligence into it.
In your prepare and discover phase, you're going to start classifying your vulnerabilities and classifying all vulnerabilities with certain threat scores and stuff like that.
And then you're going to prioritize those at the end based on those scores and based on which assets those fall on, as normal.
But we're going to kind of look at it a little bit differently than what people have done in the past, and we'll kind of explain a few reasons why that is.
When I was first tasked with taking on vulnerability management, one of the things I wanted was to look at it differently than anybody else does.
They wanted us to patch all vulnerabilities, like we should.
But the big things I told everybody was, hey, from a threat intelligence point of view, there's only really a couple of main areas that we need to focus on.
We need to focus on the workstations; that's where our users are.
That's where the malware is going to get on.
That's where those types of exploits are going to be run.
And then we also need to worry about our DMZ assets, stuff that can be breached through the firewall and attacked, or misuse of exposed services.
But when we're talking about traditional vulnerability management, those two key areas are the big parts of how someone is going to break in from outside.
Once they're inside, you don't really see the use of vulnerabilities at that point.
They're already in.
But if we go back to our original term of vulnerability, they are exploiting vulnerabilities, but not in the sense of what we would consider having to patch.
Once they're in, they're trying to escalate privileges.
They're trying to look at unsecured configurations, and that's how we're starting to get all those things classified under vulnerability management as well, because those are really the threat to the organization.
So right off the bat, when we're prioritizing patching and stuff like that, I always lean towards the other side of thinking about where an attacker is going to go.
So let's kind of look at what the current state of vulnerability management really is.
In the last few years, we can see the number of reported vulnerabilities has skyrocketed.
We're getting three times on an annual basis what we used to.
I think a lot of that, from looking through the statistics and stuff like that, is the onset of mobile and some of those OSes that are now on the market.
They're really vulnerable and stuff like that.
But it's really showing us that this is not slowing down.
And we don't have a handle on it when it was 5,000 and 6,000 a year.
We're really not going to have a handle on it when it's 14,000 and 15,000 a year.
So that really showed why we really need to focus on it now and fix it for the future.
So that kind of goes into what else the crisis is.
From [INAUDIBLE] threat intelligence research done on vulnerability management, [INAUDIBLE] have the time and resources to mitigate all vulnerabilities in order to avoid a data breach.
That's a pretty daunting number.
When I first saw that, and when everybody I'd shown this presentation to for proofreading saw it, that was the one everybody kind of stuck on.
That means most companies do not feel they are OK in vulnerability management to stop a data breach.
Only 15% believe that they have effective patching.
We all know it's a problem in organizations, but that's really scary.
As well, 39% believe that effective vulnerability management is critical to avoiding data breaches, yet they're not devoting the resources.
So you see almost the same number, but nobody's actually giving up those resources to do it.
So we've got to find a different creative way to make this more effective if we're not going to get more resources.
What that means for the world is that 27% of all companies polled, in a poll done by Tripwire, believe that they were breached as a result of an unapplied patch.
So we'll talk about a few real world examples of this.
WannaCry-- we all kind of felt that pain.
That was an exploit of SMB that took over the world.
It kind of fell through the cracks from traditional patch management's point of view because of the CVSS score it had.
It did not meet the bar-- most people use CVSS 7 and above for what they really push on their patching systems.
This fell below that; I think it was a 5 or 6.
So it almost met that mark, but it didn't quite get there.
Yet it wreaked havoc on the whole world.
Whereas if you would have done it differently-- and we'll talk about the future.
We'll recap some of these and show that if we had done it just a little bit differently, this might not have happened.
Another one that we've all heard of is Equifax.
It was a Struts vulnerability, which at its core is harder to patch.
But if you go back to [INAUDIBLE] hacker methodology, if you're a major company and you have a major vulnerability exposed on the outside, readily accessible for threat actors to attempt against all day and all night, that should have been prioritized way higher than any other vulnerability in the environment.
So just a couple of examples of how prioritization is not being done right and what we need to start doing to unravel that a little bit to focus more in the future.
This is kind of how I felt, purely on the intel side, watching what's going on, reporting what's going on, but not having any control over vulnerability management.
And even still, most organizations keep vulnerability management and patch management separate.
So you're kind of sitting there going, hey, guys, do we want to patch?
Yeah, this looks great.
So a lot of the questions I had was, OK, how do we do this better?
Where do we go?
We all know it's bad.
Everybody's been reporting.
Every audit that comes through almost any organization says patch management, vulnerability management is a problem.
How do we make it better?
One of the things I proposed-- and this is all how I ended up with vulnerability management.
So if you don't want to be vulnerability management, maybe don't do these things.
Don't focus solely on CVSS-- I mean, we should still consider CVSS.
MITRE does a great job of classifying, at a point in time, what they feel the potential is.
But it's all about the potential; they don't go back and re-update these scores once attacks actually occur.
So once WannaCry occurred, they didn't go back and update that score.
They don't have the time.
We saw there's 15,000 new vulnerabilities that we know about being reported every year that they have to classify.
They don't have the manpower to go back and then reclassify any of these.
And these are just the ones classified by NVD and MITRE and the like.
We're not even talking about the privately held vulnerability databases held by China and some of the other countries that don't really release that information.
We should use this as a basis of where to start, but we shouldn't treat it as the Bible.
CVSS 3 tried to help with that, but it's just not really what we need to do from a threat intelligence point of view.
What we should focus on is exploitation.
So I did a little math on this.
So 5 and 1/2% of all vulnerabilities published between 2009 and 2013 were exploited in the wild.
That's a lot smaller chunk of the elephant that we can tackle.
If we're talking about the full set released, that's only 700 vulnerabilities.
And that's before we even start knocking down, I don't use that vendor.
Oh, that's a mobile OS.
That ends up getting you to a much more manageable number that you can actually tackle and prioritize in your environment.
Next thing is partner with your threat intel team, if they're not combined, if you don't have a threat and vulnerability team.
Partner with them to prioritize based on the likelihood of attack, based on attacks actually occurring.
What really helps is that you can let most vulnerabilities go through their normal patch cycle, which all organizations are working to make a better, more autonomous patch cycle.
And then you can treat these true threats out of band.
So that's where it really helps with moving to this model.
So why should we switch from CVSS?
So like I said, with CVSS you could actually go in and change some scores based on your environment and based on the asset it applies to.
But at this scale, we don't have a dedicated team to go reclassify what MITRE's already done.
So that was kind of out of the picture.
And what can we focus on to make it better?
We can use threat intelligence to more accurately convey true risk.
Prior to this model, any time there was a CVSS 7 and above, our threat intel team was asked to write a report with a low, medium, or high rating on these vulnerabilities-- but we were writing a full report.
That's unmanageable as well with more critical vulnerabilities being released all the time.
So we were trying to find a way to make this more automated, and that's where we ended up moving to make it more quantitative and less qualitative.
There were numerous times that, even within security, people didn't like our reports because they wanted some new vulnerability that came out to be rated high so they could push on the patching teams.
And we would come back and say, that's a low.
There are not very many attacks even going on, and you have to be within the same building as the infected machine.
That's not an active threat.
So we wanted to move to a way that it wasn't report based so you had the scale, and you don't have the qualitative measure of someone's words muddying up the waters.
It also helps support conversations with other teams. There were numerous times we went to our workstation team and said, hey, guess what.
Flash is being attacked again.
And they would just basically get-- they didn't want to hear it.
That was more than they wanted to know.
But they always had, well, how do you know it's being attacked?
Well, we have intelligence.
OK, well, that's great.
What do you have?
And so we didn't really have any good concrete data to give them. So this is another way to help support you: when you go to them and say, hey, this is highly rated, this is being attacked in the wild, and here's why.
That'll really help.
So the first time you go to them with a 0-day, when you've only gone to them once every six months, it's like, OK, we see that burning over there.
We'll patch it.
This was after about three months in a row a couple of years ago of Flash constantly being attacked and new versions need to be pushed out, so much that our workstation team tried to tell me that you only get one Flash update a month.
And I'm like, no, that's not-- if Flash gets attacked again tomorrow with the new version, then we're going to go through this routine again.
But you've got to find ways to make it apply in real life, to really convey it to the business, and to make it scale for an enterprise.
It's fine if you're a small shop or there are not very many vulnerabilities.
But when you're a fairly large organization with numerous applications across the board and tens of thousands of endpoints, then you really start needing something to help it scale.
So it came down to this: I've got the buy-in from our CSO that we need to prioritize these more quantitatively.
But it came down to are we going to build versus buy.
So we're going to focus more on the building side, but I wanted to build this slide in to kind of show that there are buy options for this.
You can go research vulnerability prioritization products that actually do the same thing.
Some of them are partners of Anomali and some of them aren't.
So it just kind of came down to what we wanted to do.
Some of the other threat intelligence we already paid for had some of the CVE attack data that we needed.
We wanted more control over our model.
We wanted to decide which key factors were important to us and make sure those carry weight in our model.
And then we have less recurring costs.
It's going to cost more upfront because we want to get a data scientist that might not be familiar with all this data to go kind of dive into it and build it with us.
But in the long run, we're not going to have to pay that yearly maintenance fee.
We're going to have to update our model here and there, but we're not going to have to pay for some platform and some fancy UI that we don't actually need when we can just go converge this data in a single database and make it a little bit more robust.
On the other side, with buy, you have less operational cost.
I don't have to go back to my guys to pull these data sets together to pull new threat intelligence together.
If we change the threat intelligence vendor that changes their API, then we don't have to go rebuild all this stuff.
And you potentially get more data.
As an organization, I can only convince us to buy so much threat intelligence.
With some of these, if it's a specialized vulnerability intelligence product, they have all that data in the back end for you, and it's much quicker to implement.
You just kind of ship your scan data in and then you're done.
But for this talk, we're actually going to focus more on the building and what it takes to build it in your environment.
So how am I going to build this on my own?
This is some of the factors we used building our vulnerability threat score.
And this score will apply to every vulnerability that is pulled in through our vulnerability scanner, whether or not we find it in our environment.
Once we apply it to our environment, it kind of gets multiplied by the asset and stuff like that.
But we wanted to classify and have a score for each and every vulnerability that our scanner had in its knowledge base.
So we started with that CVSS score.
Like I said, that's a fairly good starting point.
You can kind of know where you want to go from there.
But then we kind of talked about other stuff.
The big thing is exploits.
So we kind of started breaking that down, scaling it back, and dissecting it: OK, exploits aren't all created equal.
Sometimes we find blogs where they're just talking about how you can exploit it.
That takes a different level of skill.
You've got to have a fairly advanced person to go read that code and then translate into the actual attack.
Then you step it up and make it easier: now it's an app on GitHub. That's a different score.
Then it ends up in Metasploit or on Exploit-DB, where you just need to run a different tool.
That's a whole different score.
And then you start getting it where it's productized.
So it's ending up in exploit kits or used by different malware, used in different attacks.
We can kind of see, OK, well, it's starting to be productized.
That's one of the highest ratings.
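As a rough sketch, the exploit-maturity tiers just described could be encoded like this. The tier names and the weights are my own illustrative assumptions, not the values from the model in the talk:

```python
# Hypothetical exploit-maturity weights (illustrative, not the talk's actual model).
EXPLOIT_MATURITY_SCORES = {
    "none": 0,            # no public exploit evidence found
    "writeup": 10,        # blog post or pseudocode; needs a skilled attacker
    "poc_code": 25,       # working proof-of-concept, e.g. an app on GitHub
    "tool": 40,           # in Metasploit or Exploit-DB; just run a tool
    "productized": 60,    # bundled into exploit kits or malware campaigns
}

def exploit_component(evidence: set[str]) -> int:
    """Score the exploit factor by the most mature evidence observed."""
    return max(EXPLOIT_MATURITY_SCORES[e] for e in evidence | {"none"})

print(exploit_component({"writeup", "poc_code"}))  # PoC outranks a write-up: 25
```

Taking the maximum tier, rather than summing, matches the idea that each step up in maturity supersedes the one below it.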
Even before exploits are available, or if you can't find that they're truly available, you can watch actor interest to see if attacks are going to start, because there are so many vulnerabilities released every month, but you'll only see actors talking about a very select few.
And they'll be talking about it for quite a while.
So we actually built in a-- it's a binning model where we're looking at a heat map.
So on a month-to-month basis or a day-to-day basis, how is the chatter going up or down on these vulnerabilities?
And that will change our score for the actor interest because over time, actors will lose interest, and then those vulnerabilities reduce their threat score.
Then the epitome is if you hear of breaches being carried out by exploiting it.
That's kind of your highest tier, because at that point you can rate a different score based on whether it's a very targeted approach or whether they're spraying and praying.
What really was the target for this?
And then we left a place in our model-- I didn't put it in the slide-- for analyst input.
If my automated sources don't find evidence of one of these things, but I find it in my additional research, we wanted a way to directly impact the model and input that we found evidence in certain other places.
So we actually built that into our analysis system, to be able to interject our own research that was not found by automated sources.
So the next thing you're going to do is gather your sources.
So the big thing for us was to pull in our scanner knowledge base.
So they already had all the vulnerabilities as they were released by MITRE.
They had some of that exploit information that we needed, but didn't have everything.
So we still need to go beyond that.
So we would need to go to dark web sources to get threat actor interests.
We need to go figure out how much they're talking about it, where they're talking about it.
Is it a lower level forum that anybody can get into?
Or is it a higher level forum that only the top-tier actors can get into?
So we actually rated that differently.
Then there's also paste sites.
So they might post exploit code or do some chatter on there.
And then there's different information sharing, so different groups that we're in.
That's where that manual intervention comes in.
We needed to be able to know, if we get tipped off that this is being used but our other automated sources don't have it, we need a way to inject into that.
As well as blog sites, for getting some of that pseudocode, and then criminal marketplaces, where we can find evidence that it's starting to be productized.
So we'll kind of talk about what it takes to build this model.
So there are a couple of different scores that go into the full risk score.
So we've kind of talked about what it takes to build the vulnerability-- oh, does this work?
Oh, nice-- the vulnerability threat score.
So all those factors that I talked about before.
In a little bit, we'll talk about what an asset impact score is.
It's a little bit different than what we traditionally think about, which is criticality and stuff like that.
We're taking location into account.
We're also taking data into account, but we're also looking at the likelihood of attack.
So we're looking at what I talked about before.
Workstations and DMZ assets are going to get the highest score because they're most likely going to come into contact with one of these attacks.
So this ends up being a 1 to 100 score.
You multiply it by a 1 to 10 score, and you end up with a 1 to 1,000 risk score.
And that really shows, asset to asset, how a vulnerability applies differently across your organization, and you can properly prioritize which ones need to be fixed first.
So let's talk about how we get an overall asset risk score.
Take an asset; let's say it has 10 vulnerabilities.
You add up the risk scores for each of those, and you get an overall asset score.
Why this is important is as machines get out of date in their patches and stuff like that, you need to figure out which ones you need to fully patch quicker than others, which business units are falling behind.
It allows for the grouping of assets into applications or verticals.
So then you can properly report that this application, owned by this vertical, is posing a way higher risk to the organization than this other application.
So it really allows for grouping and-- we like to use the word [INAUDIBLE] shaming-- to really come full effect because especially in the financial world being in the banking industry, everybody wants to talk about risk.
So when we talk about grouping things, we try to do it by number of vulnerabilities and stuff like that.
But that doesn't really convey to most business leaders.
So once we start talking about this risk score, it really makes a big difference.
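The aggregation and grouping described above boils down to summing per-detection risk scores along different dimensions. A sketch, where the asset and application names are purely illustrative:

```python
from collections import defaultdict

def aggregate(detections):
    """detections: (asset, application, risk_score) tuples.

    Sums risk per asset and per application so lagging systems and
    business units surface in reporting."""
    by_asset = defaultdict(int)
    by_app = defaultdict(int)
    for asset, app, score in detections:
        by_asset[asset] += score
        by_app[app] += score
    return dict(by_asset), dict(by_app)

assets, apps = aggregate([
    ("web01", "online-banking", 720),
    ("web01", "online-banking", 150),
    ("db01", "online-banking", 240),
    ("hr01", "hr-portal", 90),
])
print(apps["online-banking"])  # 720 + 150 + 240 = 1110
```

The same pattern extends to verticals or system owners by adding another key.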
So we'll talk about the first phase. We built an internal database where we pulled in all of the knowledge base from our vulnerability scanner and all of our detection data.
Those are the two big things we pulled from our vulnerability scanner: their knowledge base and our individual detections.
And then we started picking our sources to pump in the actual threat intel data that we needed to build the score.
It's actually simple enough that it can actually be done in SQL.
You don't have to have some big machine learning running or script running on it.
It's just a scheduled SQL job that looks at, per vulnerability, what data is pumping into it.
So it actually makes it pretty easy to do.
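A minimal sketch of that scheduled SQL job, shown here with an in-memory SQLite database: join the scanner knowledge base with collected intel evidence and compute a per-vulnerability threat score. The schema, the CVSS-derived baseline, and the evidence weights are all illustrative assumptions, not the actual model from the talk:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE vulns (cve TEXT PRIMARY KEY, cvss REAL);
CREATE TABLE intel (cve TEXT, evidence TEXT, weight INTEGER);
INSERT INTO vulns VALUES ('CVE-2017-0144', 8.1), ('CVE-2019-0001', 5.0);
INSERT INTO intel VALUES
  ('CVE-2017-0144', 'metasploit_module', 40),
  ('CVE-2017-0144', 'used_in_malware', 60);
""")

# CVSS gives the baseline; the strongest intel evidence bumps it, capped at 100.
rows = con.execute("""
SELECT v.cve,
       MIN(100, CAST(v.cvss * 4 AS INTEGER) + COALESCE(MAX(i.weight), 0))
         AS threat_score
FROM vulns v LEFT JOIN intel i ON i.cve = v.cve
GROUP BY v.cve
ORDER BY threat_score DESC
""").fetchall()
print(rows)
```

Run on a schedule, a query like this rescores every vulnerability as new intel rows arrive, with no machine learning required.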
And then once we tie that vulnerability to the asset, we can move into the next phase.
These are just a few of the factors that we used to map our assets to applications and stuff like that.
So some of the other factors are data: is it PCI data?
What data is it?
Is it HIPAA data?
Is it just general sensitive data, or is it a general server that doesn't really have any data?
What application is it part of?
Not necessarily what all applications are running on it, but is this server part of an overarching application in our environment that we can rate the risk of?
And then general network segments-- is it in our user segments?
Is it in our DMZ segment?
Is it in our PCI segment?
What other kinds of segments is it in?
And then to map it all back, we can also report on system owners or verticals as well.
And with all that, we can kind of get our asset impact score.
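Rolling those factors into the 1 to 10 asset impact score could look something like this. The weights, skewed toward workstations and DMZ assets as discussed earlier, are illustrative assumptions:

```python
def asset_impact(data_types, segment):
    """Roll data sensitivity and network segment into a 1-10 asset
    impact score. Weights are illustrative, not the talk's model."""
    score = 1
    # Regulated data weighs more than general sensitive data.
    if {"PCI", "HIPAA"} & set(data_types):
        score += 3
    elif data_types:
        score += 1
    # Likelihood of attack: user segments and the DMZ are hit first.
    score += {"workstation": 6, "dmz": 6, "pci": 4, "internal": 2}.get(segment, 0)
    return min(score, 10)

print(asset_impact(["PCI"], "dmz"))       # exposed, regulated: maxes out at 10
print(asset_impact([], "internal"))       # quiet internal server: 3
```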
So we built this model in a way that we could add additional stuff.
Traditionally, you see pen testing results ending up in a whole separate repository.
The application security guys have their own repository of application code scanning results, and then misconfiguration management is in a whole different area.
So looking at it from a reporting point of view and a threat point of view, I wanted to convey a total risk of a system or an application.
So to do that, we're going to need to combine all these other data.
This is kind of our future step: we're going to add in pen testing results tied to that application.
We're going to tie in that application's code scanning results.
And then any misconfigurations found on the individual systems that are tied to those applications.
So overall, we're not reporting just on the vulnerabilities.
We're able to report on the overall risk of that application as a whole.
So we'll talk about a few metrics that will be beneficial when rolling this out.
So you can roll out the top five highest risk scores by different categories.
You can take all your systems and rank them in a list of which ones have the highest risk.
As you combine the systems into applications, you see which of those applications need to be addressed the most.
In doing that, in Tableau we did a graphical chart where the number of systems makes the circle bigger, but we also changed the color based on how risky it was.
So we knew how hard a problem it would be to tackle.
If we've got a really small circle but it's red, we know we want to tackle that because it's potentially going to be easier, given the smaller number of systems involved.
So we wanted to have a few different ways to slice and dice that.
And the same way with vertical.
How many systems do they have in their vertical?
How risky are those?
Does our CSO need to go talk to their VP and say, hey, what do you guys need from us to make this work?
But the big thing is showing trends of risk score.
Before, from a vulnerability standpoint, a lot of people showed the increase or decrease in the number of vulnerabilities.
But that's not really showing how risk is going up or down for the environment.
So that's what we really wanted to focus on. It's a lot easier for a VP or a business leader to understand risk posed to the environment versus just a number of vulnerabilities, because those numbers-- when we talk about IPS attacks or anything like that-- are really large, and nobody really understands what they mean.
Lastly, one I thought was really good as well: if you just want to pick out one patch across the environment that is going to reduce the risk the most, what would that be?
And so you can actually just pick out one patch that we can blow out across the environment, and that is the one that causes the greatest reduction of risk across the environment.
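That "biggest risk reduction" pick is a simple group-and-max over open findings. A sketch, where the patch identifiers and scores are illustrative:

```python
def best_patch(detections):
    """detections: (patch_id, risk_score) per open finding.

    Returns the single patch whose deployment removes the most total
    risk across the environment."""
    totals = {}
    for patch, score in detections:
        totals[patch] = totals.get(patch, 0) + score
    return max(totals.items(), key=lambda kv: kv[1])

# Illustrative findings: the same patch covers findings on many systems.
print(best_patch([("MS17-010", 720), ("MS17-010", 540), ("APSB18-01", 300)]))
# -> ('MS17-010', 1260)
```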
So these are just a few examples of some of the metrics that we cooked up for this as we move forward to make it more meaningful to the business.
That's kind of what we're moving towards.
So here are a few of the sources I got this from.
Thank you so much.