After you have watched this Webinar, please feel free to contact us with any questions you may have at firstname.lastname@example.org.
I'm going to do quick introductions for the three of us that will be doing the presentation this morning.
And then, I'll lay the groundwork for what we're going to cover, and we'll get to it.
But I think the way that we work, don't hold your questions to the end.
If we are touching on something that you want some clarity on or something, just feel free to throw your hand up or bark out some comments or questions.
We're happy to make this interactive.
I think it works really well, especially in a little room like this, so just putting that out there.
I'm Travis Farral.
I'm the director of security strategy with Anomali.
And I'll pass it over to my colleagues and let them introduce themselves.
I'm Mike Anderson, and I do partnerships.
I'm over at Intel 471.
I'm a CTI operator by trade, spent some time in different Fortune 100 companies, and cut my teeth on the intel side in the government.
I was a special agent with OSI.
It's similar to NCIS, but we didn't have a television show.
Yeah, still don't.
But nevertheless, using that methodology from the government side and the different teams that I was part of or helped build has led me to this moment in my life with Travis and Charissa.
So that leaves me.
I'm Charissa Hightower.
I'm the consulting services manager for Anomali.
I go in and build intel programs based on best practices in the industry, but also, of course, I work for [INAUDIBLE], so we insert ThreatStream where we can.
But my background is very similar to Mike's.
I'm ex-Navy though, a cryptologist.
I spent my entire career at NSA, and I've only been in the private sector for about two years, so that's definitely where I get my mindset and methodology, too.
And so you'll probably notice that through this talk.
But it's good to have you guys this morning.
So let's get started.
So we're talking about CTI metrics.
This is not necessarily a sexy topic, really, but I think it's a much needed topic and something that we should all be trying to think about.
Hopefully, this opens up a broader discussion in our community and throughout the intelligence community.
And I'd love to hear someone else's take on how they're approaching metrics in their organization at some other conference or something.
What I want to do, though, is I want to lay the framework for what the expectations should be for this talk.
What we're not going to be doing is giving you a checklist of this is how you do metrics in your organization.
Instead, what we're going to be doing is covering the concepts behind doing metrics and how metrics should work around threat intelligence, and basically empowering you to go back to your organization with some tangible ideas about where you could get metrics that work for you and your organization.
So we're going to cover a number of things.
Instead of just assuming everybody's a threat intelligence expert-- you know, "I've been doing it for 20 years"-- we're going to cover some basic concepts to make sure we're all on the same page.
Basically, the idea behind doing metrics in threat intelligence is treating it just exactly like an intel problem.
You have requirements that you're trying to meet for your metrics.
You basically develop your collections.
What information do I need to collect to be able to create metrics or to address that requirement?
And just flow through the rest of the lifecycle.
That's basically how you can do metrics, and we'll go into all those concepts and cover all that to make sure everybody understands.
But again, feel free to throw your hand up if you have any questions.
We'll try to give some examples so that the concepts become a little more tangible to you and practical.
And basically, at the end, you'll be able to see that there is some business value that can be derived from being able to measure this stuff, so that's really the intent behind this.
Is this working?
So my wife says that I approach my speaking when I'm out and about as a TV evangelist, which means I may end up in the back of the room.
I have not left, but I'm very passionate about the topics that I like to get up and talk about.
I'm passionate because, as a CTI practitioner, what we want to be able to do is to, as Travis noted, show value, demonstrate that value.
And this slide, I like.
It's my second favorite one, and the reason why-- get back here so I can see.
The reason why it's the second favorite one is because there is one more later in this deck that is my favorite.
And we're obviously using the Ghostbuster theme.
And the very top up there where it says WTF-- just for the record, that basically means "what the heck is a metric, and why do I care?"
I want to make sure that acronym is well known.
But nevertheless, it really is a representation of the challenges that we face when we're dealing with metrics.
I mean, I'm sure most of you in the room have some sense of an understanding of the value of a metric.
It's a standard of measuring something, and that's the premise that we're operating off of as well.
Because at the end of the day, we want to be able to show value behind what we do, especially oftentimes the CTI program is seen as a cost center.
And it's not necessarily showing that value beyond just the bean counting.
So how can we take that data set and demonstrate that value?
We can do it in a lot of different ways, so mapping it back to that philosophy, as Travis mentioned, is going to be important.
So it is a challenge as you can see on the screen, right?
How do we wrangle that?
How do we harness that capability beyond just counting those beans?
And then, of course, the Stay Puft Marshmallow Man in the background is no different than what anybody else in here has, and that's some sense of leadership that is trying to say, show me the value.
What are we doing?
How is this program driving out the security decisions that need to be made at the end of the day?
Because if we can't measure something, it makes it challenging to understand it.
And if we don't have that knowledge, then we're not necessarily able to demonstrate that value, right?
So from a proactive perspective, using the metrics to drive different decisions is what we're trying to accomplish.
So still setting the stage for what we're going to talk about, before we start diving into a ton of metrics examples, we wanted to really start off basic.
So most of you, especially if you're an analyst, have seen that intel cycle, right?
I mean this is ingrained into me.
For sixteen years, this is what I followed throughout my entire career.
But what I will say is that in all the pictures I've seen about it or documentation, one thing you usually don't see on this is the feedback.
We have this up on this slide because we really need to focus on that for a second because this is the last and vital step of the intelligence cycle.
And if you don't have feedback within your organization and within your teams, you don't even have the data to start metrics.
I mean this is what drives you forward every day operationally, and this is what's going to drive your metrics as well.
Being a consultant and going into our clients-- and from leading my own CTI team-- this is the biggest challenge that I have seen in any environment.
I like to give real world examples.
So maybe five years ago, I was put in charge as the lead analyst working nuclear procurement agents.
So in that job, I came in-- and this is no different; apply this to cyber.
This is no different.
So when I came in, I didn't know crap about what components it took to make a nuclear weapon.
I didn't know what all my-- and while your consumers might be your SOC through IR, my consumers were FBI, Treasury, CIA, folks like that.
And every day I put out a report, I got different feedback from each consumer about details, details that they needed to see to make my report actionable for them.
And that was a really good experience for me as an intel analyst, because a year later, after us collaborating like that-- and it is rare; even in the intel community at a three-letter agency, this is still a struggle.
We were a high priority target though.
We worked together every day.
We collaborated every day.
And at the end of that year, this guy's no longer in business.
So just to give you guys a real life example of how effective and how needed that feedback is because we would have gotten nowhere without it.
So with that in mind, we're going to approach metrics just like we do any other intel problem.
We're still going to use this intel cycle, but we're just going to flip it.
We're going to reverse it.
We're still asking questions.
This is what we do.
We're still asking questions on a daily basis.
And I wanted to put this up here because-- just to give you an example, if I just reverse this one step to production, this is your reporting, right?
This is your dissemination.
So if I put this back one step-- if I'm getting feedback from my teams like I was just talking about, and we switch this over to SOC and IR-- now I'm going to be able to see: did I get my report to them on time?
Were they able to action it?
Did I put the detail needed in for a SOC analyst to go in and sweep the environment for additional IOCs?
Did I give enough context then?
Did the SOC and my CTI team give enough context to IR for them to be able to go in and clean it up?
So these are questions that-- it's the same approach, right?
You're just reversing it and going back.
And this is how we're going to start to drive metrics.
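Those dissemination questions can themselves be turned into data. As a rough sketch of what capturing consumer feedback might look like-- all field names and figures here are hypothetical, not from any particular tool:

```python
from dataclasses import dataclass

@dataclass
class ReportFeedback:
    report_id: str
    consumer: str          # e.g. "SOC", "IR"
    delivered_on_time: bool
    was_actionable: bool   # consumer could act on it as written
    had_enough_context: bool

def dissemination_metrics(feedback):
    """Turn raw consumer feedback into simple percentages per question."""
    total = len(feedback)
    if total == 0:
        return {}
    return {
        "on_time_pct": 100 * sum(f.delivered_on_time for f in feedback) / total,
        "actionable_pct": 100 * sum(f.was_actionable for f in feedback) / total,
        "context_pct": 100 * sum(f.had_enough_context for f in feedback) / total,
    }

# Invented sample records for illustration.
records = [
    ReportFeedback("R-101", "SOC", True, True, True),
    ReportFeedback("R-102", "IR",  True, False, True),
    ReportFeedback("R-103", "SOC", False, True, False),
    ReportFeedback("R-104", "IR",  True, True, True),
]
print(dissemination_metrics(records))
```

Even a handful of fields like these, collected per report, gives you something to trend week over week.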
How many of you understand the intel cycle, and it's second nature to you?
If you don't mind, a show of hands.
Intel requirements, do you guys operate or leverage those or at least have an understanding of what we mean?
Yeah, that's actually good for us to understand who we're talking to here.
I think one of the challenges that we all have that work in intelligence is that, in the commercial setting, we work for companies that aren't in business to just create intelligence, really-- unless you work for Intel 471, diamond sponsor by the way, or Flashpoint or something.
I came from Exxon Mobil running the threat intel team there.
They're obviously not interested in producing intelligence, so management there obviously doesn't-- they don't know anything about requirements.
They don't know anything about the lifecycle.
All they see is this cost.
These tools are expensive.
The analysts are expensive.
Good ones are very expensive, and they're hard to find.
And so being able to justify value back to them, I think, is very important for all of us, which is why we felt very passionate about putting this together.
But fundamentally, the real question is how valuable is this?
Even we want to know.
When I go through all the trouble of putting a report together, I want to know that whoever is receiving that is actually reading it and getting value from it, making decisions, taking actions based on the information that I'm giving them.
And so this goes back to the feedback question.
If I'm not getting feedback on that, how can I possibly answer that question?
This then becomes on me to go back and try to get feedback, even if I have to pull it out of people, so that I can understand how I can better serve them.
Here's some things, though, that start to get into a little bit of metrics.
We've got some ideas up there.
What things were put together?
What sources were used?
What collections were used for me to produce that intelligence?
We pay for some of these things.
And knowing that at the end of the month, I paid x number of dollars for this tool, and it looks like I used it one time for one report this month.
Well, if that report was super important for the business and they made some big executive decision on the board level as a result, that may be totally cool.
But if it went to the SOC and they may or may not have taken action on it, after some time, it may be obvious that that tool or that piece of collection is no longer necessary.
But being able to have those insights into where you're getting your data from, how you're producing these things, is exactly what we're talking about in terms of metrics.
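A minimal sketch of that cost-per-report idea, assuming you track which finished reports cited which paid source-- all dollar figures and source names below are invented for illustration:

```python
# Hypothetical monthly figures: what each collection source costs,
# and how many finished reports actually cited it.
monthly_cost = {"vendor_feed_a": 8000.0, "underground_source": 5000.0, "osint": 0.0}
reports_citing = {"vendor_feed_a": 1, "underground_source": 12, "osint": 20}

def cost_per_report(costs, usage):
    """Dollars of collection spend per report that used the source."""
    out = {}
    for source, cost in costs.items():
        used = usage.get(source, 0)
        out[source] = cost / used if used else float("inf")  # paid for, never used
    return out

for source, cpr in sorted(cost_per_report(monthly_cost, reports_citing).items()):
    print(f"{source}: ${cpr:,.2f} per report")
```

As the talk notes, a high cost-per-use isn't automatically bad-- one report driving a board-level decision can justify the spend-- but you can't have that conversation without the number.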
So the resources, being able to measure the impact.
So if I produce something, and it goes to the SOC or it goes to the IR team, what actions were they able to take as a result of that?
Did it speed their efficiency?
Was incident response now able to make faster decisions, operate faster on their feet, and shave maybe a day or a certain number of man hours off of their response as a result of the information I gave them about that threat?
You pretty much have to go back and beat that out of them because they're not necessarily going to volunteer that back to you.
But if you can set up processes where they're automatically giving that back-- maybe your ticketing system, or whatever system they're using to capture the incident response team's activity, is a way to collect that automatically.
So this goes back to the collections piece of the intelligence lifecycle.
Just consider all of the different types of intelligence that your team produces or that you produce-- operational, strategic, tactical, whatever that is.
Consider all the components that go into those things, all of the various costs, how much those things are being used, how effective they are, and how the resulting intelligence is being utilized in the environment.
That's really kind of an overarching view of what we're trying to convey here.
That's pretty much the end of this.
Any questions on any of that before we move on?
I just want to make sure those concepts are being-- Yes, sir?
Are you going to give an example of operational metrics?
We actually have some examples later that should answer that.
But if we don't cover what your expectation is around that, when we get to examples later, feel free to throw that back out there, and we'll try to work through it.
So from a strategic footprint-- I mean the maturity, the scalability of an organization, a team-- how many of you in here produce what you would consider strategic intel?
That's always good to see.
It goes back to the resource management aspect, which is why I was asking.
This is a, again, methodology.
Just to remind you guys, as Travis had mentioned, it is the premise of the mindset, right?
The methodology behind why you need metrics-- and giving examples of that when we get to it-- driving that mindset is important.
And this kind of speaks to it.
I like this slide because-- and I'll anchor back to what Charissa was pointing out in the intel lifecycle itself and reversing it-- it's no different than Alice and the Cheshire Cat.
If you don't know where you're going with your intel program and don't have things to measure, then just like Alice, you'll end up somewhere.
You'll be down the road with your program.
It'll be reactive, and you don't have that guidance.
So that's why I asked the question about intel requirements because if you're setting that operational environment for your team for that function and you have a direction, a strategy, to go, then you're able to capture those metrics along the way.
And being able to capture those metrics, you'll be able to guide your team to where you ultimately want to be for the organization.
So I like this slide because it kind of represents that mindset, that philosophy, behind being able to implement that CTI program so you can go the direction you want.
Now naturally, things are going to happen in any organization, in any CTI team, where you didn't expect it, right?
I mean there's so many different events that occur that will drive you left or right.
But having those resources to expend on it is what I'm going to talk about here in a second.
This is not my first favorite image.
But again, keeping with the theme.
So the question up there, going back to what Charissa had mentioned, is whether we attack the metric problem the same way that we would an intel requirement, an intel question.
And in this case, what is the capability and capacity for collection regarding weaponization of CVEs?
I'm sure most of the folks in here have a focus on CVEs and the weaponization of them.
Looking at the question itself and keeping in mind most of you in here know what I mean when I say an intel requirement or an intel question, that's kind of the way we're doing it here from a grander scale, the framework.
But two things that really call out here.
And I learned quite quickly on the government side, we have to justify our program.
We have to justify our hires, our tools, et cetera.
So one of the things I hope is that you walk away from today, go back, and say, OK, I'm not tracking my capability, and I'm not tracking my capacity.
And ultimately, that means: what is my utilization as an organization?
Because at the end of the day, a business is in business most times to be profitable, to make money.
So we have to approach that mindset, that framework, the same way.
So as Travis noted, when you look at that question, two things break down, the first one being capability.
If my leadership is asking questions around CVEs, maybe that specific intel requirement that I've built is going to be around plant and manufacturing in Asia-Pacific.
So specifically in that space, I need to be able to task my assets, my sources, to answer that question, meaning: what is our exposure to, let's say, 15 different CVEs that are presented to us?
So now I have to turn and look, OK, do I have the ability to answer this question?
And if I attack it the same way that Charissa is mentioning, looking at it also from an intel requirement or attacking it from an intel question, then I can begin to go through that process just in the collection phase, just in that step.
This can be applied to production, analysis, et cetera.
So looking at it from a capability, do we have the ability to do it?
Can I answer these questions about these CVEs?
Well, if I don't know where I'm going, if I don't know my operational environment-- meaning, do I even need to worry about these CVEs? Do we even have the attack surface? Are we even vulnerable if something was to be weaponized and exploited?-- I need to understand that.
So internal data is huge.
What is my attack surface?
That's going to allow me then to say, OK, I do care about all of those. And then I begin to focus on those 15.
But if I don't have, for instance-- well, we'll just look at the different vendors, the industry information, government information.
Maybe I have an indigenous capability.
But if I know, as a CTI practitioner, that I have to have a specific skill set within-- and yes, I'm going to talk about the cyber criminal underground because it's relevant here.
That is a resource or a source of information that I can leverage.
If I don't necessarily have that insight into whether these CVEs are being productized-- threat actors taking them and saying, yes, we can use this from a marketplace perspective.
I can make money off of this.
And now, I'm going to weaponize it.
And then, I'm going to use it-- meaning, where do I need to put my focus?
So if I don't have the visibility, then that's going to be a gap, right?
And it doesn't necessarily have to be only cyber criminal intelligence.
My point is, when we're looking at the ability to do our job in the collection stage, it's understanding that, oh, I have a gap.
Or maybe I don't.
Maybe I can answer the question completely, which is great, because then I can go back and validate, as Travis had noted, the types of sources, the vendors, my relationships-- whether it's with the government, or specifically having that capability on staff to say, you know what, Cindy is a Chinese linguist.
She was able to help us focus in on Asia-Pacific.
I can wrap man hours around the work that she did to answer that specific intel question and then associate dollars to that.
And at the end of the month, or quarterly, or whatever, I'm able to say, yes, I have the ability to answer this question because you gave us the assets to task to do that.
Or I don't have it.
And I have to be able to go back to leadership and say, I can answer 80% of your question.
And if you want me to answer the other 20, then these are the things I'm going to need.
So understanding our capability, our ability to do it, is a huge aspect of the collection step when we reverse it.
And we could do this, again, with all the steps.
We only have 40 minutes, and I'm passionate.
And I'll keep talking.
And I'm going to talk about measurable impacts, too.
The capacity, the time to do it.
So let's say that we don't necessarily know exactly what our CISO wants.
She's given us some hints around it.
It's hard to sit her down and talk to her about what is her intel requirements, what's her focus.
But we do know she cares about vulnerabilities.
And now, I need to be able to go back to my leadership not only and answer the ability but the capacity to do it, the time to do it.
So if I know that that's a prioritization for her, my CISO, my leadership, then I can demonstrate that, yes, we have three analysts.
Here's everything that we're focused on.
It takes the three analysts that we have 40 hours a week, 120-- Numbers give me heart palpitations.
So anyways, we have the hours.
But if something new comes out, a new CVE that pops, or there's a cyber attack or ransomware event that occurs, I've got to be able to demonstrate that capacity week over week, month over month.
Because they don't understand, from an operational perspective, everything that we're doing.
So for me to justify a new hire, again, a metric for our CTI team beyond just counting reports, if I'm tracking my utilization, then I'm able to demonstrate the value of our ability and our capacity, the time to do it.
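The capacity math from the three-analyst example can be sketched like this; the hour allocations and requirement names are illustrative assumptions, not recommendations:

```python
# A minimal capacity model for the three-analyst example in the talk.
ANALYSTS = 3
HOURS_PER_WEEK = 40
capacity = ANALYSTS * HOURS_PER_WEEK  # 120 hours/week of team capacity

# Hypothetical breakdown of where this week's hours actually went.
tasked_hours = {
    "PIR-1 CVE weaponization (APAC plants)": 45,
    "PIR-2 ransomware tracking": 35,
    "briefings and outreach": 20,
    "ad hoc RFIs": 30,
}

total = sum(tasked_hours.values())
utilization = 100 * total / capacity
print(f"Tasked {total}h against {capacity}h capacity: {utilization:.0f}% utilization")
if total > capacity:
    print(f"Over capacity by {total - capacity}h: reprioritize, outsource, or hire")
```

Numbers like these, tracked week over week, are exactly the justification for a new hire that the talk describes-- a utilization figure rather than a report count.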
Because again, you may get six things, seven things shoved your way.
And if you don't have the data, the metrics, behind it to demonstrate, yes, I'm answering it, I either need to reprioritize what I'm doing on a daily basis to answer these intel requirements or we need to outsource.
We need to hire.
We need to be able to partner.
We've got to be able to demonstrate that.
If we're not able to demonstrate that, we're always going to feel like we're in a reactive mode.
So capability and capacity.
So let's look at what measurable impacts can be captured.
I've given you a few of those.
I really believe in tracking the hours and where they're being spent-- running your CTI team almost like a small business.
You have briefing.
You have your outreach.
You have your relationships, your reporting.
So track all that and the capability.
But then use that data.
Use that information and go over to the red team for instance.
And say, OK, I have a gap.
I know that the CVEs are important, the weaponization of them.
I don't have the intel, which ultimately means I can assess with some sense of confidence that if a financially motivated cybercriminal hits our organization, the impact is going to be big.
But I don't know what's going to happen.
So take that gap from a capability perspective, run through it with a pen tester or a red team, and walk through it.
If this was to happen, exploit it.
What assets are going to be exploitable?
What data is on there?
How long will it take us to repair that from a remediation standpoint?
The data loss-- how much would that be?
The impact to brand and reputation?
Capture all that stuff, and then be able to go back and not just simply say, I need a new vendor, or we don't have the people on staff.
Also say, this gap will likely result in this impact, and we've proven it with the experts that you hired and that I work with.
So is this a risk decision that we're willing to accept?
And if so, great.
And you can move on and answer 80% of the question.
But at least you have the metrics to back up the decisions that are being made not only within the organization to help influence and drive those, but also within your team, too.
And that's huge.
Because again, at the end of the day, if I can't demonstrate my capability and capacity in a manner that allows the team to do its job, and do it effectively, it's going to be very challenging.
It will end up like Alice, like I said, down the road.
I'm going to stop there.
Do you have any questions on that, or can I dive in any further, any deeper?
You guys already know I'm passionate, so I'll keep going.
So we're going to basically walk through an example that may or may not happen in a typical organization.
But hopefully, there's enough here that resonates with you that you can apply this to your environment and hopefully make this a little more tangible to everybody here.
Since we're talking about CVEs, we'll go with an example on that.
I mean, it's something that a lot of us deal with in our day jobs when we're working with companies and stuff.
They're always concerned around this.
In this example, we find out through one of our sources-- maybe Intel 471-- that there is a brand new CVE that's actually being weaponized, and it's being offered for sale in the underground.
We, as part of our tasking, as part of our requirements, have to assess this to find out what the potential impact would be of this new CVE being offered and potentially encountered by our organization.
So we gather the information.
We provide all the information that we have available to the SOC so they can generate alerts on this CVE and elevate its priority.
Maybe they already have detections available for it, but they didn't know the priority before because they didn't know it was being used potentially in the wild.
But now, we can give them that information.
Vulnerability management's now got it.
They can now assess which assets in the environment are vulnerable to this.
You can set priorities for patching perhaps and things like that.
We end up with some detections, maybe in our IDS, IPS, next-gen firewall, whatever, where the attempted use of these CVEs was being seen on the perimeter or something.
The investigations that came from those detections led to four confirmed infections that had somehow gotten past us before we were able to get all of the things spooled up, like patching, to prevent them.
They were able to take that information, escalate the patching even further, and protect the environment against the CVE.
So this is a basic workflow of things that might happen in an organization.
Because there's always that lag from when the CVE becomes known, to the CVE being weaponized, to you having patched your environment against it.
But then, being able to apply intelligence into that equation is something that a lot of us end up dealing with along the way.
And this is maybe a somewhat typical example of how things may go.
But in this, there are ways to measure impacts along the way, in terms of, hey, we started this discussion from an intel perspective.
We're the ones that found out the information that this was going to be weaponized before it was weaponized hopefully and before it was actually seen in our environment.
So now we have an opportunity to go gather some metrics on this particular example and give that back to management and say, look, this could've been a lot worse.
But because we were able to get this information into the right hands, we were able to take some actions and hopefully help save the company from something that could have been much worse.
Do you have an example, a CVE example?
I mean, are you talking about internal and external, or strictly just internal?
Well-- [INAUDIBLE] company you're dealing with external issues, and now you're dealing with both internal and external.
So-- [INAUDIBLE] internal within the company that could be an internal company issue relative to nobody even knowing what the effect could be on an external level?
But generally, those businesses just have to do things inside and out [INAUDIBLE].
I'm kind of curious of your thoughts or opinion.
So I follow what you're saying.
For this demonstration, it's relative to internal and external, but as it relates to a CVE that is now being weaponized and used in attacks: how will that impact the organization from a workflow perspective?
So we're just saying that that aspect of looking at a CVE being weaponized and then walking it through, how would that impact the organization?
[INAUDIBLE] example of [INAUDIBLE] CVE example.
Or like maybe-- Or an example, a [INAUDIBLE] example.
You do-- Like a Flash.
So you know how it works.
Adobe will issue a patch for yet another Flash bug or something.
There'll be a CVE assigned to it.
Not every time are there examples of that particular CVE being used in the wild, right?
It may end up being a low priority because nobody ends up using it.
And so that just goes into the regular patch cycle and so on.
But let's say in this example Adobe drops the patch on Tuesday, and Wednesday evening, we're getting chatter from the underground that they've got this available that could be added to exploit kits and utilized.
So now, we've got an issue.
OK, this just came out yesterday.
There's no way we've already patched for it.
But now we've got intel that tells us, hey, we probably need to get on this.
First off, how fast can we detect on our perimeter any weaponization of this?
Is there already a signature available for our next-gen firewall or for our IPS environment?
Let's get those pushed out.
How fast can we get the patch rolled out?
Let's get those people involved.
The vulnerability management team-- how can we assess the potential impact to our user community and internally?
Do you have detections in your vulnerability management platform to be able to do this?
It's like a lot of simultaneous operational things that have to happen inside the organization, but it all starts with the intelligence that says, hey, we're likely to see this being used against us.
What all do we need to do to be able to react to that basically?
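One way to turn that timeline into numbers is to record a few key timestamps and compute the lead times between them. A sketch, with invented dates standing in for the Flash scenario just described:

```python
from datetime import datetime

# Hypothetical timestamps for the patch-Tuesday / weaponized-Wednesday scenario.
timeline = {
    "vendor_patch_released": datetime(2024, 3, 12, 10, 0),
    "intel_reported_weaponization": datetime(2024, 3, 13, 21, 0),
    "ids_signature_deployed": datetime(2024, 3, 14, 9, 0),
    "patch_fully_rolled_out": datetime(2024, 3, 19, 17, 0),
}

def hours_between(t, start, end):
    """Elapsed hours between two named events in the timeline."""
    return (t[end] - t[start]).total_seconds() / 3600

print(f"Patch release -> intel warning: "
      f"{hours_between(timeline, 'vendor_patch_released', 'intel_reported_weaponization'):.0f}h")
print(f"Intel warning -> detection deployed: "
      f"{hours_between(timeline, 'intel_reported_weaponization', 'ids_signature_deployed'):.0f}h")
print(f"Intel warning -> environment patched: "
      f"{hours_between(timeline, 'intel_reported_weaponization', 'patch_fully_rolled_out'):.0f}h")
```

These intervals are exactly the metrics you can trend across incidents: how quickly intel warned the organization, and how quickly the organization acted on the warning.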
So a good example of that is HackingTeam-- I think it was 2015-- which was hacked and breached.
And they leaked-- what was it?
So an example was the HackingTeam breach, where HackingTeam had numerous zero days for Adobe.
I was in the banking sector at the time.
As the breaches rolled out, we were doing emergency zero day patching based off of-- not only was it the proof of concepts out there, but the actual exploits.
And we were seeing immediate adoption in the crimeware and different things that were affecting the internet.
So these are real experiences when it comes to large-scale or offensive breaches that can be weaponized immediately.
And to add to that, the backdrop, the framework of what we're saying here, relative to exactly what you're saying, is: OK, once we've done all those things, let's capture what we did and the cost avoidance of the impact.
Don't just leave it there.
Follow it as far down as you can.
Did you see attempts three months later against something that we did patch?
We saved the organization x amount of money, x amount of man hours.
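A back-of-envelope sketch of that cost-avoidance math; every figure here is an assumption you would replace with your own organization's numbers:

```python
# Back-of-envelope cost avoidance for an emergency-patching exercise.
# All figures are illustrative assumptions, not benchmarks.
infections_prevented = 4          # e.g. blocked attempts seen against patched hosts
cleanup_hours_per_infection = 16  # assumed IR effort per confirmed infection
loaded_hourly_rate = 95.0         # assumed fully loaded analyst/IR cost per hour
avg_incident_cost = 25_000.0      # assumed non-labor cost per confirmed infection

labor_avoided = infections_prevented * cleanup_hours_per_infection * loaded_hourly_rate
incident_cost_avoided = infections_prevented * avg_incident_cost
total_avoided = labor_avoided + incident_cost_avoided

print(f"Labor avoided: ${labor_avoided:,.2f}")
print(f"Incident cost avoided: ${incident_cost_avoided:,.2f}")
print(f"Total estimated cost avoidance: ${total_avoided:,.2f}")
```

The point isn't the precision of any one figure; it's that a defensible estimate, stated with its assumptions, beats having no number at all when you go back to leadership.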
And that's the thing, I think, that's often lost from a metrics perspective: being able to pull those numbers out.
That's absolutely true.
Because I know from my own experience, we would go through stuff like this.
We would tell them, OK, this new Microsoft thing that came out last week, we're already seeing signs that this is being used in the wild, so we need to have emergency patching, all this other stuff.
We would go through all that, and then we would be like, OK, whew.
And now we're on to the next thing.
But we would forget to go through this process of actually measuring the impacts of what we did and being able to go back to management and say, because we're able to react so efficiently against this, we just saved the company x number of dollars.
Or we were able to maybe expose some gaps as a result of this.
We don't have a capability here that we probably need if we're going to be able to address these types of problems in the future.
Or maybe we have process issues that need to be addressed like we ran into a problem.
When we were trying to patch this, we didn't have this system in place.
And we need to maybe put some energy behind, operationally, how we can react to those things better in the future.
But it all starts with being able to measure this stuff, being able to really sit down, think, and apply analysis, just like we would to any other metric or any other intel problem, to be able to give valuable insights to management on how to move forward.
So that's kind of the key here.
So I'm going to hand it over to Charissa to walk through the features.
We're getting close on time.
I want to make sure I get through this.
Hopefully this answers your question operationally, too.
I think this hits home to a lot more people.
Obviously, great examples of metrics, things that you can do, different ways you can look at it.
But from what I see, what hits home most with people is their day-to-day operations.
So up here, I've got a few examples.
So let's assume everything's working great.
Everybody is getting feedback.
You've got your CTI team that's sending their reports and different indicators of compromise to your SOC team and then off to IR.
So at the end of the week, or the end of the day, let's say my CTI team has sent five reports to the SOC this week.
From those five reports, the SOC was able to identify 50 machines that were compromised and 50 additional IOCs related to that threat.
Now they're going to create tickets that go off to IR. IR, because of that, has now been able to find out maybe some attribution.
They've cleaned this many machines.
They've had to reset this many passwords.
Because numbers really tend to hit home, right?
Then at the end of the week, you can go, OK, well, my report, because of CTI, resulted in this many additional IOCs, this many vulnerable machines.
And IR did this to clean all of this up.
So we have a hard number at the end of the week.
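The weekly rollup just described, from CTI reports through SOC findings to IR cleanup, can be sketched as a small script. This is a hypothetical illustration, not any real tool the speakers use; the field names and ticket structure are assumptions.

```python
# Hypothetical sketch of rolling up weekly CTI -> SOC -> IR numbers.
# Field names and figures are illustrative assumptions, not a real schema.

def weekly_rollup(tickets):
    """Aggregate per-ticket counts into one end-of-week summary."""
    summary = {"reports": 0, "machines_compromised": 0, "new_iocs": 0,
               "machines_cleaned": 0, "passwords_reset": 0}
    for t in tickets:
        summary["reports"] += 1
        summary["machines_compromised"] += t.get("machines_compromised", 0)
        summary["new_iocs"] += t.get("new_iocs", 0)
        summary["machines_cleaned"] += t.get("machines_cleaned", 0)
        summary["passwords_reset"] += t.get("passwords_reset", 0)
    return summary

# Example week: five CTI reports, each leading the SOC to 10 compromised
# machines and 10 additional IOCs, matching the numbers in the talk.
week = [{"machines_compromised": 10, "new_iocs": 10,
         "machines_cleaned": 10, "passwords_reset": 10}] * 5
print(weekly_rollup(week))
```

Even a spreadsheet does the same job; the point is capturing the counts at each hand-off so the end-of-week number exists at all.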
And this isn't metrics related, but I've seen this do wonders for teams, because intel can be a black hole. No one knows what they're doing, where that intel is going, or how it affected anything.
So this is really-- I found, heading up a CTI team, my numbers at the end of the week and everybody seeing how they impacted the entire organization was great.
And it also speaks a lot to the value of your team and the different tools you're using to your C level.
So next slide.
And something I had really quick, if I may, that Chris and I have done.
We know the challenges, right?
You can get all this data.
It's so easy, Mike.
Just go and get it.
It is hard.
But something that Charissa and I have done in the past is actually use these metrics and share them with incident response.
Allow them to benefit from that capability.
That way, it's just not self-serving.
You're saying, hey, I need this feedback so I can demonstrate x, y, and z.
And that's what I meant by the not-unrelated metrics question about bringing a team together.
Because we faced this.
Like I said in the very beginning, feedback is the hardest damn thing you will do.
And to get people on the same page, to get people to want to work together, I mean, it sounds like, hey, we should be doing this.
But it's not that easy.
So when I started, I had 10 SOCs globally.
I had a lot of people to catch up with.
So when I did a biweekly meeting and I started throwing these numbers out to everybody, they were like, oh, holy crap.
We're actually catching threats.
We're actually mitigating something and preventing things.
So that was great.
And of course, management was even happier because they've given me x amount of dollars for whatever tool or whatever people that I wanted to bring on board.
And this is the reason for this slide: we talk about numbers being important.
Well, it's even more important to C level.
They want to be able to see where their dollars are going and if you're actually producing.
So if we want to take a specific threat-- WannaCry, NotPetya, anything like that-- and we go back and talk about, well, we found this in our environment.
We found this many machines compromised because of this threat.
And we were able to clean this up.
It takes a little bit of effort, but it's not that hard to then dig deeper and go, well, if we would have had this many machines compromised by that threat, how many millions of dollars did we just save you?
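That back-of-the-envelope cost-avoidance math might look like the sketch below. The per-machine cleanup hours, hourly rate, and downtime cost are invented placeholders; real figures would come from your own IR and finance data.

```python
# Hedged sketch of the cost-avoidance math described above: if N machines
# would have been compromised, estimate what cleanup would have cost.
# All per-machine figures are illustrative assumptions, not industry numbers.

def cost_avoided(machines, cleanup_hours_per_machine=4, hourly_rate=75,
                 downtime_cost_per_machine=500):
    """Rough dollars saved by preventing compromise of `machines` hosts."""
    labor = machines * cleanup_hours_per_machine * hourly_rate
    downtime = machines * downtime_cost_per_machine
    return labor + downtime

# Example: 200 machines patched ahead of a WannaCry-style threat.
print(f"Estimated cost avoided: ${cost_avoided(200):,}")
```

The model is deliberately crude; the value is in having a defensible dollar figure to open the conversation with the C level.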
And then you start talking, right?
You start talking numbers to your team, and you're also talking numbers to C level, which gets-- and I know you guys can add even more on the profitability thing.
It maps back to profitability, right?
I mean, at the end of the day, that's the terminology we have to be able to speak in.
And so one more step and we can get there.
Yeah, we've got two different sets of numbers.
We gotta talk numbers for our teams for ourselves, right, and tracking threats and knowing what we're doing and knowing what kind of impact we're making in our organization.
But we also gotta talk numbers to C level.
And that means a different kind of number: dollar signs.
And that's why I said I think this hits home the fastest to most teams because this is the space that you're in every single day.
And this is a really quick way to show value and to show we can get metrics.
Hey, this is what we produced this week.
Sorry to interrupt you.
You want to-- the last slide here, really, echoes those things we were talking about.
And I'll turn it over to Travis here in a second before we get to our favorite slide.
But it is the methodology.
I hope we've given some examples of that.
But this methodology is, what is the value of intel?
We can wrap numbers around it.
If we can wrap dollars and cents around it, man hours, these types of things, using the vulnerability workflow, the CVE, there are lots of different directions that we can go with that.
And the applicability, for instance, of capability and capacity in the collection step: when we reverse it, do we have the right intel requirements? Going after the feedback allows us to understand our production.
So it really is attacking it with the same methodology that you would any other intel problem.
This kind of accentuates that.
What am I securing and protecting?
It really does come down ultimately to business continuity, resiliency, profitability.
If we can map everything that we do, then it truly does become value-driven.
And sometimes, I get it, it is challenging.
But hopefully, as it relates to the workflow that we've walked through, you can take that back and attack each of those steps within the cycle.
I love the last comment being those things that you read, that you wrap value around.
And in each organization, that value may be a little bit different.
It may be sustainability of a project.
It may be critical assets.
It may be a new footprint that you may be standing up.
Use something to drive that value. What is that value to your organization, your CISO, et cetera?
It may be a little bit different, but the methodology is ultimately the same.
It comes down to treating it just like intel.
If I want to measure this, I know that I'm able to deliver this intelligence.
I have all the collections and everything.
I'm meeting my requirements, all that stuff.
But then when I want to measure it, now I may have to do-- it's a collections problem again.
What do I need to gather information on?
SOC tickets, operational things that happened, specific IOCs that were applied in different environments.
What happened as a result of those IOCs?
Can I get information from the operational team to find out what was blocked since that was put in, that was dropped in on the 17th?
How many blocks were there between there and the end of the month?
Because that might be information, data that I can come back to, analyze, and try to apply some numbers to, to say this was an actual real-world impact to this organization.
Even back to dollars and cents, if all of those could have been infections that we were able to help impact, then that's value that we can show.
Like I said, the last example's an easy win.
I mean, you guys are performing threat research and production on an everyday basis.
So if you look at it from the very basics like that, what you've done that day, you can immediately show value.
And then with all these other examples that we've shown you, just like Travis and Mike were saying, you can start to dig deeper and put numbers and dollar signs on it, and ask, hey, is a source valuable to us or not?
Because those are all things that are going to derive from that very basic step really.
And that's our favorite one.
[LAUGHTER] My hands got smaller.
I don't know what happened.
So hopefully this wasn't confusing, and hopefully the concepts we're trying to convey to you around metrics came through.
We think this is something that is very important for us as an industry, as practitioners in this space.
And we really hope that people will continue to talk about metrics around CTI more even though it's not a very sexy topic.
But it is a very important topic, I think, for all of us.
It's not as sexy, but it's not as hard as everybody thinks.
That's the point.
I mean that's [INAUDIBLE].
It's talked about a lot.
How do we do it?
How do we do it?
Well, it's not as difficult as a lot of teams make it.
So if anything, hopefully you walked away today realizing that, hey, at least I can go back to my computer and [INAUDIBLE] this mindset and get some backup.
This is the first.
I mean, obviously, we could talk a long time about this.
We had to be very careful not to drill down super far into different areas.
But we do have plans to continue this conversation in different blocks of-- OK, Mike.
You kind of hit on capability, capacity.
Can we build some playbooks now?
Can we look at opportunities to develop that a little bit further?
And that's where we're at right now.
We've got a lot of good info.
I hope that the next time you see us, all three together, even if it's not with this picture, you will check us out again.
So we'll open up to questions.
Any other questions?
Yeah, go ahead.
I was just going to make a comment.
We often get remiss when we talk about metrics as some important KPI, like your example of the workflow from intel, a CVE through impact. We get so wrapped around measuring the impact of the intel that we don't think about the impact of our processes and our tools.
[INAUDIBLE] dots as a date into a spreadsheet, and you track.
And say, hey, look, we have to go over to [INAUDIBLE] management.
Ask for this [INAUDIBLE] or ticket and wait three weeks.
Where they upload all their scans to and you have read only access to-- Yeah.
We could actually do our own analysis without impacting that team and be able to help them rack and stack this stuff in three hours instead of three weeks, as we're waiting for an email back that's low priority [INAUDIBLE].
You [INAUDIBLE] those KPIs can actually really show value to what threat intel can do to help [INAUDIBLE] prioritize [INAUDIBLE].
And that's why I said it was so hard not to go too in-depth on certain things.
Because every single point that we made in here, it's exactly what you're saying.
You can go that in-depth with it.
You can show value in really anything if you have that mindset.
Like I said, we want to go further into this.
And we want to take every step of the intel cycle and just really go more in-depth exactly like you're saying.
That was a great, great point.
[INAUDIBLE] We're going to use that.
You want to join us?
You want to come up?
We'll use you.
So on the [INAUDIBLE] feedback.
We've done something similar to that.
And I'm curious.
Have you had the experience of getting the feedback back to you?
Because the feedback back to you is not always sunshine and rainbows.
Can you talk a little bit about that?
Well, just getting it back to you in general?
I mean, do you want the process for how you would do that, or-- [INAUDIBLE] Yes.
Actually give the stakeholders intel.
So it causes the SOC to burn FTEs for the intel you give.
So they can give you back a report that says, yes, you gave me 100.
We worked on 100.
Only 10 were valid.
How do you deal with that?
Actually, I have experience with that.
Because a lot of times, from resource management perspective, we may be leveraging intel to help answer that question.
And that intel, through our due diligence, right, appears to be-- with initial confidence because we're still vetting them out-- providing valuable information.
And getting that feedback is a tremendous value.
Because now, it lets me assess that either my process from the CTI team is broken.
My source-- [INAUDIBLE] Yep.
Maybe my requirements are off.
So for me, I'm going to give positive feedback because it's not always going to be-- [INTERPOSING VOICES] We're not looking for, like you said, rainbows.
And we want negative feedback as well.
So if I send my SOC 50 indicators or however many threats and they get nothing back in our environment, OK, well, I need to refocus.
I mean, negative feedback is just as important as positive.
I need to refocus.
I need to hone my collection.
What do you do with that information from a metric perspective?
Well, if it's feedback like we're not finding anything, we're not getting anything additional in, or we need more context, then that's where, like I'm saying, you hone your collection.
You focus on something else.
And maybe you come back to that.
Because it's not to say it's not there right when they sweep the logs today, but it might be a week from now.
Those are things that you can do.
But when they come back to you and say, I've had 10 positive hits, then that's the information we were talking about that's going to go to the SOC-- I mean IR, excuse me.
They're going to clean that up.
And then, we're all going to use that together.
Does that answer your question?
But at the same time, for me, to echo what she's saying from the reverse side, it allows me to demonstrate metrics to say, hey, working with IR, providing that type of information, allowed me to have cost savings.
I'm getting rid of a source, or I'm not wasting my hours here.
And that resulted from the feedback loop that I have from your team, which allowed me to make a value-driven decision internally to no longer use this resource or no longer focus collection in that area.
So you still can capture metrics around that and even attribute it to-- say it's the SOC team or the hunt team that's providing that back.
Maybe we need to retune Tanium to do something else because it's not working in our workflow process.
So to echo what she's saying, but at the same time, use those metrics to make risk decisions internally with the CTI team.
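The "100 indicators sent, 10 valid" feedback in the question can be turned into a simple per-source precision metric that drives exactly the retire-or-retune decisions described here. This is a hypothetical sketch; the source names and the 20% cutoff are invented, and a real team would pick thresholds that match its own risk tolerance.

```python
# Sketch of turning SOC feedback into a per-source hit-rate metric, as in
# the "100 sent, 10 valid" example. Source names and the 0.2 threshold
# are hypothetical assumptions for illustration.

def source_precision(feedback):
    """feedback: {source: (indicators_sent, confirmed_valid)} -> hit rate."""
    return {src: valid / sent
            for src, (sent, valid) in feedback.items() if sent}

def sources_to_review(feedback, threshold=0.2):
    """Flag sources whose hit rate falls below the cutoff for retuning or retiring."""
    return [src for src, p in source_precision(feedback).items()
            if p < threshold]

fb = {"paid_feed_a": (100, 10), "internal_hunt": (40, 22)}
print(source_precision(fb))   # hit rate per source
print(sources_to_review(fb))  # ['paid_feed_a']
```

Tracked over time, the same numbers capture the cost-savings story: dropping a low-precision source is itself a metric you can attribute back to the feedback loop.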
Thank you very much.
I think that's another really good point.
And hopefully, we're able to give you something.
Basically, that feedback, it's tough to measure.
But you should be able to, like he was saying, figure out that we made some process improvements or made some decisions as a result of that feedback that led to this or led to tuning.
So a month later, we got different feedback.
We got better feedback.
And we go back to management and say, this is what we did.
This is what that resulted in.
And hopefully, there's some way to show some kind of value at least, maybe in dollars and cents perspective.
Like, the SOC burned this many hours, say 90 wasted hours of effort, or whatever. And now we've been able to lower that number this last month because of process improvements; maybe they only had 10 or 15.
So that's where the metrics part kind of comes in, if that makes any sense.
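The month-over-month improvement just described, 90 wasted SOC hours down to around 10, can be expressed as a tiny before/after calculation. The hourly rate here is an invented placeholder to show how hours become dollars.

```python
# Quick sketch of the month-over-month comparison described above: wasted
# SOC hours before and after a process change. The 90 -> 10 figures are
# the talk's hypothetical example, and the hourly rate is an assumption.

def improvement(before_hours, after_hours, hourly_rate=75):
    """Summarize hours and dollars saved by a process improvement."""
    saved = before_hours - after_hours
    return {"hours_saved": saved,
            "dollars_saved": saved * hourly_rate,
            "reduction_pct": round(100 * saved / before_hours, 1)}

print(improvement(90, 10))
```

One number like `reduction_pct` per month is often enough to show management that the feedback loop is actually paying off.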
[INAUDIBLE] Yeah, go ahead.
[INAUDIBLE] I would say the [INAUDIBLE] And bringing it back to whiteboard sessions.
Do we agree with this statement?
How do we fix our product-- Awesome.
[INAUDIBLE] quantitative analysis.
Thank you everybody.