Webinar
Turning Intelligence Into Action with MITRE ATT&CK: Detect ‘19 Series
After you have watched this Webinar, please feel free to contact us with any questions you may have at general@anomali.com.
Learn more about MITRE ATT&CK.
KATIE NICKELS: Welcome everyone.
Thanks for joining us.
I'm Katie Nickels with the MITRE Corporation, joined by my colleague Adam Pennington for Turning Intelligence into Action with MITRE ATT&CK.
So what we found is we go around and talk to a lot of people about ATT&CK.
They've maybe heard of it, so we'll be going over what it is and of course, as the title suggests, how you can put it into action.
A little bit about me.
I work for MITRE, obviously.
I've been doing incident response and threat intel for about 10 years now.
That's my passion.
As my side hustle, I also teach for SANS.
So that keeps me busy.
ADAM PENNINGTON: I'm Adam Pennington.
I'm part of the core ATT&CK team, along with Katie.
I've been working in deception operations for the last 10 years, as well as a number of years working with ATT&CK.
So if you haven't heard of MITRE, we're a not-for-profit that's been doing research, engineering, and generally working in the public interest for over 60 years.
So we've put out a number of things like ATT&CK and other efforts that are out in the wild.
So I want to start with a quick show of hands.
So before walking into the room today, how many of you have actually heard of ATT&CK?
OK, pretty good number.
KATIE NICKELS: That's a lot.
ADAM PENNINGTON: How many of you have looked at specific tactics and techniques?
Gotten into the data a bit.
A bit less.
How many of you are using ATT&CK today?
OK, so it's actually a relatively high number from some of the audiences I've been around.
So since everyone's not at that same level, I am going to get into a little bit of the basics of ATT&CK and talk through how it's structured.
We're then going to get into working with threat intelligence, talking about how you can work with some of your own data in it.
We'll cover a lot of the biases around some of the data we've added, how to work with them, and then what you might be able to do with it in terms of recommendations to your defenders.
So start with, what is ATT&CK?
At the end of the day, it is a knowledge base of adversary behavior.
It's like an encyclopedia getting into different things that actors do largely after they break into an enterprise system.
It's based on real world observations.
Everything in ATT&CK is something that actors have been seen to do, generally state-level actors.
This isn't theoretical or stuff that we've just seen from red teams.
It's stuff out there in the wild.
It's free, open, and globally accessible.
So any of the material we're talking about today, you can actually see on attack.mitre.org.
It's out there, it's public.
It can be used by anyone from a student to a corporation.
We try to make it as usable, license wise, as we possibly can.
It's a common language.
And that's something we'll be getting a bit more into today.
But it can be a way for people to be able to discuss behaviors and be on the same ground.
And finally, it's community driven.
So most of the content that's going into ATT&CK today is actually coming from contributions from the community: suggesting reports we should be mapping in, suggesting new techniques that we're missing.
And we encourage anyone here, if you take a look at our website, it talks about how to actually contribute to us.
And let us know if there's something we're missing.
So when we talk about the space that ATT&CK lives in, we often like to use David Bianco's Pyramid of Pain.
So this is a representation of how hard you make life for adversaries when you block their access to a specific type of indicator or behavior.
So as you work up the Pyramid of Pain, it gets tougher and tougher for an adversary to get off of.
So starting with hash values: if I add a bit to the end of a binary file, that hash value is completely gone.
So it's something that's super easy.
I can buy a new IP address or domain name at the drop of a hat.
Working your way up into TTPs and behaviors.
So I've got my routine that I go through each and every day.
It's not easy to break and just suddenly decide I'm going to do something different today, and we feel that adversaries are in a very similar space.
And that it's hard for them to change up the fundamental ways that they actually do an intrusion.
And so ATT&CK tries to live in that space and give a way of describing the tactics, techniques, and procedures, or behaviors, that adversaries use.
So if you're familiar with ATT&CK, and it looks like most of the room is somewhat familiar with it, this is probably the view that you've seen in the past, what we would call the matrix.
And so you're talking about tactics, techniques, and procedures.
They're usually three words we just run together and think of as one thing, but in ATT&CK, they're three separate concepts.
So first we have tactics.
These are the adversary's goals, working across the top, the general things an adversary is trying to accomplish.
Whether it's initial access, where they're trying to get into the network, or credential access, where they're trying to pull credentials out of a system.
Or something we just added in impact, which is where an adversary is trying to destroy or disrupt a [INAUDIBLE] system.
Working down under each of these tactics, we have the individual techniques.
And so that's how the goals are specifically achieved.
So instead of initial access, we have spearphishing attachment.
Instead of impact, we have data destruction.
So getting into more specific ways that the adversary is doing it.
Behind each of these cells, though, there is a lot more information.
And so you might be thinking to yourself, tactics, techniques, and procedures.
So within each of these techniques, we have procedures which are examples of specific ways that real actors have actually performed this technique.
They may have some technical detail that may be useful for writing your detections on.
And so each of the pieces of ATT&CK actually has this.
So, talking about how there's a lot behind each of these cells, I'm going to dive a little bit into what's here, especially so we can refer back to it later.
Each of these techniques starts with a description.
So this is an idea of how that technique is technically done.
Oftentimes what it is the adversary is trying to achieve with it, and giving an idea of what's going on in as simple language as possible.
Each technique has metadata, which can be really useful.
We have this technique ID that's used by a lot of different products.
It can be a way to pivot between different ways of looking at these behaviors.
We have what tactic or tactics the technique appears in.
We have platforms.
So this is what sorts of environments can the technique be done on?
Right now it's Windows, Mac OS, and Linux.
We're actually working on adding a number of cloud platforms to this over the next couple of months.
And so that's something that you're going to see expand really soon.
Data sources.
What sorts of things might you want to collect on your network or on end hosts in order to be able to see this behavior?
And then finally, keeping track of things like versioning.
Each technique has information that might be really useful for your defenders or to pass onto defenders as well.
So we recently restructured mitigation.
And so each of these is now something that's clickable.
You can go in and see what other techniques a given thing you do might be able to stop from happening on your network.
And then detections, which are currently a bit more free form ways that you might be able to detect this particular behavior.
Into the procedure examples.
So this is what I was showing on top of the matrix earlier.
And these are the specific examples of adversaries doing this technique.
And then with everything in here, especially where we're talking about things specific adversaries have done, we have references.
So if you don't believe us, you can always go in and check our work.
It's right there, it's clickable.
We want you to be able to see whether you agree with what we've said about it.
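All of the technique metadata described here is also published as machine-readable STIX content, so you can poke at it programmatically rather than only through the website. The sketch below is a minimal example, assuming MITRE's public cti GitHub repository and the x_mitre_* field names used in the enterprise bundle at the time of writing; treat both the URL and the field names as assumptions to verify against the current release.

```python
# A sketch, not an official API: pull the public enterprise ATT&CK STIX bundle
# and print the metadata behind one technique page. The URL and x_mitre_* field
# names are assumptions to verify against the current mitre/cti release.
import requests

ATTACK_URL = ("https://raw.githubusercontent.com/mitre/cti/"
              "master/enterprise-attack/enterprise-attack.json")


def get_technique(bundle, attack_id):
    """Return the attack-pattern object whose ATT&CK ID (e.g. T1053) matches, or None."""
    for obj in bundle["objects"]:
        if obj.get("type") != "attack-pattern":
            continue
        for ref in obj.get("external_references", []):
            if ref.get("source_name") == "mitre-attack" and ref.get("external_id") == attack_id:
                return obj
    return None


bundle = requests.get(ATTACK_URL).json()
tech = get_technique(bundle, "T1053")  # Scheduled Task, used as an example later in the talk
if tech:
    print(tech["name"])
    print("Tactics:     ", [p["phase_name"] for p in tech.get("kill_chain_phases", [])])
    print("Platforms:   ", tech.get("x_mitre_platforms"))
    print("Data sources:", tech.get("x_mitre_data_sources"))
```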
So talking about the groups that we've got doing these techniques, we also have pages for groups and software.
So this is tracking the individual actors that we've seen do these techniques.
So we've got brief descriptions of the groups, again, cited back to the open public threat intelligence reporting this is coming from.
We've got associated groups.
And so it's-- a lot of different companies have different names for related behavior.
And so we've made an attempt to try to track where there's significant overlap between a lot of these group descriptions.
They may not be 100%.
They may be partial.
And in a lot of cases, we're going based on what reporting has actually said the overlap is.
But it can be a helpful way to try to understand related behavior.
Again, we cite everything.
So those associated groups, you can go back to where it is.
Techniques used, this is sort of the inverse of what I showed on the technique pages.
So for each technique, we're tracking what groups do it.
For each group, we're tracking what techniques go with it.
And again, fully cited.
I'm not going to dive into looking at them individually, but we also have software pages that look a lot like this, where we've taken a look at various tools and pieces of malicious software and mapped out the various behaviors that can actually be done with that tool.
So this is something where you can pull it all together and start to have an idea of everything that a particular adversary could do.
And finally, yet more references.
And you'll find that is a theme throughout ATT&CK.
There are over a thousand citations currently on attack.mitre.org.
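The "techniques used" view on a group page can be rebuilt from the same public data. Here is a minimal sketch that lists the techniques mapped to one group by following the "uses" relationships in the STIX bundle; the bundle URL and the group name "APT28" are examples and assumptions to check against the data you actually pull.

```python
# A sketch of the "techniques used" view for one group, built from the public
# STIX bundle. "APT28" and the "uses" relationship type reflect how ATT&CK
# publishes its data today; treat both as assumptions to verify.
import requests

ATTACK_URL = ("https://raw.githubusercontent.com/mitre/cti/"
              "master/enterprise-attack/enterprise-attack.json")

objects = requests.get(ATTACK_URL).json()["objects"]
by_id = {o["id"]: o for o in objects}

# Find the intrusion-set (group) object we care about; raises StopIteration if missing.
group = next(o for o in objects
             if o.get("type") == "intrusion-set" and o.get("name") == "APT28")

# Follow "uses" relationships from the group to attack-pattern (technique) objects.
techniques = sorted(
    by_id[r["target_ref"]]["name"]
    for r in objects
    if r.get("type") == "relationship"
    and r.get("relationship_type") == "uses"
    and r.get("source_ref") == group["id"]
    and r["target_ref"].startswith("attack-pattern--")
)

print(f"{group['name']} is mapped to {len(techniques)} techniques in open reporting:")
for name in techniques:
    print(" -", name)
```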
So there are a number of ways that we see people primarily using ATT&CK.
Today we're going to be focused on threat intelligence with a teeny bit of detection.
But there's a number of different things people can be doing.
So detection.
This is probably the most popular use case for ATT&CK, where people are writing analytics, tracking what kinds of detections they're doing with their environment, and looking for ways to actually find adversaries.
Assessment and engineering.
So being able to take a look at your overall stance, where all of your pieces are together, and what sorts of changes you might want to make in the future for your environment.
Threat intelligence.
This is what we're going to be focusing on the rest of the time.
Looking at comparing and communicating adversary behaviors.
And finally, adversary emulation.
This is the use case that ATT&CK originally came out of, where you can use it as a way for red and blue teams to talk to each other, do purple teaming.
And a way to track different ways of doing adversary engagement.
So I'm going to turn it over to Katie to talk about threat intelligence use case.
KATIE NICKELS: So let's dive in a little more.
Those were the four key use cases.
Threat Intel is obviously the best one, because that's the focus of this conference, and I'm the threat Intel lead for the team.
So we're going to clearly focus there.
The value of structuring what we know about adversaries using ATT&CK is pretty powerful, because it lets us do a couple of key things we'll talk about.
Comparing behaviors, whether it's one group to another over time.
Also communicating in a common language, we'll talk about that.
As Adam said earlier, ATT&CK is this common language where people across teams, even across different companies, can be sure they're referring to the same thing.
So let's dive in.
Comparing groups to each other.
So a constant challenge among CTI teams is figuring out what the greatest threat to them is, doing things like threat modeling.
But from there it can be a challenge to translate, OK, what's the adversary doing, into how do we defend against it?
How do we detect, mitigate, those behaviors?
So here's a chart we've made using a tool called the ATT&CK Navigator.
It's a free, open tool that lets you visualize different things about ATT&CK.
So what we've done is in yellow are the techniques that APT28 has used that we've mapped based solely on open-source reporting.
That's a really, really important little asterisk down there, because this is not representative of everything that APT28 has ever done, because no way could we claim to have that visibility.
No way is all that stuff in open-source, for good reason.
This is a subset.
This is what we've seen in open-source reporting and we've had time to map.
So with that important caveat which we'll dive into later, yellow cells here are techniques that APT28 has used in the past.
Similarly, another Russian group, APT29.
In blue are the techniques that they've used in the past.
You can probably guess what's coming.
Yellow plus blue, drum roll equals green.
Because these behaviors are structured in the same way, you can start to overlay them.
And say for example, if we're really concerned about APT28 and APT29, maybe those techniques in green are the ones that we should start with.
A lot of folks come to us and say, ATT&CK seems really cool, my bosses told me to use it, I want to use it.
But it's overwhelming.
There are 244 techniques in enterprise ATT&CK, which is everything from initial access through command and control and impact.
This gives you a way to say based on the two groups that maybe I care about, let's start there.
Let's start with those green techniques.
And of course you can substitute any groups you care about.
You can do this on ATT&CK Navigator itself.
We have a hosted instance, we have a walkthrough showing you how to do just this.
Let's take it a step further, though.
OK, green maybe that's what we care about prioritizing.
What if our SOC or our defender had done an assessment across our entire enterprise of the techniques maybe we can or can't detect, overlay that on here, and then let's say, OK, maybe those five circled in red, those are the ones that both groups have used in the past and we know we cannot detect or defend or mitigate.
It's a pretty powerful way to take the threat intel we have about what adversaries are doing and overlay it on our defenses.
That's comparing groups to our defenses.
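The yellow/blue/green overlay can also be built in code rather than by clicking through the Navigator. The sketch below computes the techniques two groups share and writes a Navigator layer file; the bundle URL, the group names, and the layer fields shown are assumptions based on the public CTI repository and the Navigator's published layer format, so check them against the versions you actually use.

```python
# A sketch of building the group-overlap heat map as an ATT&CK Navigator layer.
# Bundle URL, group names, and layer fields are assumptions; check them against
# the mitre/cti repo and the Navigator layer-format docs for your release.
import json
import requests

ATTACK_URL = ("https://raw.githubusercontent.com/mitre/cti/"
              "master/enterprise-attack/enterprise-attack.json")

objects = requests.get(ATTACK_URL).json()["objects"]
by_id = {o["id"]: o for o in objects}


def technique_ids(group_name):
    """ATT&CK IDs (T####) of techniques a named group is mapped to via "uses"."""
    group = next(o for o in objects
                 if o.get("type") == "intrusion-set" and o.get("name") == group_name)
    ids = set()
    for r in objects:
        if (r.get("type") == "relationship" and r.get("relationship_type") == "uses"
                and r.get("source_ref") == group["id"]
                and r["target_ref"].startswith("attack-pattern--")):
            for ref in by_id[r["target_ref"]].get("external_references", []):
                if ref.get("source_name") == "mitre-attack":
                    ids.add(ref["external_id"])
    return ids


overlap = technique_ids("APT28") & technique_ids("APT29")  # the "green" cells

layer = {  # minimal layer; add the version fields your Navigator release expects
    "name": "APT28 and APT29 overlap (open reporting only)",
    "domain": "enterprise-attack",
    "techniques": [{"techniqueID": t, "score": 2, "color": "#31a354",
                    "comment": "Used by both groups in open reporting"}
                   for t in sorted(overlap)],
}

with open("apt28_apt29_overlap.json", "w") as f:
    json.dump(layer, f, indent=2)  # load via "Open Existing Layer" in the Navigator
```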
Next, comparing groups over time.
There's an awesome team called Unit 42 that has developed adversary playbooks based on ATT&CK.
And they take this approach.
They divide it by, I think it's about six-month periods, where they say, here's what we've seen an adversary doing in these six months, like OilRig.
Then the next six months, and the next six months after that.
Maybe certain techniques are more or less popular.
So let's say, notionally, OK, this is a group we were tracking in 2018, last year.
Some techniques disappeared.
They stopped using them.
Gives us maybe a hypothesis to say, well, why did they change that up?
Is there something else they're doing?
Did they realize we were onto them?
So letting you build these hypotheses about adversaries.
So that's the comparing use case.
Comparing groups to each other over time.
The next key use case for mapping threat intel to ATT&CK: communicating.
And Adam already talked about this common language thing.
So maybe we have our CTI analyst who says, yep, this is what our adversary is doing.
There's a run key, Adobe Updater they want to look for.
And sometimes I found, in teams I've worked on, sometimes the threat Intel analysts and the defenders don't quite know what the other's saying.
They're not quite on the same page.
Well, with ATT&CK, if you're mapping this to ATT&CK, the CTI analyst can say, OK, I mean T1060, that Registry Run Keys technique in ATT&CK.
And let's say the defenders have already been looking at ATT&CK.
Oh, T1060?
Oh, of course.
Yeah, we have registry data.
We can detect that.
A way for teams within the same organization to communicate.
CTI and defense.
Mapping to that common language of ATT&CK.
Next communicating more broadly across the community.
So maybe company A says APT1337 is using autorun.
Company B says FUZZYDUCK used a Run key.
Those are two ways of saying the same thing.
It's still just that registry run key.
So for any vendors in the room, this is great, because then your consumer says, oh, maybe there are two different vendors I'm reading, or logs from different products.
Oh, you're talking about the same thing.
Again, communicating in that common language.
That sounds really cool.
Like those heat maps I showed, like that seems fun.
I kind of want to do that.
Well, a minor issue.
You have to have the data structured in ATT&CK format.
So let's dive in a little bit on how you could go about doing that.
And luckily, to give you a little bit of a teaser, we have some things coming up to help out with this.
Some analysts that I work with are working on a tool right now that helps automate this.
So the goal is to have that released this fall.
So keep an eye out on our Twitter for that.
This process of mapping data to ATT&CK, mapping information you have, it's not easy.
It takes some time.
So we've divided it into these five rough steps, plus step zero, of course, which we're going to walk through today.
It's really important to note that as you map different data to ATT&CK, you can use both finished reporting from others and your own raw data, and we'll talk about some of the pros and cons of each of those.
Step zero, because we're math and computer people.
We have to start counting with zero.
Understand ATT&CK a little bit.
It's tough to map any data you have to it if you don't even know what it is.
So some resources we recommend-- well, you're coming to an ATT&CK presentation now, so that first one, check.
For anyone who's not here, we have a Getting Started page on our website.
Recommend checking that out.
We have a bunch of presentations recorded.
You can listen to, watch, try to get a sense of what is this ATT&CK thing.
What are these tactics, those adversary technical goals; those techniques, how those goals are achieved; and what are some of the use cases.
So understand a little bit about ATT&CK.
We also have a blog.
We have an ATT&CK 101 blog post that the ATT&CK lead, Blake Strom, wrote.
That'll get you started.
I recommend everyone read those tactic descriptions, what those goals are, so you know what you have to choose from.
And then skim those techniques.
There are kind of a lot of them, over 200.
So just skim those to get a sense of what they are.
Next thing that I recommend is work as a team, learn together.
One SOC that I know, for example, took a different technique every week and said, hey, one analyst, you report on what that technique is and how we might be able to detect it in our environment.
So go technique by technique, learn together.
Step one, find the behavior.
Think of that Pyramid of Pain.
When we're working with ATT&CK, we're not looking for indicators like hash values and IPs.
And I know.
As a CTI analyst, I got so used to finding those, highlighting them, extracting them.
But you're thinking differently.
You're looking for behaviors.
Not what's the indicator, but what was the behavior behind it.
Also realize that some of the information that you have may not be the most useful for mapping to ATT&CK.
For example, static malware analysis, reverse engineering the assembly, may or may not be the most useful.
Sometimes dynamic might be more useful.
If you have victim information, that's great for mapping to, for example, the diamond model, another framework.
But that's not so great for mapping to ATT&CK.
So realize that different frameworks, different models, have pros and cons.
So people often ask us, which one's better?
ATT&CK, Kill Chain or Diamond?
It depends on what you're trying to do.
It depends.
And you can use them all together, as well.
So finding the behavior.
Let's walk through.
This is an older FireEye report on APT3.
So thinking through.
And you might scan this and be like, well, I don't really have any indicators.
Like, this is a non routable IP.
No indicators I can do here.
So going through thinking of the behavior.
Some kind of exploitation.
Command line and whoami.
Creating persistence, establishing a connection.
Start finding those behaviors.
From there what we're going to do is we're going to find the tactics and techniques for each of those behaviors.
I'm going to walk you through that.
So establishing that SOCKS5 connection has a tactic and technique.
And then over TCP port 1913.
What's the tactic and technique there?
Next, research the behavior.
Maybe you've never heard of SOCKS before.
And that's fine.
CTI analysts, and as they should, come from all kinds of backgrounds.
If you've never heard of something, research it.
No problem.
Go talk to someone on your team.
Go to the internet.
This is time consuming, but over time, it builds better analysts who have a more thorough understanding of adversary behavior.
So Wikipedia, awesome starting point.
Maybe you don't know what SOCKS is, you look it up on Wikipedia.
Tells me a couple of things.
It's an internet protocol at layer 5, the session layer.
All right, so I got some information.
I know a little bit about what SOCKS is.
Port 1913.
Has anyone ever seen port 1913 in use in your environment?
I know I hadn't.
I was like, what the heck is 1913?
You go to SpeedGuide, which has a port guide, and it's armadp, which I had also never heard of.
So kind of weird, but now I know a little more about that port and protocol.
Next up, translate that behavior into a tactic.
So we know what the behavior is.
What's the adversary's goal behind that?
And the good news is if you're doing enterprise ATT&CK, you only have 12 options to choose from.
So your odds are pretty darn good.
Think about what their goal is.
And if you're using finished Intel like a blog post, a lot of times that'll give you a hint about what goal the adversary might have.
So let's take a look at the snippet there.
OK, establishing a connection, issuing some commands.
So to me that sounds like command and control.
So we've got our tactic, we're doing good.
Next, techniques.
What technique applies?
And this can be one of the toughest parts because there are so many of them, over 200, 244 right now.
Important to remember, not every single behavior you find is going to be a technique.
We've chosen a certain path, a certain set of techniques.
That said, if you feel like something's missing, email us at attack@mitre.org.
We're constantly adding in new things.
Again, we don't have all the visibility.
So, some strategies: if you know the tactic, great, it's command and control.
Let's pull up that tactic and look at the techniques under that.
Also, keyword searches.
Sometimes you'll get lucky and you'll find exactly what you're looking for.
So some ways that you can help out the analysts who are trying to do this.
So for example, let's do what I said.
Let's bring up that command and control page, 21 techniques, and start looking through.
This is part of knowing ATT&CK.
We see this custom command and control protocol, commonly used port.
So maybe that tells us something about the structure of ATT&CK that we divide out protocol and port.
Which we do because sometimes adversaries use non-standard ports for certain protocols.
So we do divide out port and protocol.
That's important to know.
So maybe we're actually looking for two techniques for this command and control behavior.
So let's try what I said.
Let's try a keyword search.
Of course it works perfectly because it's a demo.
But sometimes this will get you what you're looking for.
So we try searching for SOCKS, and a couple of techniques pop up.
We look at the Standard Non-Application Layer Protocol technique.
And there we go.
Such as Socket Secure SOCKS.
And then we look at different examples.
We have this malware called BUBBLEWRAP that can communicate using SOCKS.
So we found our technique.
Standard Non-Application Layer Protocol.
But we're not done yet.
There was that other part.
That port.
That weird one that I'd never seen.
And no one raised their hand willing to admit that they've seen it as well.
All right, we search 1913.
It didn't pop up, dang.
What do we do now?
Let's go back to that list, that command and control technique list, and let's do Control F for the win, here.
Let's just search for port.
So we've got three options there.
We've got Commonly Used Port, Uncommonly Used Port, and Port Knocking.
So between the three, I'd never heard of it, none of you all had ever heard of it either, so I'm going to go with Uncommonly Used Port.
So what we've done there is map the behaviors that we found.
For the SOCKS5 connection, command and control and Standard Non-Application Layer Protocol are the tactic and technique.
For TCP port 1913, it's command and control and Uncommonly Used Port.
So two techniques there.
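The keyword-search step walked through here on the website can also be scripted against the same public STIX bundle. A rough sketch, assuming the bundle URL used above and nothing smarter than a case-insensitive substring match over technique names and descriptions:

```python
# A sketch of the keyword-search step in code: a case-insensitive substring match
# over technique names and descriptions in the public STIX bundle (the URL is an
# assumption, as above). Not smarter than Ctrl+F, just scriptable.
import requests

ATTACK_URL = ("https://raw.githubusercontent.com/mitre/cti/"
              "master/enterprise-attack/enterprise-attack.json")

objects = requests.get(ATTACK_URL).json()["objects"]


def search_techniques(keyword):
    """Yield (ATT&CK ID, name) for non-revoked techniques mentioning the keyword."""
    kw = keyword.lower()
    for obj in objects:
        if obj.get("type") != "attack-pattern" or obj.get("revoked"):
            continue
        text = (obj.get("name", "") + " " + obj.get("description", "")).lower()
        if kw in text:
            attack_id = next(ref["external_id"] for ref in obj["external_references"]
                             if ref.get("source_name") == "mitre-attack")
            yield attack_id, obj["name"]


for term in ("SOCKS", "port"):
    print(f"Techniques mentioning '{term}':")
    for attack_id, name in sorted(search_techniques(term)):
        print(f"  {attack_id}  {name}")
```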
Should also note that, and I probably should have told you this upfront.
We put our slides on SlideShare.
So sorry folks who are taking pictures.
You're welcome to.
It's fine.
But we'll put them on SlideShare so you don't have to do that.
So rinse and repeat like a good shampoo and conditioner.
All right, successful exploitation.
Call that Exploitation for Privilege Escalation.
Command line, whoami.
That's two techniques, Command-Line Interface and System Owner/User Discovery.
A lot of times, execution will happen along with other techniques that come together.
Next one, creating persistence.
Well, that's easy.
There's the Persistence tactic and then Scheduled Task.
This is an easy pairing.
Scheduled Task is a technique name.
All right, pass it off to Adam to dive in more.
ADAM PENNINGTON: So Katie's gone through a little bit of the process of how we actually go through threat intelligence reporting and turn it into these techniques.
And so all of the information that I was showing earlier, in terms of the procedure examples in techniques and the techniques used by software and groups, has gone through something like this process.
Going through by hand, by human, through these textual threat intelligence reports and figuring out what the behaviors are.
We've had people doing this for quite a while, led by Katie for the past five years, and they've now gone through hundreds of reports and done this type of activity.
And so there's actually now a lot of data that's potentially useful on our site for what different groups do and what various pieces of software are able to do that are out there in the wild.
But the key thing here, and Katie mentioned this earlier.
Is that all of the intelligence in here only comes from freely available public reporting.
And so that's great.
It means that you can go back and check our sources.
It means that we keep ourselves safe.
But it does lead to some issues.
So there are definitely biases in the data that we've actually put up on the site.
And as threat intelligence analysts, it's really important for us to understand our own biases and the biases of the data that we're working with.
And so we want to talk about those a bit so that we can give a better idea so that people can actually start working with this data.
And so we think that there are two types of bias that are in these technique examples that are in here.
So first off, there's the bias introduced by us.
In the way that we actually find the reports in the first place and this mapping technique itself.
So we're introducing some into it from there.
But there's also bias inherent in the sources we use.
Not everything gets reported, and not everything gets reported evenly.
And I'll get into some of the more specific reasons why.
And so it's really important to understand these before you start doing grand big statistical analysis on this data and trying to say things about what actors do.
To start with, one we can really quantify in a solid way is the sources we select.
So if you look at what's out there and what we've been able to pull in to the groups pages for publicly available threat intelligence reporting, the vast majority of it is coming from security vendors.
And that's not unexpected.
Now there are some outliers in there.
Occasionally there's really good reporting in technical press outlets.
There might be one Wired article that's got a couple of good behaviors in it.
Or there have been a number of indictments over the last couple of years that have had some really great behavioral information.
And so we use those too and they come along.
But it's definitely skewed towards security vendors.
So we have availability bias, or also known as availability heuristic, as we're looking through these reports and trying to find the techniques in them.
So if you want to think about this in a non cyber sense, oftentimes people are really concerned about rare events, like getting eaten by a shark or an airplane falling out of the sky.
Which are actually super unlikely.
But they're at the front of people's minds, because they've heard about them in the news; it's just something that's come to their attention more.
We're somewhat similar when we start mapping these behaviors.
The techniques we've seen, that we've heard about more, that we've seen more recently, are at the front of our minds.
And we're a lot more likely to actually find those as we're going through the behaviors.
With there being so many techniques in ATT&CK now, it's hard for everything to be at the front of our minds equally.
So I'm a bit of a beer snob-- KATIE NICKELS: It's true. He really, really is.
ADAM PENNINGTON: So I walk into a bar and look at a row of tap handles, and I'm often looking for that thing that I've never tried before.
I'm interested in the new experience, getting that new entry in Untappd, if you use it.
I'm looking for the novel.
And we're not all that different when we're picking out threat intelligence reports.
And so the example I've got here, ATP using transmitted data manipulation.
So transmitted data manipulation is a new technique in the impact tactic.
We've only seen one actor in the wild that we've been able to map into ATT&CK at this point.
So somebody puts out a report on a new actor that's doing this technique, there's a really high likelihood that we're going to bring it in.
Rather than somebody publishing their FUZZYDUCK using Powershell report.
And so we're definitely biased in what's more interesting to us as we're looking at reporting.
So I talked about the vast majority of our data coming from security vendors.
And so that leads to biases in the sources we use as well.
So availability bias is not just a thing for us, but it's going to be a thing for the people that are creating the reporting in the first place.
So first, they've got a similar bias in that they've done many incident responses before; there are behaviors they've seen from adversaries before.
And so when they spot that behavior again, or something that looks like it, in an incident, they're probably going to be more likely to recognize it, and it's going to come out in the finished reporting.
This can be similar with attribution.
So we're working with other people's reporting in terms of what groups they say are behind things.
An example of this that has been happening a lot lately is APT10.
There's been a lot of reporting lately where if it's generic Chinese activity and it has any overlapping activity at all with APT10, it must be APT10.
Well, I'm sorry, it's not always APT10.
And so there's definitely biases in what gets reported based on what they've seen in the past.
Novelty bias hits our reporting sources at the end of the day, too.
So people are putting out threat intelligence reports that cost them money to create, and putting them out for free.
At the end of the day, it's probably marketing for most people.
And so that's natural.
And a lot of the reporting is really good that we're bringing in from that.
But there is some motivation behind it.
And so there's definitely a motivation that we see for people to put out the APT1338 report, rather than the fifth in a series of APT1337 reports.
And again, an APT10 example.
So people who track APT10 say that they've been around continuously, acting pretty much constantly for the last 10 plus years.
But if you look at the public reporting of APT10, you see multi-year gaps where nothing came out, nothing's being spoken about them, until they do something really interesting like break into a bunch of managed service providers.
And so you've got these time periods where an actor might be novel, might not be novel, or where it's a new actor.
There's a victim bias.
So it might be more interesting to write a report, or a report might be more likely to get out, about certain victims rather than others.
And each of those victims is going to have their own biases in what they can see.
So for example, if you've got a report on someone who broke into a power plant or somebody broke into MITRE, it might be more interesting to report on the power plant.
A lot of the incident response companies also do go back to the victims and ask if they can write a report on activity, and some victims are more likely to say yes than others.
And all this contributes to the information that's coming out not being even.
And so one of the reasons why it's not even across these victims is visibility bias.
So especially if you're coming in and doing incident response, you're not going to have visibility from before the moment you stepped in, past what sensors were already in the environment, forensic information, the information that's there waiting for you.
And you might be missing out on some of the richest sources of behavioral data.
So decoded command and control traffic is a goldmine for behavioral data.
Watching over an adversary's shoulder and actually seeing what they did, what they got for a response, how they reacted to that, is beautiful for translating to something like ATT&CK.
But you're not going to have it in all cases or in all environments.
Finally there's production bias.
It's just a simple fact that some sources put out more reports that have lots of behaviors that we're able to use than others.
And so whatever their other biases are in terms of availability, visibility, et cetera, come into how our final reporting looks.
But this all sounds pretty doom and gloom.
But there's still a lot you can do with it.
So I'm going to turn it over to Katie to talk about how you solve these.
KATIE NICKELS: He gives you the doom and gloom.
I give you the bright, shiny, the more you know.
So the good news is, yes, OK, we all have biases.
We all have different perspectives.
And that's fine, we're human.
We've talked you through many of them.
Adam did.
We can overcome them.
And one key way, again, is the more you know: know these biases are in the data, and apply this to the rest of your threat intel as well; know those biases you have.
Know that your team is subject to them as well.
So call them out and be honest about them.
If anyone claims that they have full visibility of all adversaries and their ATT&CK mapping is perfect, they're lying.
None of us have full visibility.
So be honest.
Ask people who are mapping to ATT&CK, where do you get the data, what are your sources?
And be honest in your own biases as a team when you're putting out reporting whether it's mapping to ATT&CK or not.
How do we hedge those biases? OK, we know them, and that helps a little bit.
But another key way is working together.
On our team, whenever we map a report to ATT&CK, we try to have at least two analysts look at that report and do the mapping.
Adam talked about why that is.
Visibility bias.
I'm more likely to see the techniques that I've mapped over the past years of doing this, whereas a newer analyst maybe notices a new technique.
Adam was instrumental in adding the Impact tactic, so maybe he's more likely to see those techniques.
That's fine.
It's great.
We all have our own perspectives.
And this is part of the importance of having diversity in a team, diversity of thought, diversity in all aspects.
Helps with ATT&CK mapping as well.
Adjust and calibrate those sources.
Adam talked about how he had a lot of vendor reporting.
Maybe you hedge and you add in your own data sources.
You have awesome incident response data, some juicy command line output from an adversary.
Add that in, because that visibility might be different than what a vendor has, or an email-focused vendor, or the government, which puts out these public indictments.
We all have different visibilities.
So calibrate.
If you're really heavy on one, realize that.
Bring in some other data sources.
And lastly, remember, this isn't perfect.
What we're doing here, we're prioritizing the known over the unknown.
So without this or honestly without any threat Intel, we don't have any information.
So even if this type of mapping isn't perfect, it's something you know over something you didn't know before.
So remember that as you're doing this as well.
I made you sit through all of that because a lot of people come to us and they say, what are the top techniques adversaries are using?
And we try to explain, well, it's imperfect data.
And they're like yeah, yeah, yeah, just get to the good stuff.
What techniques should we care about?
So everything we said, OK, keep that in mind.
These are the top techniques based solely on publicly available reporting, which is imperfect.
So again, we're not saying this is an absolute.
We are saying, though, that we found hundreds of different examples of adversaries in the wild using these techniques.
So if you have nothing else to start from, maybe not a bad place to start.
And what's really great is other teams, other vendors you might have seen over the past months, are releasing their own top techniques: what they're seeing most frequently.
So that's a way you can say, OK, well, maybe there are certain techniques listed more than others.
Maybe a lot of these, a lot of discovery, a lot of execution-type techniques, well, are those all from vendor public sources?
What else can we adjust here?
What can we calibrate on?
Maybe we can bring in our own data as we talked about.
You also might look and say, why are certain things missing like Spearphishing?
I know that's really, really common.
Well, it turns out that Spearphishing Attachment and Spearphishing Link were added to enterprise ATT&CK a little bit later, and so we didn't go back and map that historic reporting.
So know some limitations of ATT&CK, some of the nuances, and then work with different data.
Maybe there's something from a vendor, maybe you've mapped your own historic reporting, or data from a tool.
So you can start to hedge and realize there are biases.
But this is a great place to start.
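One way to sanity-check a published "top techniques" list, with all of the biases just discussed, is to count the public mappings yourself and then compare that against your own data. A sketch, again assuming the public STIX bundle, that simply counts how many group and software "uses" relationships point at each technique:

```python
# A rough "top techniques" count from the public mappings, carrying all of the
# reporting biases discussed above: it counts how many group and software "uses"
# relationships in the public STIX bundle point at each technique.
from collections import Counter

import requests

ATTACK_URL = ("https://raw.githubusercontent.com/mitre/cti/"
              "master/enterprise-attack/enterprise-attack.json")

objects = requests.get(ATTACK_URL).json()["objects"]
by_id = {o["id"]: o for o in objects}

counts = Counter()
for r in objects:
    if (r.get("type") == "relationship" and r.get("relationship_type") == "uses"
            and not r.get("revoked")
            and r["target_ref"].startswith("attack-pattern--")
            and r["source_ref"].split("--")[0] in ("intrusion-set", "malware", "tool")):
        counts[by_id[r["target_ref"]]["name"]] += 1

print("Most-cited techniques in the public group/software mappings:")
for name, n in counts.most_common(10):
    print(f"{n:4d}  {name}")
```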
ADAM PENNINGTON: So we wanted to finish with some of what you might be able to do as threat intelligence analyst to actually make recommendations from this, leveraging ATT&CK and leveraging some of this threat intelligence that you've been pulling in.
So we wanted to give a process, a way of actually working through this data and some of the resources in ATT&CK, to make recommendations.
And to start with, again starting at zero, determine your priority techniques.
We've shown a number of ways through this talk that you might be able to do that.
Whether it's focusing on a specific group you care about, maybe APT28's techniques.
Maybe you care about Russia in general, so it's that combination.
Or maybe you just want to start with something that's popular in general, like the data that Katie just showed on the screen.
Or maybe it's from your own red teaming or other data.
So this can be for a lot of different sources.
Research how techniques are being used.
So those procedures, those actual specific command lines can be super important in creating defenses.
If I create a new defense or buy a product and it doesn't cover the specific way that the adversary I care about is doing a technique, it's not going to stop anything.
So it's important to actually pull together that information and not just the fact they're doing a behavior.
Research the defensive options related to the technique.
So this is where working with ATT&CK can let you leverage a lot of existing resources.
So within ATT&CK itself at the beginning, I showed you some of the stuff in terms of looking at mitigation that we've listed, looking at detections, looking at which data sources to actually use.
But there are a number of other existing resources out there that actually link back to ATT&CK.
We've created our own set of analytics called the Cyber Analytics Repository, or CAR, that you can go into.
There's another analytic repository called SIGMA, that has a lot of information out there.
And there's a number of other projects like Detect and others in GitHub that can get you information, already linked to ATT&CK technique IDs.
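A quick way to see how much of that existing analytic work already maps to your priority techniques is to scan a rule repository for ATT&CK technique tags. The sketch below assumes a local checkout of Sigma rules at a hypothetical path, rules tagged with the common "attack.t####" convention, and PyYAML installed; all of those are assumptions to verify against the repository you actually use.

```python
# A sketch of turning an existing analytic repository into ATT&CK coverage.
# Assumptions: a local Sigma checkout at RULES_DIR (hypothetical path), rules
# tagged with the common "attack.t####" convention, and PyYAML installed.
import re
from collections import defaultdict
from pathlib import Path

import yaml

RULES_DIR = Path("sigma/rules")  # hypothetical path to your Sigma checkout
TAG_RE = re.compile(r"^attack\.(t\d{4}(?:\.\d{3})?)$", re.IGNORECASE)

coverage = defaultdict(list)  # technique ID -> titles of rules that reference it
for rule_file in RULES_DIR.rglob("*.yml"):
    for doc in yaml.safe_load_all(rule_file.read_text(encoding="utf-8")):
        if not isinstance(doc, dict):
            continue
        for tag in doc.get("tags") or []:
            match = TAG_RE.match(str(tag))
            if match:
                coverage[match.group(1).upper()].append(doc.get("title", rule_file.name))

for technique_id in sorted(coverage):
    print(f"{technique_id}: {len(coverage[technique_id])} rule(s)")
```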
So research your organizational constraints.
So it's important to have an understanding of the defenders and what they can do and not do.
So knowing that our defenders are absolutely not going to be able to do an EDR solution, because we don't have the budget or the management resources for it, can sort of help steer what you're actually going to recommend.
Determine what the trade offs are for specific options.
So go through the different options that you've gathered over the course of this process, looking at what different options I have, what analytics I might go create, and which of those are actually going to work with my process.
And then finally, make recommendations.
And so you might be thinking with the recommendations, though, that the only thing I can do is create a new detection or stop something from happening.
But there are actually a number of ways that this can be done.
And so this is a notional idea of what an enterprise might look like in terms of analysis: things that we can detect today, things we can't.
And the things in orange are, notionally, what might be high priority.
And I should caution, as I see people taking pictures, this is completely fake notional data.
This has no meaning whatsoever.
KATIE NICKELS: He does deception, so what do you expect?
ADAM PENNINGTON: So I talked about doing a detection, but we might have other things.
So say we're really concerned about Spearphishing.
We know that our adversaries do different types of Spearphishing, specifically attachment and link.
And so maybe our boundary detection is already pretty good there.
We bought our appliances, they're working right along.
And so our recommendation is to do more user training.
Trying to get people not to click.
User execution is a really popular technique.
So there might be instances where something concerns us and keeps us up at night.
But it might not be the right thing to do to try to stop it.
So something like Supply Chain Compromise or compromising component firmware.
Depending completely on your organization, it may be something too sophisticated, with actors too sophisticated, for you to realistically have any chance of stopping if an adversary is going to go down this path.
And so the proper recommendation may be, accept the risk.
So document it, write it down, and move on.
And finally, something a little bit more obvious: maybe there's a gap that we've got.
And so maybe we can't just do a new detection or a new analytic, or turn on some feature.
We might actually have to go out and buy something.
ATT&CK can be a way to help evaluate some of those products as you're looking at them as well.
So we're hoping that over the course of this you've gotten an idea of how ATT&CK can help you communicate your threat intelligence, and how you can compare behaviors between different types of activities, between different actors, and over time.
We tried to get into some of the biases that are in the data we provide, so that you can make better use of the things that we've put out there for free.
And then some advice for how to hedge those biases and use this data to improve your defenses.
So as Katie mentioned, slides are up at SlideShare through the magic of the internet.
They should have been released while we were up here speaking.
So we will find out as people actually try to get at that link.
I'll just leave that up here on the screen.
So absolutely feel free to reach out to us or the team on Twitter; that's how to get in contact with us.
KATIE NICKELS: Thank you all so much for coming and have a great evening.