Building a Security Response Hot Rod with Threat Intelligence: Detect '18 Presentation Series | Anomali


Building a Security Response Hot Rod with Threat Intelligence


I'm Travis Farral.

I'm the director of security strategy with Anomali.

And the purpose of this chat is to talk about how to approach things like automation with regard to threat intelligence in a hopefully very smart and intelligent way that makes sense for the business.

My cohort here-- I'll let him introduce himself.

Morning, my name is John Kitchen, sales engineering with Anomali.

Been here about six months and met Travis as I first came on board.

And I said, hey, let's collaborate on some things because I read some of Travis's articles.

And I'm excited to be able to present to you all today.

Thank you.

Yeah, awesome.

So let's jump in.

So the first thing is there's some very popular terms that are in the cyber security space in general.

Orchestration is a big thing, automation-- what's the difference between those really?

The way I like to describe it is automation is something that is very simple.

It's if this, then this.

That's automation.

If a human being had to do that, we were able to take the human being out of it by saying every single time we see this, we're just adding this to it or doing something with it.

Whereas orchestration requires a little bit more decision-making.

It's more of a process flow.

If this, then this.

Oh, and if it's that, then send it over here to this.

But if it's this, then do this with it and maybe make another decision.

It becomes more of like a decision tree.

So it's almost replacing a little bit of human analytics with some levels of automation.

It's like taking automation to the next level.

That's the way I look at the difference between automation and orchestration.
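The distinction can be sketched in a few lines of Python. This is purely illustrative -- the rule, the alert fields, and the routing destinations are made up, not part of any product:

```python
# Automation vs. orchestration, as described above.

def automate(indicator):
    """Automation: 'if this, then this' -- one fixed action, no branching."""
    return f"blocked {indicator}"

def orchestrate(alert):
    """Orchestration: a small decision tree routing an alert to a next step."""
    if alert["severity"] == "high":
        return "open incident ticket"
    elif alert["type"] == "phishing":
        return "send to email triage"
    else:
        return "log and close"

print(automate("203.0.113.7"))   # the same action every single time
print(orchestrate({"severity": "high", "type": "malware"}))
```

The first function always does the same thing; the second starts to replace a little bit of human decision-making, which is the "next level" being described.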

What we're talking about this morning, though, is really just automation.

We're not trying to get into something super complicated.

But we're trying to address the fact that we all live in environments where we have lots of tools, we have lots of input sources, we have lots of logs and other types of data.

And pretty much none of it talks to each other.

So we're talking about how to get value out of connecting those things together, hopefully taking some of the bottlenecks in our processes out, and hopefully being able to improve efficiency and accuracy along the way.

So hopefully that makes sense, and we'll move on.

And come along with us.

If you have any questions, throw them out.

We're happy to answer them as we go.

So up here are just a few bullet points that I want to touch on briefly, and then I'm going to actually expand on them a little bit in the subsequent slides.

So some pretty key things and, again, it's just text on a screen right now.

Just key bullet points is getting those events out of your devices, your security devices.

And what does that mean?

Your SIEM events, your log entries, and all your other security tools around in your network.

So that's just one key piece.

Another thing I'm going to touch on a little bit here is adding context and enrichment to those.

It's a critical aspect of getting the logs and being able to see what you're getting and to be able to correlate that data and then associate that with other things like threat intelligence and then to be able to apply specific actions accordingly.

So if you have those bad IPs, bad domains, bad hashes and things like that, what are we going to do?

And then, to Travis's point, how do we automate some of those processes?
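The "bad IPs, bad domains, bad hashes" step can be sketched as a simple indicator match. The indicator set and event shape here are invented for illustration -- in practice these would come from your threat intelligence platform and your SIEM:

```python
# Hypothetical sketch: match events from your security devices against a
# set of known-bad indicators, then decide an action automatically.

BAD_IPS = {"198.51.100.23", "203.0.113.7"}   # would come from threat intel

def triage(event):
    """Tag an event for action if its destination hits threat intel."""
    if event["dst_ip"] in BAD_IPS:
        return {**event, "action": "block", "reason": "threat-intel match"}
    return {**event, "action": "ignore", "reason": "no match"}

events = [
    {"src_ip": "10.0.0.5", "dst_ip": "203.0.113.7"},
    {"src_ip": "10.0.0.9", "dst_ip": "93.184.216.34"},
]
for e in events:
    print(e["dst_ip"], "->", triage(e)["action"])
```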

If you saw the keynote, you'll have seen that one of the comments from one of our customers is that getting Anomali ThreatStream and Enterprise effectively freed up two FTEs.

Now, I don't know everybody's salary.

But you can think about that from a business value.

Two FTEs-- that's a lot of money, even if you're a tier 1 analyst.

So that's one of the things we're going to try and do here is talk about the different sources, the events, the enrichment, the tools that we need to use to automate.

So just real quick, I want to talk about that.

One of the things that we also brought up was-- of course, you know that we're API-driven.

And so with the release of our new SDK packages, that's going to be a huge increase in value and streamlining a lot of integrations.

So over here on your left, you're going to see the different types of alerts, so SIEM, ticketing, help desk, UEBA.

These are all critical components of the logs that you need to gather from-- or you can get alerts from, excuse me.

So depending on what you have-- you might not have all of these things.

You might have a subset of them.

But how do you get the value out of that?

How do you bring that in?

UEBA is a new one.

It's up-and-coming.

It's only been around a few years, not too long.

And some people just see UBA, which is user behavior analytics, versus user and entity behavior analytics.

So you have to look at both, the users and the entity, entity meaning anything that's on your network.

It could be a server.

It could be a printer.

It could be a workstation.

It doesn't matter-- so bringing that into the brain, into the collective.

Now, that's great.

You've got a bunch of logs.

So what does that mean to you?

Does it mean anything?

So it's hard to really just decipher everything from just a simple single-source log.

Because a failed login attempt, as an example-- well, what does that mean?

Well, probably nothing.

What about three failed login attempts?

Well, on a Monday morning at 8:00 or a Friday morning after the party that we threw last night, it may not mean a whole lot either.

Because some of us may be a little gray this morning.

But when you have those three failed login attempts followed by a subsequent successful login attempt-- all right, that's starting to look like something.

That could be more important, right?
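That correlation -- several failures followed by a success -- is simple enough to express in code. This is a sketch with an assumed event format, not a real SIEM rule:

```python
# Flag users whose successful login comes right after a run of failures.
from collections import defaultdict

def suspicious_logins(events, threshold=3):
    """Return users with `threshold`+ failed logins followed by a success."""
    fail_streak = defaultdict(int)
    flagged = []
    for ev in events:                     # events assumed in time order
        user = ev["user"]
        if ev["result"] == "fail":
            fail_streak[user] += 1
        else:                             # a successful login
            if fail_streak[user] >= threshold:
                flagged.append(user)
            fail_streak[user] = 0         # streak resets either way
    return flagged

evts = [{"user": "alice", "result": "fail"}] * 3 + [{"user": "alice", "result": "success"}]
print(suspicious_logins(evts))   # ['alice']
```

A single failure or three failures on a Monday morning stay quiet; the failure-then-success pattern is what surfaces.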

So how do you bring all that together?

And that's where I'm talking about some of the SIEM pieces, the UEBA, maybe even EDR, depending.

So then when you're doing that, start enriching that data.

So threat intelligence, CMDB, compliance, HR systems, things like Active Directory-- what's normal?

What's not?

So we logged in this morning-- that's pretty normal.

But when somebody starts logging in on Saturday morning and that's not normal, well, that's something we should probably look at.

And so when you start enriching that data, that's going to show additional context to the logs that you're getting.

The other point I was going to make there is when someone is logging into-- again, on Saturday morning-- and they're logging into systems they shouldn't be accessing-- why am I, from sales engineering, logging into an HR system?

That's not normal.

I shouldn't be doing that.

Why is that happening?

So again, being able to understand that enrichment, whether it's for vulnerability assessment or what have you, but pulling that enrichment in and then being able to apply that to your logs to build out that picture-- then what?

So then we're going to come over here to the response tools.

So now you've got all these integrations between all of your sources in your enrichment.

Now what are we going to do?

How are we going to execute that?

And so with the APIs and now our new SDKs and some of the new tools that we're coming out with, we're able to push these things down to your firewalls, your EDR, your AD, NAC, and things of that nature.

So we can automate and streamline some of those through the SDK process.

So once we have that in place, we can then execute.


So far, so good.

OK, so taking all these components that John covered, that we all have some subset of in our environments, what are we trying to do?

Basically, start with what the SOC is doing.

What is it that they're reacting to, and what do they typically do?

If they are always going to WhoIs to look up any external domains or IP addresses that they see, if there's a way that you can automatically do that for them instead of forcing them to have to go to another tab or another system to be able to do that-- that's a very simplistic example.

But that's the kind of stuff that, you know-- why are you doing that?

I see that you just opened up these other tabs.

What are you doing here?

And how long does that normally take?

Oh, you do that on every ticket?

Oh, OK, just the tickets that fall into this category, you do that with.

You do other stuff for other things.

OK, that's interesting.

Well, look at what they're doing.

And figure out, OK, well, how could we automate some of these things very simply?

If it's as simple as: the ticket matches this category, then automatically go grab this enrichment -- just do that automatically so that they're able to more quickly and efficiently work through those tickets and do their job.

There's certain things -- like if you know the process in your environment is that you're going to block things that come from here and meet a certain criticality or threat level, then instead of forcing the human beings to do all that stuff, just automatically send that stuff to the firewall and have it blocked.

Figure out what APIs or whatever you can take advantage of to do that.

If you can pre-populate tickets with the details they're going to need to be able to work through that stuff, that's even better.

So as the ticket comes up, it's already got the WhoIs stuff in there.

It's already got passive DNS.

It's already got fill in the blank-- whatever they need to work through that ticket.
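A pre-populated ticket like that might look something like this sketch. The `lookup_whois` and `lookup_passive_dns` functions are stand-ins for whatever enrichment services you actually have, and the ticket schema is invented:

```python
# Illustrative ticket pre-population -- enrichment attached before the
# analyst ever opens the ticket.

def lookup_whois(domain):
    # stub -- in practice, call your WhoIs service here
    return {"registrar": "Example Registrar", "country": "BE"}

def lookup_passive_dns(domain):
    # stub -- in practice, query your passive-DNS provider here
    return ["198.51.100.23"]

def prepopulate(ticket):
    """Attach WhoIs and passive-DNS context to matching ticket categories."""
    if ticket["category"] == "suspicious-domain":
        d = ticket["domain"]
        ticket["whois"] = lookup_whois(d)
        ticket["passive_dns"] = lookup_passive_dns(d)
    return ticket

t = prepopulate({"category": "suspicious-domain", "domain": "chase-login.example"})
print(t["whois"]["country"])   # BE
```

Because the data lands in the ticket automatically, nobody is copy/pasting between tabs -- which is also where the accuracy benefit comes from.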

And share data between applications.

Don't make them have to copy/paste.

If they're copy/pasting, figure out, all right, how can we make this talk to that so that we just send that data over there and it's automatically there?

That helps you with accuracy too so that they don't accidentally copy and paste the wrong thing or paste it into the wrong field or something like that.

You don't want to accidentally block what was the source instead of the destination and now you've created an operational problem.

So there's other benefits to doing a lot of this stuff.

The key here is don't try to do something that's too complex.

If it's not as simple as just taking this and putting it over here automatically, that's not what we're talking about this morning.

And we all know that humans are required for a lot of decision-making that machines just can't do well.

But let's try to make the humans as efficient as possible by getting the right data into their hands and avoiding them doing repetitious tasks that could be easily done with machines.

So kind of to Travis's point, simplifying the automation process is going to be critical.

But identifying the process or having an established process is going to be critical to do that.

So here's just an example workflow and how we might work through that.

So your analyst or your hunter-- or probably your analyst, at this point-- is going to identify something, right?

Something comes in of interest.

They're going to want to triage that, sift through the noise, figure out what's important.

Next, they're going to want to isolate that.

So what is going on?

So I'm going to take that one piece and look at it, and I'm going to begin my hunt process and associate what I need to associate with it but without having a ton of noise with it.

So you're getting rid of that and just being very concise about what you're looking at is going to be important.

Then alert that.

So you're going to then say, hey, we have something here.

This is of interest.

We need to pay attention to this.

And then identify what the criticality of that is, and that's where risk scoring comes in.

So one of the things that we do, as an example, with Anomali Enterprise is we integrate with vulnerability scanners.

And so we take your risk and let you guys assign that, of course.

That's your business.

That's not our business.

But then identifying how important that is-- so how quickly are we going to react to this?

Is it something as widespread as Petya?

Or is it something where somebody got phished, and it's a simple malware thing without the risk of some sort of major outbreak?

From there, like I talked about on my previous slide, is adding that additional context.

We're going to add the enrichment.

Who is it?

Where are they from?

What's the vector?

Is there an actor associated with this, or is this part of a campaign?

Too early to tell, but when we get to the next step when we start doing the investigation process, we'll identify that.

But again, adding all that extra context to it to figure out what it is I'm actually dealing with.

Then we're going to move on to take action and do the investigation.

What is going on here?

Is it an actor?

Is it a campaign?

And then trying to figure out-- look at that TTP.

Is there an associated TTP?

Is there an associated CVE with this, depending on what we're looking at here, the alert?

And then escalate as necessary.

So oftentimes, your tier 1 analyst is not going to have the rights, permissions or empowerment to take complete action on that.

They may have to escalate that to a tier 2, a manager, an engineer-- whatever the case may be.

And then once you've eradicated or dealt with the threat, you want to report on that so that you learn from that.

We don't want to repeat these steps again.

So the key there is learn about it, enrich it, investigate it, and then take action to resolve it and then report on it.

So if anybody's got a military background, you know AAR-- After-Action Review.

What are the lessons learned from this?

What did we do good?

What did we not do so good?

And how do we learn from this?

And so is it something as simple as establishing an [INAUDIBLE] policy, or making sure everybody's AV is up to date, or the patches -- those back-end processes that should be taking place but don't always take place, like configuration management and change management? Employing all those controls ensures that you've learned something from this event.

And so I'm going to move into some data sources, so just a couple of slides to kind of recap on the slide with the brain, if you will.

As you know, there's a ton of sources to detect.

And a lot of these were on that previous slide.

And so what you really need to figure out is, what are you going to get the best bang for the buck for?

Not every organization can afford to have not only these devices but then the resources to manage them.

Or do you have that one resource who is the firewall guy, who's the SIEM guy, who's the IDS guy, who's the switch guy?

It's not effective.

So you've just got to figure out, what are you going to make use of to maximize your goals?

I'm going to talk about goals, here, in just a little bit.

Malware sandbox-- that's an important tool.

But again, not everybody can do that.

And there are some free ones out there, like Cuckoo and things like that.

But how effective are they for your organization?

We've partnered with Joe Security, just as an example.

So with ThreatStream, as you know, if you're a customer, you already get that.

So that also helps.

Threat intelligence matches-- that's our business.

That's the TIP -- the threat intelligence platform.

That's what we're doing for you as our customers.

And then just a couple other sources of logs.

So then sources of context-- so Travis has talked about this a little bit.

I've talked about it a little bit-- enriching that data.

So what are we going to do here?

So WhoIs information, so who are we dealing with?

Where are they from?

So it's really simple.

So when you see a URL that looks like a Chase domain, and it's based in Belgium -- well, you know what, just basic WhoIs information shows you where that URL is registered.

So when I see a Chase domain based out of Belgium, I go, time out.

Aren't they based in Chicago or New York? I'm not sure -- I think it's Chicago.

And it's registered with a Gmail address. Registered with a Gmail address -- it's just simple things like that.

Passive DNS kind of goes along that same thing.

Those recursive DNS lookups -- as long as people aren't doing any spoofing out there, you're able to see that history.

Active Directory, vulnerability platforms, AV-- all these sources of context.

How do you apply that into your threat model?

And how do we automate some of this stuff?

And so there are some tools out there to help do that.

And that's where Travis was talking about automating some of these processes to streamline this.

That's where we want to get to.

And that's where the whole SOAR thing comes into place.

Although we're not doing too much of the O.

We're talking about the A.

But these are critical components that we, Anomali, are trying to strive for.

And you're going to see that in some of the new SDK packages coming out.

Yeah, and I can add some stuff too, here, from just my experience in previous enterprise environments.

Sometimes having the context of, what the heck rule was that in the IDS that fired-- is there a way for us to get more information about that directly into the ticket so that when the analyst sees it, they know exactly how to react to it?

Oh, that's that CVE that came out last week.

This is pretty fresh.

I don't think we're patched against that.

Especially if we can add vulnerability scanning data directly to that-- this was a match against something.

Maybe we need to automatically have the vulnerability management stuff in there for that host just so that the analyst, when they see it, they can recognize, oh, this doesn't even apply because that software is not even installed on that host.

It's a Linux box, and that's a Windows exploit.

That's an easy one to discard.

So we've just made it very easy for the decision-making on the part of the analyst to be very efficient.

Same thing with if we have context that we can get from web assets.

This event happened here.

Here is Active Directory information about the user, the host, the machine.

Here's other details that we know about the activity going out.

Well, that's a weird user agent string.

Maybe our web application firewall or proxy is able to provide that information.

That's not a typical browser.

That's obviously coming from some kind of application or software.

Maybe this needs to be looked into a little bit deeper.

So giving that extra context can be something-- if you can automatically apply that to certain types of tickets that come in, that's a great way to leverage this kind of information.

Oh, and historical security events -- if there's a related ticket, like, oh, there have been 37 tickets this morning similar to this one, that's additional context for the SOC analyst as they're looking at it.

Like, that's interesting.

We may be having an outbreak of some kind.

Or this user was impacted with something strange yesterday, and now I'm seeing this today.

Being able to have that context not only makes their decision-making more accurate and better, but it also maybe helps surface something that may have gotten missed otherwise.

They may not realize that somebody else dealt with that-- that same user yesterday was something that was kind of weird as well.

So just throwing those things out there.

So tools-- we just threw this in here just to point out that you can do this by hand.

You can do this yourself with Python.

You've got APIs and SDKs.

A lot of applications nowadays, a lot of security tools have some kind of API available maybe for purchase.

Some of them charge extra for being able to use APIs on them.

But there's things like TheHive out there that help.

If you don't have one of the commercial tools that's more around orchestration to do this kind of stuff, it doesn't matter.

You can do these things sometimes just with a little bit of scripting and take advantage of those APIs and SDKs.

It's certainly a lot easier if you get the commercial platforms, but they're definitely not required at all.

All right, so let's talk about, where does the rubber meet the road in all this chatter that we've had so far this morning?

What is it that we're talking about being able to do?

I've already hit on some of this.

If we know that this is a high-fidelity alert that comes in from this particular tool -- an example would be -- this is something we used to do.

We get a lot of noise out of our antivirus system.

A lot of that stuff that comes in is just, hey, it automatically blocked something.

OK, whatever, not really that interesting, and it happens all day long.

Like, we can't look at all of those things that come out of our AV platform.

But there might be certain virus families that you always want alerts on because it's always something high-fidelity that you want to take a look at.

And we did in my previous environment.

So we deliberately went into our AV system, set up alerts around those particular virus families, and any time that we saw a hit on one of those, we automatically went and took a look at the machine.

Because inevitably, there were other artifacts on the machine that may have been missed and things that we had to do internally to respond to that.

So that's just an example.

But I think every environment has this -- as the SOC goes about its business, and as the IR team responds to things, there's stuff that they know is bad when they see it.

They know that's bad.

And that's the stuff that you want to key in on here to automatically create certain types of tickets.

You know the information that they're going to need.

Let's automatically feed in that information into the ticket as much as we can or via other tools, automatically block things if we know this is something that we're always going to want to block.

Just automatically do that.

Maybe note it somewhere.

Maybe open and close a ticket automatically in the background so you can have it for tracking and metrics.

Maybe it's something like when this happens, we need to go scour Tanium or Carbon Black for additional signs of infection elsewhere in the environment.

And if there's a way for us to automatically do that instead of waiting for a human being to open a ticket, look at it, copy and paste something into a Carbon Black query or a Tanium query to actually make that happen, that's the kind of efficiency that we're talking about here.
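An automated sweep like that might look like the sketch below. To be clear, the endpoint, token, and response shape are assumptions for illustration -- this is not the real Carbon Black or Tanium API:

```python
# Hypothetical EDR sweep: when a high-fidelity alert fires, automatically
# query the EDR for the same file hash elsewhere in the environment and
# pre-fill a ticket, instead of waiting for a human to copy/paste a query.

import json
import urllib.request

def sweep_edr(file_hash, base_url="https://edr.example.local", token="..."):
    """Return hosts where the hash was seen, per our imagined EDR REST API."""
    req = urllib.request.Request(
        f"{base_url}/api/v1/search?hash={file_hash}",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("hosts", [])

def on_alert(alert, query=sweep_edr):
    """Open a ticket already populated with the sweep results."""
    hosts = query(alert["sha256"])
    return {"title": f"IOC sweep: {alert['sha256'][:12]}", "affected_hosts": hosts}
```

The `query` parameter is injectable so the sweep logic can be tested or swapped for whichever EDR you actually run.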

And then same thing, since we're talking about intelligence-- if part of our intelligence process is to take some of this information and go and be able to research it, find out more information about the malware, about the phishing attack that came in this morning, and feed that back into the IR so they know what we're dealing with-- those kind of things.

If there's a way for us to pre-populate or pre-present information to the intelligence team as a result of that information, that makes the intelligence team much more efficient as well.

So early on, I talked about goals.

So we've got all these tools, and we have all these potential outcomes.

We have the contexts.

We have the enrichment and things like that and then the actions thereafter.

So what are we trying to do here?

There are about half a dozen key components to that.

One is not being hectic -- you want to be somewhat process-driven.

So driving efficiency is going to be an important part of that.

And so if we can streamline that via automation-- or again, however you do that, whether it's an API or using one of the tools that Travis just spoke of-- that's one way of doing it.

Business aware-- so business drives everything.

So a lot of times, we IT folks, us engineers and so forth, get caught up in our day jobs and don't think about the overall business impact.

And that's hard sometimes because we're so focused on what we do down here in the engineering section.

So that's something that needs a broader focus -- looking at it from a business perspective.

What is the impact of doing this?

And it's as simple as being a sysadmin and spinning up 10 quick VMs.

What did I just do?

All right, so I added a certain amount of load on the network.

I just opened up something, potentially -- most likely, those VMs aren't patched immediately as I build them.

I don't know when the last snapshot was. So as I'm doing that, I'm doing a lot of different things that could potentially have a business impact.

So oftentimes, as you know, we're reactive in nature.

That's just the nature of the beast, and we try not to do that.

So one of the things you want to try and do is get in front of that power curve, paying attention to new releases and what makes sense for your organization.

Obviously, I'm a big proponent for change control.

So if somebody comes up with a new release of software, I'm not the first one to run out and look at it.

I'm going to wait a couple of days at least and let somebody else do that and let them feel the pain if it's not working correctly before I go out there and upgrade it.

And you see that across the board, whether it's Apple, whether it's Microsoft, whether it's Cisco-- it doesn't matter.

Everybody's got that problem.

Risk management-- I can't say enough about risk management.

So if you're compliance-driven -- and I just met Margaret right before this talk -- compliance is a huge piece of our society today.

GDPR-- how many people have got GDPR requirements now?

So whether it's a simple training, awareness, or whatever the case may be, that's coming.

It's going to affect everybody, especially with borderless networks.

How do we address that?

The cloud-- that's a perfect example of that.

And that ties right into cross-domain.

Granted, what I was just talking about was more global but within your organization-- understanding the different business units within your organization and the impact thereof.

So when we're talking about whether it's a ticket or whatever-- again, I talked about you've got that one guy who is the firewall guy, the IDS guy, the SIEM guy-- whatever, all these things.

When you're not, you have to think about the impact of what you are doing relative to the sysadmin team, the network team, the security team, and then, again, of course, the business aspects of it as well.

So make sure you're not staying siloed within just your area.

You've got to think about what everybody's doing.

User centric-- UX, User Experience.

That's, how many clicks do I need to do to perform this task?

The demo in the keynote the other day -- two clicks, you're done.

That's what people want.

I don't want to sit there and click 15 times to get to the same thing when I can click one button, make it just get there, and be done.

That's what everybody wants.

It's like sitting in traffic, right?

You want to snap your fingers and be at work.

And if you're in one of the metro areas like this or New York or LA or whatever the case may be, you fully understand that.

But user experience is going to be a big piece of it.

How can you streamline that, whether it's a ticket or however we have to proceed through a process?

And measurable, right?

You have to be able to measure or quantify what you're doing.

And so I was just having-- again-- having a conversation prior to this, and I mentioned a book that I really enjoyed, The CIO Paradox.

And if you go back about 20 years ago, CIOs were fairly new to the C-suite.

And everything they did was a sunk cost.

And they had to justify to their CFO or CEO why they wanted to go spend this million dollars or that $10 million or whatever the case may be on hardware, software, resources -- whatever the case may be.

How am I getting value out of that as a [INAUDIBLE]?

Why do I have to write that check?

So CIOs were able to overcome that by showing value by streamlining processes, whether it's simple as email or some of the applications to be able to do business with.

So now the CISOs are in that same predicament.

How do I show value with all these security products that I'm buying?

Well, there's a couple of different ways.

If we have some sort of malware outbreak and our systems are down or we're compromised-- if our systems are down, how many millions of dollars a minute am I losing?

Think about it from that perspective.

If I'm unable to do transactions online with my customers, I'm going to have unhappy customers.

They're not going to come back.

How many more millions of dollars am I going to lose there?

So again, trying to measure those things.

A security breach -- who wants to be on the front page of The New York Times, saying 10,000 user accounts or Social Security numbers were released?

We see it every week.

It happens all the time.

So being able to measure and quantify what you are doing with the tools you're asking for is important.

Those are just a few of the goals we want to be able to hit.

Yeah, I think the measure is before and after.

Like, this is how we did things before.

It was pretty manual.

We could only work through so many tickets a day or whatever the case is, whether you're talking about an intel process or an operational process or a response process.

And then being able to show later, after you were able to maybe apply some automation to what they were doing, how you were able to impact that, and they're able to work through more tickets.

They're able to be more-- you know, here's the accuracy.

There was this many mistakes last month as a result of copy/paste issues or data transfer issues that human beings had to do versus now, we've eliminated most of that this month because things are automatically passing that data between each other.

Being able to show those things is definitely valuable to justify the actual effort that goes into building these processes.

But it's also a good kind of a yardstick to be like, wow, we spent 150 hours building this automation and making sure it worked and troubleshooting it and everything, and we only saved like 10 hours a month.

So I'm not sure that the value is necessarily there unless there's some other elements to that that make sense.

So there's some challenges, though, in all this.

And these are challenges that exist in environments regardless, but they do apply in a context of when you're trying to automate things.

If you're talking about having to actually write some scripts to take advantage of APIs and connect two disparate systems together in a way to apply some automation, if you don't have the resources around that can write scripts in a good way and not break things, that could be an issue.

The maturity of your SOC is another thing.

Being able to collaborate with them to understand what they're dealing with and try to get information out of them and stuff like that, the tools that they're using-- are they using Excel spreadsheets?

Or are they using ticketing systems?

Or are they just using emails, for instance?

These things all matter when you start talking about trying to apply these types of things to it.

Manpower-- maybe you have the skills, you have the resources, all that stuff.

You don't have the time.

You guys are already buried with too many other things.

And the amount of time that it would take to actually go back and try to apply some automation-- you just don't have the resource for that.

Budget-- budget can be an issue.

If you have to go buy a tool or pay for some licensing, like in the example of, well, we could totally apply this automation here, but we have to pay for the API for this tool, which we don't already have licensed, and that's going to have a $50,000 cost for us-- those type of things can come in.

And then depending on what you're doing, if you're talking about you're taking a high-fidelity alert from something and you're going to apply some automation to that because you're very confident in that alert being something that's actionable, that's obviously a requirement for something that you want to apply automation to.

And if it's riddled with false positives, obviously, that's not a good source.

That's not a good place to start.

So if there's a way for you to go through the process of reducing the false positives first and getting it to a place of being really high-fidelity, then that may be your first step.

So we want to walk through an example.

What does this look like?

What are you talking about?

So imagine you have-- talking about incident response, talking about the SOC, or even just talking about how your intel process works-- it could be any of that.

But imagine there's an average number of steps that have to be taken, like, how many clicks, how many things, how long does this take?

And let's say that you're able to take eight steps out of that process, per the little eight there on the slide.

And let's say that by taking those eight steps out, we're able to save just two minutes per incident or per ticket or whatever that we're talking about.

Or two minutes per step, let's say, so that's a total of 16 minutes per incident.

Maybe it took them 30 minutes per ticket before, but we're able to shave it down to 14 minutes per ticket as a result of these efficiencies that we added.

And let's say that as a result, our team processes 100 tickets a day. Then you do some simple math, and if you know that it's 16 minutes saved per ticket and you have 100 tickets, that's about 27 hours that you've saved, and therefore $1,300, we'll say, in cost savings from those man hours each day.

And if you apply that monthly, that's $26,000 a month.

Now, that starts to be some real money. You can go back to management and say, yeah, we had to pay $50,000 for the API.

But we're also saving $26,000 a month.

So after two months, we've already got our return on investment from making those changes.

And $312,000 a year-- that's real dollars.

That's something you can almost take and go ask for that really cool tool you wanted but didn't have the budget for, because now you were able to save the money to buy it.
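The slide's math can be replayed in a few lines. The step counts, ticket volume, and dollar totals are the talk's own hypothetical figures; the roughly $49-an-hour loaded labor rate is an assumption chosen so the totals come out to the same $1,300 a day and $312,000 a year.

```python
# Replaying the hypothetical savings math from the slide. The labor rate
# is an assumed loaded cost per analyst hour; everything else is the
# example's own numbers.
STEPS_REMOVED = 8
MINUTES_SAVED_PER_STEP = 2
TICKETS_PER_DAY = 100
WORK_DAYS_PER_MONTH = 20
HOURLY_RATE = 48.75  # assumption: loaded cost per analyst hour

minutes_saved_per_ticket = STEPS_REMOVED * MINUTES_SAVED_PER_STEP      # 16 min
hours_saved_per_day = minutes_saved_per_ticket * TICKETS_PER_DAY / 60  # ~26.7 h
daily_savings = hours_saved_per_day * HOURLY_RATE                      # ~$1,300
monthly_savings = daily_savings * WORK_DAYS_PER_MONTH                  # ~$26,000
annual_savings = monthly_savings * 12                                  # ~$312,000
print(f"${annual_savings:,.0f} per year")
```

Swap in your own ticket volume and labor rate; the shape of the argument to management stays the same.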

So kind of like when I said earlier-- in the keynote, one of our customers said something about how getting Anomali ThreatStream saved them two FTEs. This is a hypothetical.

These are hypothetical numbers.

This is not from the customer.

But realize that.

Realize that value.

$312,000-- what that could potentially be depends on what market you're in.

If you're in Oklahoma, maybe not, but if you're in New York, maybe something else, right?

But that's two FTEs easily, and it could be several FTEs if you're looking at tier 1 analysts.

But again, that automation is actually going to help you realize the value of the products.


And I don't think that that example is really outside of the realm of possibility in most environments either.

So just kind of wrapping up with some key points that we want-- make sure that you have some very specific goals.

Look at your processes that you're dealing with, and try to figure out, where do we really want to get with all this?

Context is one of the best things you can do.

If a human being has to make decisions, make sure they have all of the appropriate context that they need to make those decisions.

Determine, what sources do I need?

If this is the context that we really should be applying to this particular scenario, what are the sources of information that we need to go get?

And do we have the plumbing in place to be able to get all those things?

And how hard is it to be able to pull those things in?

And if we can't pull them in, how can we at least make it easier for the analysts to get access to them without having to have them go through a bunch of hoops?

If you can apply enough logic to what you're doing-- maybe certain tickets grab certain things, and certain other tickets grab other things.

It just kind of makes sense.
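That ticket-type logic can be as small as a lookup table. The ticket types and source names below are hypothetical, just to show the shape of it.

```python
# Hypothetical routing table: which enrichment sources to auto-attach for
# which kind of ticket. Types and source names are illustrative only.
ENRICHMENT_SOURCES = {
    "phishing":   ["whois", "url_reputation", "attachment_sandbox"],
    "malware":    ["file_hash_lookup", "sandbox_report"],
    "bruteforce": ["geoip", "asn_lookup"],
}

def sources_for(ticket_type: str) -> list:
    """Return the sources to pull for this ticket; unknown types get none."""
    return ENRICHMENT_SOURCES.get(ticket_type, [])
```

The point is that the analyst opens the ticket with the right context already attached, instead of going to fetch it by hand.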

There's documentation.

All of the tools that are out there that have APIs, SDKs available-- there's usually some kind of guide.

It may be online.

It may be a PDF.

But there's usually something associated that you can download that documents how to utilize that API.

This is going to be the key to doing this stuff.

And to be fair, even if you don't understand Python, a lot of times, the API guides are written so well that you can read through it and go, well, that's pretty easy.

They even have examples.

And some of them online that I've dealt with, you actually-- it has dropdowns for what you want to do.

And it sort of writes the code for you on what it is that you need to do.

And you just drop your API key in there, and you're done already.
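That "drop your API key in" pattern usually amounts to a few lines like the sketch below. The base URL, path, and auth header are hypothetical placeholders; your vendor's API guide has the real ones.

```python
# Minimal sketch of calling a vendor REST API with a key. The URL and
# auth header are placeholders -- your vendor's API guide has the real ones.
import urllib.request

API_KEY = "YOUR-API-KEY-HERE"                    # never hardcode in production
BASE_URL = "https://api.example-vendor.com/v1"   # hypothetical endpoint

def build_request(path: str) -> urllib.request.Request:
    """Attach the API key as a bearer token, as many guides show."""
    return urllib.request.Request(
        BASE_URL + path,
        headers={
            "Authorization": "Bearer " + API_KEY,
            "Accept": "application/json",
        },
    )

req = build_request("/indicators?limit=10")
# response = urllib.request.urlopen(req)  # uncomment with real credentials
```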

So this isn't rocket science.

Don't think this is too much of a stretch.

Maybe you're thinking, "I don't know anything about Python or anything about programming."

This is not really that hard to pull off a lot of times.

If you do have to go down the road of scripting, try to keep that stuff simple.

Try not to make it too complicated.

Make sure that it's efficient, and be sure to secure it.

Think about, how could this be used for evil?

If I'm going to do this, could somebody take advantage of this and actually cause us pain later as a result of this automation that we're trying to do?

Definitely make sure to test.

Tweak it as necessary.

Sometimes APIs change and stuff breaks.

Be ready for that, and be able to react to it pretty quickly and be like, oh, they changed it.

Now it's a different endpoint.

We've got to change the script just to point to this instead-- boom, it works again.
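One cheap way to be ready for that kind of breakage is to keep every endpoint in a single lookup, so an API change really is a one-line edit instead of a hunt through the script. The URLs here are hypothetical.

```python
# Keep endpoints in one place so a vendor's API change is a one-line fix.
# URLs are hypothetical.
ENDPOINTS = {
    "indicators": "https://api.example-vendor.com/v2/indicators",  # was /v1
    "reports":    "https://api.example-vendor.com/v2/reports",
}

def endpoint(name: str) -> str:
    """Look up a named endpoint; fail loudly if the name is unknown."""
    if name not in ENDPOINTS:
        raise KeyError(f"unknown endpoint '{name}' -- did the API change?")
    return ENDPOINTS[name]
```

Failing loudly on an unknown name is deliberate: a clear error beats a script that silently posts to a dead URL.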

We can join forces on this.

A couple points on this and then I'll turn it over to John.

But also, before you even start, if you know that you're trying to do this with this tool, go look on GitHub.

You'd be surprised how much stuff's already out there.

Somebody may have already written a script or written something that does exactly what you're trying to do.

The other thing is talk to your vendors.

And just be like, hey, do you guys have an integration with so-and-so?

They might be like, oh, yeah, we totally integrate with so-and-so.

We can get you whatever you need.

Or here, go here in the UI and do this and click this and drop your API key in here, and it should come alive for you.

Definitely take advantage of integrations where you can.

And even if they don't have a published integration, sometimes it's worth asking anyway.

Because they may be like, oh, yeah, well, we don't have an official integration with them.

But we do have some scripts that will connect those things together.

What are you trying to do?

Oh, yeah, we can do that.

We already have some scripts that'll help you with that.

Definitely take advantage of that stuff.

Measure, measure, measure, right?

That's it.

That's what it really boils down to, whether you're that engineer or you're the CCO writing the check.

So just, what happened?

What impact was made, positively or negatively?

Paying attention to that is going to be critical.

And what's going to make it better?

I talked about coming back and doing that summarization and figuring out lessons learned.

What did we do well?

What didn't we do so well, and how do we learn from that?

That's going to be critical.

And then lastly, look to peers for advice.

Defense sharing is king, right?

Collaborate, whether it's as simple as following some threads on LinkedIn, which is open-- everybody can join for free.

Or you subscribe and pay for it.

[INAUDIBLE] is another one.

Forums like this, this convention, Detect '18-- and next year, Detect '19 will be another one.

Trusted circles within our platform-- I just love that idea.

And when I first came onto Anomali six months ago, I was like, where was this five years ago?

I needed this.

When I was working in a SOC environment, I needed it.

And I wish I had it then.

But now, we have it.

And I think it's great.

Collaboration is it.

You're never going to figure everything out on your own.

You can't.

You can't keep up.

Nobody can.

All right, so that's pretty much what we have.

Are there any questions about any of this?

Hopefully it was pretty straightforward.

That was the goal, anyway.

I have a question about ramp-up times.

What do you see in terms of business cases?

And I know it's very environment-specific.

But [INAUDIBLE] to pitch this, the first question you're going to be asked is cost, right, and often time to do this.

What are you seeing right now?

So as far as seeing, I have my own experience from where I came from as sort of the only context to be able to answer that.

And I think it really boils down to-- you already hit on the main point, which is that it's very environment-specific.

Everybody's got their own stack of stuff that they're dealing with and different processes and things internally.

But I think the heart of your question is trying to figure out what it is that-- how far do you want to go?

Like, you can sort of adjust based on, well, I know I've got maybe a little bit of budget.

Maybe I have a little bit of extra time with this one resource who's capable of doing these things.

Maybe we just start working a little bit each week towards getting something.

Let's first start examining our processes and figure out where maybe some low-hanging fruit is that we can apply some automation to or try some things with and just kind of work through that.

And then obviously, if you're actually having an impact and you're starting to see, like, wow, we are actually being able to do this, you might be able to go back to management and say, hey, we have all these other ideas we want to do, but it's going to require a bunch of extra resources.

But here's the success we've already had with what we've done.

We'd like to just pour some gas on this and make it go a little faster if you don't mind getting us another FTE or paying for this tool or whatever to help make us get through this list a little faster or even just to bring in a contractor to help us get all this done.

But then you come in armed with hopefully some good success that led to that.

Thank you.

Any other questions?

OK, well, we're here.

So feel free to come ask us questions or catch us later on in the conference, but thank you very much.

About Detect LIVE

We believe that threat intelligence holds the promise of allowing organizations to better manage risk and develop resilience. Detect LIVE, brought to you by Anomali, is a virtual event series that provides a platform for security executives, practitioners, and researchers to share insights and experiences related to threat visibility, detection, and response.