I’m a SOC Analyst, Threat Intelligence is Just Used to Feed My SIEM, Right?: Detect ‘19 Series

After you have watched this Webinar, please feel free to contact us with any questions you may have at general@anomali.com.
Transcript
MARK MAGILL: All right, so thank you for attending.
If you are not looking for this, I will not be disappointed if you leave.
So I am Mark Magill.
And this is the I'm a SOC analyst.
Threat intelligence is just used to feed my SIEM, right?
Or SIEM.
Sorry, I'm European.
Cool title.
But what are we actually here for?
So we're going to talk about threat intelligence and the SOC.
We're going to talk about threat intelligence management reporting.
And then we'll touch on information sharing.
But as this is the last session of the day, this is going to be dependent on all of you.
This is interactive.
Let's have some fun with this.
If you don't want fun or interaction, again, feel free to leave.
But yeah, here we go.
So who is this tired looking man in front of you?
Well, I am a customer success manager with Anomali.
I joined Anomali last year.
And my background is security operations and security consulting.
Just a show of hands, do you think I'm from the UK or Ireland?
Where do you place my accent?
Or do I have an accent?
OK, silence is good, too.
Anyway.
So I'm from Northern Ireland, just so everyone's aware.
My background, security operations, security consulting.
I hold CISSP and GSEC certifications.
As you can see, I have worked for NYSE, which became Euronext after they took them over.
I then worked in security consulting, at Accenture and EY.
And now I'm at Anomali.
So it's a little bit of a different challenge, but it's all based around the security consulting aspect of things.
So sorry to do this to you all, but let's talk about you.
Why are you here?
What backgrounds do you come from?
Are you security operations people?
Are you threat intelligence people?
Are you management people?
What do we have in the room?
So to make it easy, security operations people, hand up.
OK, great.
Threat intelligence?
About the same.
And management?
OK, great.
So we've got a nice mix.
So this will be fun.
Excellent.
So just to-- that looks a bit weird.
Just to paint the picture of the intel lifecycle, so you'll see this changes through vendors, through publications.
But this is generally a pretty good way to describe the lifecycle.
So I just want to reference, this is a FireEye diagram.
And obviously, the first stage of any threat intelligence program is your planning and requirements.
So it's very important to actually scope what you're looking for, what adversaries you're going to be interested in, and as well what assets you want to protect and all of that good stuff.
So once we've established that, we'll move into the collection and processing stage.
So that is actually, what do you do with the information that you've collected?
So you've got all of these feeds from your internal-- Excuse me.
You've got internal devices.
You've got your threat feeds.
You're scraping from websites.
But how do we actually do something with that information?
And what does it actually mean?
So then once we've done that, we'll move into our analysis, which is actually interpreting the processed data and if there are any gaps against our requirements.
And then moving into production, which is essentially making sure that your intel is timely, relevant, actionable, and that you actually have a trace back to the requirements to satisfy them.
Because a lot of the time, what I find is that your requirements can start off somewhere and deviate along the way, and you don't actually realize until you look back and find that they have deviated.
So do we need to go back around the loop, or can we carry on to dissemination and feedback? That stage is about producing the finished intelligence, ensuring that it matches stakeholder requirements, gaining their feedback, and positioning it from there.
So as well just worth noting that this is not a one and done system.
You'll constantly go through this loop.
Feedback is essential to any program.
Because without feedback, you don't actually know, are your stakeholders happy with what you've got?
And if you don't listen to them as you carry on around the loop, then you'll lose their buy-in.
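The lifecycle loop described above can be sketched as a simple repeated pass through the stages. This is purely illustrative: the stage names follow the talk, and the placeholder handlers stand in for whatever your program actually does at each stage.

```python
# The intelligence lifecycle as a loop: each pass runs every stage in order,
# and the feedback from one pass shapes the requirements of the next.
LIFECYCLE = ["planning_and_requirements", "collection_and_processing",
             "analysis", "production", "dissemination_and_feedback"]

def run_cycle(handlers: dict, state: dict) -> dict:
    """Run one full pass through the lifecycle; output feeds the next pass."""
    for stage in LIFECYCLE:
        state = handlers[stage](state)
    return state

# Trivial placeholder handlers that just record which stages they passed through.
handlers = {s: (lambda st, s=s: {**st, "trail": st.get("trail", []) + [s]})
            for s in LIFECYCLE}

state = run_cycle(handlers, {"requirements": ["which actors target finance?"]})
print(state["trail"][-1])  # dissemination_and_feedback
```

The point of the loop shape is the one made above: this is not one-and-done, and the dissemination-and-feedback output of one pass is the planning input of the next.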
And then, just to carry on with a quick overview of this, the intelligence types.
Again, this changes over vendors, who's publicizing it, et cetera.
But generally, I look at it as strategic, which is your big picture.
That is the executive level, your CISOs, and your direction.
So it will drive the decisions.
Instead of reporting on operational and technical aspects, you want to be focusing on high level objectives from your organization.
Then we'll move into operational, which is your day-to-day operations, focused on your TTPs and the who, what, where, when, why, and descriptive elements of it.
And then moving into technical, so focusing on IOCs.
What a lot of people in the industry see as threat intelligence is the technical aspect, so your hashes, your bad IPs, your hostnames that are malicious, et cetera.
And so, the question for the room is, where do you see the SOC fitting in with this?
Where would you see a SOC being involved?
Operational?
Any other?
Yep.
Absolutely.
Using them in the SIEM.
So really, my view on this is the SOC can be involved at any aspect of this.
So they can be involved with consumption, the generation, the reporting.
Strategic is obviously high level.
So you're not going to be talking about IOCs.
But it's the research that they do in the back end and your threat intel team do that can all contribute to that.
So moving on to what I have seen as a SOC analyst and where I've gone and where I've been, et cetera.
So there is this misconception that an IOC is threat intelligence.
That's just not the case.
Without context, an IOC is just data.
An IOC on its own can be great, but you need context around it for it to be useful to your organization.
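The difference between a bare IOC and contextualized intelligence can be sketched as a data structure. The field names here are illustrative assumptions, not any particular vendor's schema:

```python
from dataclasses import dataclass
from typing import Optional

# A bare IOC: just a value and a type. On its own, this is data, not intelligence.
@dataclass
class IOC:
    value: str
    ioc_type: str  # e.g. "ip", "domain", "md5"

# The same IOC with the context that makes it actionable for an organization.
@dataclass
class ContextualizedIOC:
    ioc: IOC
    source: str                           # where the indicator came from
    confidence: int                       # analyst-assigned confidence, 0-100
    threat_type: Optional[str] = None     # e.g. "c2", "phishing"
    associated_actor: Optional[str] = None
    targets_our_sector: bool = False      # is it relevant to us at all?

    def is_actionable(self, min_confidence: int = 70) -> bool:
        """Only high-confidence, relevant indicators should feed the SIEM."""
        return self.confidence >= min_confidence and self.targets_our_sector

raw = IOC("203.0.113.7", "ip")  # RFC 5737 documentation address
enriched = ContextualizedIOC(raw, source="ISAC feed", confidence=85,
                             threat_type="c2", targets_our_sector=True)
print(enriched.is_actionable())  # True
```

The `is_actionable` gate is the practical payoff: without the context fields, there is nothing to filter on, and every indicator hammers the SIEM equally.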
One of the things I find a lot is that SOC and threat intelligence teams are disjointed.
So this is a big talking point that I want to bring up as well.
And if there are segregated functions, generally there are siloed environments.
So the SOC won't relay information to the threat intel team, and the threat intel team won't relay it to the SOC.
The SOC will just want the SIEM integration working and the IDS/IPS integrations working.
They don't actually care about the intelligence itself.
They just want the operational side of things working.
So that's the question for the room.
Is this something that you guys see?
Or is it something that I've only seen?
Yeah?
Perfect.
All right, so as well, I mean, intelligence generally means more overhead for the SOC.
They have to deal with more IOCs.
They have to deal with more reporting.
And they have to deal with more people.
One of the things that I try to do is educate people that this is not just overhead.
Obviously, there is the initial overhead, the tuning.
But you will have that with all elements of security.
So if intelligence just means more overhead for the SOC, which is seen as a negative thing, then you won't want to have threat intelligence coming into your environment.
You don't want your SIEM getting hammered or your AVs getting hammered or your IDS, IPS getting hammered.
So that's one of the things that I see as well.
As well, as a SOC, we have a tendency to be reactive.
As a SOC, you're looking at your flashing lights.
You're looking at your alerts.
But we're not actually working on the why.
Why are we being attacked?
Who is attacking us?
What are our key positions to be attacked from?
And so, that's the negative.
Now we'll move into, how do we improve this?
So education is the big thing.
Excuse me.
I find that with segregated security programs, it generally means that something will get missed, or you will not have proper integrations, or you won't have good communication between teams.
So educating the SOC as to, why do they matter?
And if you see a pattern or you see certain malware being used or you see certain geolocations, why you should look into them.
Because really, the SOC is the front line.
And if the front line is just looking at alerts, false positive, true negative, then you won't have that proactive response.
Another key aspect is empowering the analyst.
As a SOC analyst in the past myself, I know that it sucks.
You generally do spend your time reporting or alerting, or just being hit from all angles by people saying, what's going on with this?
Why have you not done this?
What are the alerts?
What are we looking for?
And at the end of the day, you still have alerts the next day and the next day and the next day.
So I know that it can suck.
And empowering the analyst essentially means to educate them and work with them to understand all of this.
So I find that's a pretty important thing to do.
And it's good for burnout as well.
Because I burnt out as a SOC analyst, and I was sick of looking at alerts all day.
So things like that would have helped me not burn out.
And then we'll touch on the sharing and collaboration a little bit later.
But one thing I find is that sharing and collaboration is often viewed as a negative.
So sometimes it's we don't want to share because we don't want our competitors to know what intel we're seeing.
Or if we find something that was not known, maybe we want our competitors to get hit by it.
That is a real thing, and that does happen in the field, absolutely.
Management reporting is the next element that I want to touch on.
And a negative statement to start with.
But realistically, management don't care about operational metrics.
There's no point saying, we managed to consume one million observables.
Who cares?
That does not represent return on investment.
It doesn't show me what threat intelligence has been doing.
It is simply a technical metric.
So what I feel is that executive buy-in is essential to a threat intelligence program.
You need high level people to be backing up your programs, to be getting budget for threat intelligence, and all of that good stuff.
So if they don't care about operational metrics, what do they care about?
The big one that I find from threat intelligence is intelligence that led to a security incident not occurring.
So that could be as simple as: we got wind of an event that was going to happen, the TTPs they were going to use, the malware they were going to use, the timing, et cetera, just standard stuff.
So if that was going to target, say, your trading environment, you can work out, how long would we have been down for?
Would we have faced fines for that?
Would we have faced loss of earnings because trading couldn't be done?
And then you can quantify the metrics around that, which I think is a great way to show the value of threat intelligence.
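That back-of-the-envelope quantification can be sketched as a simple calculation. All of the figures below are hypothetical inputs you would estimate for your own environment:

```python
# A hedged sketch of quantifying the value of intelligence that prevented an
# incident: downtime avoided, fines avoided, recovery costs avoided.
def avoided_loss(downtime_hours: float,
                 revenue_per_hour: float,
                 regulatory_fines: float = 0.0,
                 recovery_cost: float = 0.0) -> float:
    """Estimated cost the organization would have faced had the attack landed."""
    return downtime_hours * revenue_per_hour + regulatory_fines + recovery_cost

# e.g. a trading environment down for 4 hours at $250k/hour, plus a $1M fine
loss = avoided_loss(downtime_hours=4, revenue_per_hour=250_000,
                    regulatory_fines=1_000_000)
print(f"Estimated avoided loss: ${loss:,.0f}")  # Estimated avoided loss: $2,000,000
```

A single dollar figure like this is the kind of metric an executive audience will actually engage with, unlike "we consumed one million observables."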
And as well, you've got finished intelligence for executive communications and threat advisories.
That's all useful stuff to show what your program is actually doing.
So this is another important question: information sharing.
What is the room's feeling on information sharing?
Do you do information sharing in your environments?
Are you averse to sharing?
Have you seen anything around sharing?
So the reason I ask is because, in the field, you see a lot of different views on it.
There is pro sharing.
There is no sharing.
And why should you share?
Well, essentially, sharing can enable you to find out about events before they happen.
So if there is an actor group or a piece of malware that's targeting your vertical, if somebody in your peer organization sees this, they can share it with you and say, we saw this, this time, this is what it's doing.
And so, then you can go on to your AVs and your SIEMs and look for these alerts and proactively block.
Because without that, you're blind until the event occurs.
So maybe it'll slip through your email gateways, or maybe it'll slip through your IDS, IPS because the signature is just not available at the moment.
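The value of shared indicators before a signature exists can be sketched as a small matching step: take IOCs received from a peer or ISAC, retro-hunt them against your own logs, and push them to a blocklist. The log format and field names here are assumptions for illustration:

```python
# Indicators received from a sharing community (values are RFC 5737 /
# example.com placeholders).
shared_iocs = {"evil.example.com", "198.51.100.23"}

# A couple of hypothetical proxy log records from our own environment.
proxy_logs = [
    {"src": "10.0.0.5", "dest": "evil.example.com"},
    {"src": "10.0.0.9", "dest": "intranet.local"},
]

# Retro-hunt: has anyone in our environment already talked to a shared indicator?
hits = [rec for rec in proxy_logs if rec["dest"] in shared_iocs]

# Proactive step: block every shared indicator regardless, so the control
# exists before the attack reaches us and before a vendor signature ships.
blocklist = sorted(shared_iocs)

print(f"{len(hits)} historical hit(s); blocking {len(blocklist)} indicators")
```

In practice this matching would run inside your SIEM or TIP rather than a script, but the logic is the same: shared intelligence lets you look and block before the event, instead of being blind until it occurs.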
And so the next point is, why don't you share?
And I think I touched on it a little bit before.
But really, the reason for not sharing is that there is an element of, I don't want to implicate myself.
So I don't want to say, OK, we saw this in our environment.
It led to this.
And yeah, I mean, we don't want you to know that.
There's also an element of, we don't know how to share.
Obviously, I work for Anomali.
We're a TIP, a threat intelligence platform.
But if you don't have a method to share, or you have an insecure way to share, you're going to be reluctant to actually do it.
Yeah, so I was actually part of a WhatsApp group, a Telegram group, sorry, which was for information sharing in a specific geolocation, and which was full of activity.
And as soon as management level got into the group, it led to the shutdown of the group, because people don't want their managers to see what they're sharing, what they're talking about, and all of those things.
And I get that, because you don't want your manager saying, why did you share that? Say it was sensitive.
But generally, that's not what we shared.
So I get the hesitation.
And so, for anyone that's on the fence of sharing, you can definitely ease into sharing.
And you can start off anonymous, which Anomali can do pretty well, and start from there.
Build up trust.
Build up your memberships.
And just understand that not everyone in the sharing community is out to get you because that's not true.
And just to build on the example I was talking about, my colleague Mark Green produced this blog post.
And essentially, it's just detailing out the steps of an analyst who was investigating something that was on the email gateway.
They created a threat bulletin on the platform, and then their ISAC group consumed it.
A member, step four on the diagram, actually noticed that it had slipped through their perimeter.
And the incident response process was activated.
So then step five is around the signatures being updated on the endpoint solutions worldwide, leading to protection for everyone, which, as an information security professional, is your job: to protect the common good.
So a perfect example of that.
Thank you all for coming.