Automating Open Source Intel (OSI): Detect ‘19 Series

After you have watched this Webinar, please feel free to contact us with any questions you may have at general@anomali.com.
Transcript
SCOTT POLEY: Hello, everyone.
We're here to talk about the way we use automation for open source intelligence collection.
Basically, the theme of this is why make the free stuff hard?
So that's what we'll walk through. We'll kick it off with some introductions, starting with Chris over here.
CHRIS COLLINS: Yes, I'm Chris Collins.
I'm a SOC analyst for FirstEnergy.
I've been with them about four years.
I mainly focus on threat intelligence using Anomali, and I also work as an automation engineer with Demisto.
SCOTT POLEY: I'm Scott Poley.
I'm the supervisor of our SOC at FirstEnergy.
We basically do security stuff, so there's a gamut of things listed under there.
But basically, I'm in charge of really talented people who help us achieve our goals in those different areas.
I do have a military background.
I put my unit insignia up there; it ties into some of the intelligence space a little bit.
RANDY CORELLI: I'm Randy Corelli.
I work in customer success for Anomali, FirstEnergy being one of the customers I work with.
I do a lot of automation, known in our CSO group as the API guy.
So I help these guys to fine-tune, tweak, and take things up to the next level.
SCOTT POLEY: All right.
So the big thing, why open source intelligence, right?
So one of the things, if you've ever had a clearance or worked in the government space: you know that a lot of the classified intelligence really is just about the relationships among things that are already publicly available.
Some of those relationships and other key associations are actually what make up secret or even top secret data.
So that's why it's so significant.
If you can get a good grasp on all the open source intelligence-- if you collect enough of it-- you can start to build some of those things yourself and understand possible attribution and other aspects of threats.
So the goal is obviously we all want to be really good blue team guys and solve those problems.
But how do we get to that next level, right?
How do we actually use intelligence to help us solve those problems?
CHRIS COLLINS: So what open source intelligence do we want to tackle?
A few of the things we decided were important to us are threat blogs, whether they're from Palo Alto Networks or FireEye or Malware Traffic Analysis.
We'll get vulnerability feeds directly from US-CERT.
We'll get news sources like Bleeping Computer and Wired, we'll get exploit data from Exploit Database, and we'll capture real incident data from Have I Been Pwned.
And we also capture social media IOCs from security researchers on Twitter.
SCOTT POLEY: So that covers what open source is; I'm sure everyone in this room pretty much understands that.
But I want to walk through some use cases for how we've found this valuable, based on all the things we've implemented.
So one of the things about a lot of threat intel that I've run into, at least throughout the years I've done security: with most of the intel, you get the information after the fact and have to go backwards, using it more like a police blotter.
So it's not as actionable as far as moving forward.
It's always going back.
This is an instance from when we actually first started collecting intel from Twitter.
We were looking at hashtags around certain malware families.
In this case, it was Hancitor.
And we're able to start collecting this into Anomali.
We use Anomali Match, as it's branded today, to be able to detect when we're seeing things in real time.
So I can actually understand the context, too.
Because when we target what we're actually collecting, we know what it's associated with, and from that point can move forward.
So in this scenario, when we first started going down this path, just exploring it, we got a tweet about a sender associated with Hancitor.
And it was actually 20 minutes before we were actually hit by it.
So for the first time, we were able to say we'd collected this intel and, in real time, seen its effects: now we know what we saw.
We know why we saw it.
We had the information beforehand.
We're not going backwards.
And we're able to take the right response at that given point in time.
So there's a lot of value in doing it in this manner, which was different from our approach beforehand, where we just got the intel after the fact.
Another thing we use is threat or IOC timelining.
And then we use this with smart tagging.
So as we bring in all this open source stuff, we try to tag it with intelligent terms and things we can associate with.
So an example of walking through this, this was a piece of Intel in there that was dated back in 2017.
And the only tag it had was watch list.
Right?
Really informative.
Now I really know what I'm looking at when I see this actually in my system.
But if you start timelining all the different things we've pulled in from reporting, that one piece of intel is now associated with exploitations.
Now I get a better understanding of what's actually going on there.
As we work across this timeline, we know it's associated with the [INAUDIBLE] Trojan.
And now it's really a malware IP.
So now we're moving from just a watch list and exploitation to specific malware families.
Now, as we start pulling information across, it's associated with Emotet and phishing.
So now we have a measure of how it's actually getting into the environment-- how we're expecting to see this piece of information.
And then by the end of it, we're able to see that it's actually a dropper, a named payload for Emotet.
It's actually associated with how it's getting there in the first stage of everything.
So if you look at what we were doing with the open source collection here, we did get some government watch-list-based things.
And then with the open source collection through RSS feeds and then Twitter, we enrich the data over time and see how that information changes.
And the government, with all the red tape, might be a little slower, but we already have a good idea before we even get the final report.
So that was one of the big use cases where we were able to understand what's going on, just from capturing that data over time.
The third use case, which really shows the value of the things we're collecting: it isn't just about getting intelligence to match against behavior in your environment.
It's about how you can then operationalize that data.
So one of the things we do when we do bring in reporting is we'll create a threat bulletin in Anomali.
So we get all the context around every single indicator that we have.
So for that RSS item from Trustwave, now we have a bulletin that looks just like the page we got it from.
All the indicators are built in Anomali and associated, and we can build new associations.
But we have the context here too, so if we're really interested in this type of report, we can also build detections for things in our own environment, along with other types of correlation analytics we want to do there.
And when you start pulling in more similar data that you can associate, other reporting scenarios might have different context-- important aspects you hadn't thought about when it comes to the threat information.
So this becomes a lot more powerful.
So we've developed a lot of new content just based off this process alone.
So walking through the workflow.
CHRIS COLLINS: We're going to get into the how. We know we're going to start off with an RSS feed and a Twitter search.
But before we even got there I had to figure out how this was all going to work.
Because I identified I want this.
I want this blog, but how am I going to get that data into Anomali effectively?
So the first approach I took: I went to a free web platform called IFTTT, which stands for If This, Then That.
It's an easy-to-use web application that lets you plug in a couple of parameters.
In this case, I just had to plug in an RSS feed URL.
And then that's the if.
So if a new RSS feed item matches, then it does an action.
And the action in this case is I'm going to send an email.
And similarly with the Twitter part of it: I identified a search from Twitter, whether it was a hashtag or a user, and that search is the if statement in the applet.
And then the action again was an email out.
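If you'd rather script that if-this-then-that logic yourself, here's a minimal Python sketch of the same "if new RSS item, then email" behavior using the feedparser library. The feed URL, mail server, and addresses are placeholders, not the presenters' actual setup.

```python
# Minimal "if new RSS item, then email" loop, assuming placeholder
# feed/server details rather than the presenters' real configuration.
import time
import smtplib
from email.message import EmailMessage

import feedparser  # pip install feedparser

FEED_URL = "https://googleprojectzero.blogspot.com/feeds/posts/default"  # example
SMTP_HOST = "mail.example.com"        # your own mail server
FROM_ADDR = "osint@example.com"
TO_ADDR = "soar-intake@example.com"

seen_links = set()

def poll_once():
    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries:
        if entry.link in seen_links:
            continue  # the "if" fails: nothing new
        seen_links.add(entry.link)
        # the "then": email the new item to the SOAR intake mailbox
        msg = EmailMessage()
        msg["Subject"] = f"RSS: {entry.title}"  # RSS prefix for mailbox rules
        msg["From"] = FROM_ADDR
        msg["To"] = TO_ADDR
        msg.set_content(entry.link)
        with smtplib.SMTP(SMTP_HOST) as smtp:
            smtp.send_message(msg)

while True:
    poll_once()
    time.sleep(30 * 60)  # poll every 30 minutes
```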
We set up our own mail server on AWS to facilitate this.
And then we send it into our SOAR platform, which is Demisto.
From there we normalize that data and structure it so you're not getting the headers and footers with ads, or a bunch of junk data that you don't need.
We're also refanging some of the IOCs, because every blog, I swear, comes up with a new way to defang something.
So we always have to be on the lookout for what will or won't be ingested by Anomali correctly, and we're always looking to fine-tune that process.
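As an illustration of that refanging step, here's a minimal Python sketch covering a few common defanging styles. The substitution list is ours, not FirstEnergy's, and it has to be tuned as new styles show up.

```python
import re

# A few common defanging styles seen in blogs; every source differs,
# so this list is illustrative and has to be extended over time.
REFANG_RULES = [
    (re.compile(r"\[\.\]|\(\.\)|\{\.\}"), "."),  # 1.2.3[.]4 -> 1.2.3.4
    (re.compile(r"hxxps?://", re.IGNORECASE),
     lambda m: m.group(0).lower().replace("xx", "tt")),  # hxxp -> http
    (re.compile(r"\[at\]|\(at\)", re.IGNORECASE), "@"),
    (re.compile(r"\[:\]"), ":"),
]

def refang(text: str) -> str:
    """Normalize defanged IOCs so the TIP can ingest them correctly."""
    for pattern, repl in REFANG_RULES:
        text = pattern.sub(repl, text)
    return text

print(refang("hxxp://evil[.]example[.]com and 203.0[.]113.7"))
# -> http://evil.example.com and 203.0.113.7
```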
Initially, our first process was to set up mailboxes on Anomali.
We'd send the whole threat bulletin as the body of the email, with the title of the report in the subject line.
And then that would create a threat bulletin and an associated import session.
And that took a lot of manual interaction: going through the report, doing some extra cleanup, associating the import session back to the bulletin.
It was a lot of taxing work on our analysts.
And similarly, at the very initial step, I was doing the same thing with the Twitter feed using the email imports.
It became clear very quickly that it was way too tedious.
And too many import sessions were created for an analyst to sit there and review them all.
So I had to come up with a new solution to avoid everybody looking through 150 to 200 import sessions a day because that's taxing.
So the workaround I came up with was to create a Google Sheet, take the share URL from the Google Sheet, and put it into Anomali as a stream, which then read the data from the spreadsheet.
As new things were created, it would check every 30 minutes and import them that way.
But now we've evolved that process to be a little more streamlined.
And we've built in, with Randy's help, API functions to go directly from Demisto into Anomali.
And another piece I've also added: if you have an iOS device, there's an Apple app called Shortcuts.
And I've built a couple applets on there as well.
So as I come across things ad hoc-- say I'm at home, just trying to kill some time looking through Twitter-- and I see a new tweet on my phone, I can grab that tweet.
And I can just click a button and send it right into the workflow for us.
SCOTT POLEY: And that part's really big, because a lot of times our internal communication was texting each other this interesting stuff and not really operationalizing that data.
But we'd find interesting things that we knew were pertinent to what we did every day.
And now, with him adding this shortcut in the iPhone app, we basically know that it's going into our operational data on the fly whenever we see something ad hoc outside of our normal process.
CHRIS COLLINS: And currently, I'm at roughly 300-plus blogs that I follow and 120 different Twitter feeds. Starting out, I think we started with about five or ten feeds and a handful of Twitter handles, just to get our feet wet.
But as we've become successful with this, we've grown it out quite a bit.
And the amount of volume that we bring in now, there's no way an analyst could do this manually on their own.
At this volume, it's something that has to be done in an automated process.
So I'm going to break down why the API offers you a little more value than going the email or stream route.
So with the email you can only set the tags that are on the mailbox.
So you're very limited on a scale of what you can really put in there.
You can give a general sense, but you can't get specific.
So you can say this is malware, but maybe you can't say it's an email attack, because you could also have Hancitor or a RAT in there.
So you've got to be careful about what kinds of tags you put on those mailboxes, because they apply to every single indicator that goes in.
Whereas through the API, you can be specific on exactly what you want to import, exactly what kind of tags you want to create.
So for example, with the Twitter stuff, anything that's a hashtag becomes its own tag in the API import session.
So that gives you a more contextual feel about what the indicator is.
RANDY CORELLI: You guys have added things like the full tweet URL in there as well.
CHRIS COLLINS: Yup.
SCOTT POLEY: Yeah.
RANDY CORELLI: The author is now a tag in there.
So you can have some longer values that really do add some additional context and value to your items.
CHRIS COLLINS: And hopefully we can do some good metrics later on with the author. We can say: this author tweets about Emotet 94% of the time.
So that's one of the things we want to grow into down the line.
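To make the tagging difference concrete, here's a hedged Python sketch of an import call to the ThreatStream v1 import API with per-item tags. The endpoint and field names are based on the public v1 API and should be verified against current documentation; the credentials and values are placeholders.

```python
# Hedged sketch of a ThreatStream v1 import with per-tweet tags.
# Endpoint and field names are assumptions to verify against the docs.
import json
import requests

API_ROOT = "https://api.threatstream.com/api/v1"
AUTH = {"username": "your-user", "api_key": "your-key"}  # placeholders

def import_tweet_iocs(iocs: str, hashtags: list, author: str, tweet_url: str):
    # One tag per hashtag, plus the author and the full tweet URL,
    # so each indicator carries its own context (unlike mailbox-wide tags).
    tags = [{"name": h, "tlp": "white"} for h in hashtags]
    tags.append({"name": author, "tlp": "white"})
    tags.append({"name": tweet_url, "tlp": "white"})
    resp = requests.post(
        f"{API_ROOT}/intelligence/import/",
        params=AUTH,
        data={
            "datatext": iocs,          # raw, refanged indicator text
            "confidence": 50,
            "classification": "private",
            "tags": json.dumps(tags),
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # includes the import session details

import_tweet_iocs("http://evil.example.com", ["Emotet", "phishing"],
                  "@example_researcher", "https://twitter.com/example/status/1")
```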
And again, with the API there's less human interaction to go through and approve things.
Whereas with email, you have to look through everything and validate everything.
And when you're dealing with that volume, it gets a little bit taxing.
And with the API, you're going right into the database.
You don't have to wait for the mailbox polling to find the email. Maybe it's a really busy day, your email runs slow, and it takes a little bit of time to get into Anomali.
Whereas with the API, you have direct access.
And as soon as there's a web session available, you connect right away.
And here are some examples of the different types of tags I was talking about with the mailbox.
This one was on an RSS feed, so everything always got that same tag.
But if they were blogging about Emotet, or Hancitor, or NetWire, or whatever new malware strain was out there, I couldn't really capture that unless I went back and added it manually.
Whereas in this report example from Trend Micro, I know it's around PayPal.
It's a phishing incident.
And I've even got the title of the report that it links to.
So for getting this all set up, here are some good tips to keep in mind if you want to create your own process.
With the SOAR platform, you set up a monitored mailbox that you ingest your data from.
The best thing we found is to definitely set up your own email server that you have control over.
We found out quickly that the free email providers out there-- Gmail, Yahoo, Hotmail-- will see your emails as spam, not only because of the content but because of the volume you generate.
So obviously they'll flag it as a bot, because essentially it is a bot.
And it's the same thing with your own internal email: your own mail controls will probably stop you and flag you.
So here's how we set it up with Dovecot on AWS, pretty quick and easy once we got that going.
RSS feeds, the first thing you want to do is identify the RSS feed URL.
Most of the time, it's going to be advertised right on the site.
They'll have the RSS feed icon, or say "follow us," something of that nature.
But in some cases they don't have an RSS feed.
So in those cases, you can create your own with free sites called Feed43 and Feedity.
In some cases, you might also be able to find them in the HTML source code.
For example, just do a Ctrl+F and search for "rss" or "atom" or "xml."
And usually you'll find them if they do exist.
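Here's a minimal Python sketch that automates that Ctrl+F: it fetches a page and pulls any advertised RSS or Atom feed URLs out of its link tags. The example site is just an illustration.

```python
# Pull advertised RSS/Atom feed URLs from a page's <link> tags.
import requests
from bs4 import BeautifulSoup  # pip install beautifulsoup4

FEED_TYPES = ("application/rss+xml", "application/atom+xml")

def find_feeds(site_url: str) -> list:
    html = requests.get(site_url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    feeds = []
    for link in soup.find_all("link"):
        rels = link.get("rel") or []
        if "alternate" in rels and link.get("type") in FEED_TYPES and link.get("href"):
            feeds.append(link["href"])
    return feeds

print(find_feeds("https://googleprojectzero.blogspot.com/"))
```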
On Twitter, you'll want to identify what you want to follow and who you want to follow.
I suggest starting out by following some hashtags to get a sense of who's tweeting about these campaigns.
So I've got an example here of a couple of different hashtags that I was using or following with a program called Hootsuite.
That allows me to set different parameters and searches in channels.
And I can scan up and down to see who is tweeting about which subjects.
And from there, I can identify that I want to follow, say, Herbie Zimmerman based on his tweets.
I don't recommend following just hashtags because they can be very broad.
And you'll get a lot of repeats of the same information.
So it's better to focus specifically on the security researchers who are tweeting about this stuff rather than the broad hashtags.
Previously, I mentioned If This Then That.
Again, it's a free website where you can set up all of these applets.
It's a pretty easy setup: you just have to put in a couple of parameters, and you're good to go.
And here are a couple of examples of those parameters, like Google Project Zero: just plug in the RSS feed URL and then the email I want to send it to.
I've also set up a couple of other tags along with the title.
I set those up as flags on the mailbox: I created a rule on the mailbox to flag anything that starts with "RSS."
So it goes into the correct folder on the mailbox, and that folder is read by our SOAR engine.
And then it automatically kicks off the RSS playbook as soon as something comes in.
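On the SOAR side, the folder polling might look something like this minimal Python sketch using imaplib. The host, credentials, folder name, and the playbook handler are all hypothetical stand-ins for the Demisto listener.

```python
# Poll the dedicated folder on the monitored mailbox and hand each
# new message to the RSS playbook. All details below are placeholders.
import email
import imaplib

def kick_off_rss_playbook(subject, message):
    # Hypothetical stand-in for the SOAR playbook trigger.
    print(f"Would kick off RSS playbook for: {subject}")

def poll_rss_folder():
    imap = imaplib.IMAP4_SSL("mail.example.com")
    imap.login("osint", "app-password")
    imap.select("RSS")                       # folder the mailbox rule files into
    _, data = imap.search(None, "UNSEEN")    # only messages not yet processed
    for num in data[0].split():
        _, msg_data = imap.fetch(num, "(RFC822)")
        msg = email.message_from_bytes(msg_data[0][1])
        kick_off_rss_playbook(msg["Subject"], msg)
    imap.logout()
```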
A similar thing with Twitter, you put in the search criteria and then the email you want to send it to.
And then I put the Twitter tag at the front of the subject line, so I can filter those appropriately.
And then from the Demisto side, I create playbooks in there.
So I'm able to go out and grab all of the HTML of the page.
And here's an example of a playbook within Demisto.
And I set some values in there to clean up the data.
I keep everything from the title of the page down to the footer of the page, and everything in between.
And that cuts out some of the noise and stuff like that.
And then we create the case that goes to our analyst to work on and review.
Then we import it directly into Anomali via the API.
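That cleanup step, keeping the page from its title down to the footer, might look something like this minimal Python sketch using BeautifulSoup. The list of tags worth stripping is an assumption and varies per blog.

```python
# Keep the article body and drop navigation, ads, and script noise.
from bs4 import BeautifulSoup

NOISE_TAGS = ("script", "style", "nav", "header", "footer", "aside", "iframe")

def clean_article(html: str) -> str:
    soup = BeautifulSoup(html, "html.parser")
    for tag_name in NOISE_TAGS:
        for tag in soup.find_all(tag_name):
            tag.decompose()                  # remove the junk in place
    title = soup.title.get_text() if soup.title else ""
    body = soup.get_text(separator="\n", strip=True)
    return f"{title}\n\n{body}"
```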
Along with this, we've also created a process to identify new feeds that I've not come across before.
And the best way I've found to find new feeds is from news reports: Bleeping Computer, ZDNet, Wired, SC Magazine.
So I pull those in, and with those reports, I'll run a search on all the URLs on that page.
And if I already have that URL from a previous report, I close it.
I don't need it.
I already covered it.
If I don't, I get an email about it.
And it says, hey, do you want to add this URL as something to follow?
And if I decide, no, this is probably not something I really want, I'll get rid of it.
And if I do want it, I'll double-check: do I already have this feed?
If I do, I'll just run it through the normal process.
And if I don't, I'm going to create a new feed.
And I have another process that helps walk me through the steps of creating a new feed, setting up the applets, and finding the RSS feed URLs.
In some cases, it just goes ahead and finds those RSS feed URLs for me.
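The dedup check at the heart of that process can be sketched in a few lines of Python; the known_sources list here is purely illustrative.

```python
# Pull the links out of a news article and flag only the domains
# we haven't already covered, as candidates to follow.
from urllib.parse import urlparse

known_sources = {"unit42.paloaltonetworks.com", "www.fireeye.com"}  # illustrative

def new_source_candidates(article_links: list) -> set:
    candidates = set()
    for link in article_links:
        domain = urlparse(link).netloc
        if domain and domain not in known_sources:
            candidates.add(domain)  # email these out as "want to follow?"
    return candidates

print(new_source_candidates([
    "https://unit42.paloaltonetworks.com/some-report/",
    "https://blog.example-researcher.net/new-malware/",
]))
# -> {'blog.example-researcher.net'}
```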
So we're going to keep growing and evolving this into a more refined, better process.
And one of the things we're looking at doing is cutting out If This Then That in favor of something more reliable.
That's an open source platform called Node-RED, which Randy will get into in a little more detail in a bit.
Instead of feeding it through If This Then That, we'll feed it through Node-RED, and then the email process through Demisto, and then the API.
Then eventually, as we get better with Node-RED, we'll just cut those pieces all out.
And we'll just go directly from Node-RED right into Anomali.
SCOTT POLEY: The advantage here is for people who don't actually have a SOAR platform or something like this: now you're able to do SOAR-like things, at least for the intel collection process, with just this open source tool. Randy will get into how he's looking at making that work in a certain capacity.
So--
RANDY CORELLI: Yeah, thanks, guys.
All right, so the next steps, right?
Node-RED, what is this magical software?
It really is just an open source orchestration tool, much like CyberSponse or Demisto or any of the other ones that we might partner with.
We also like these open source ones.
And in this case, really, it's open source intel, open source tools.
That's why we're using it here.
But also, this is a pretty lightweight process.
Things like Demisto can be heavy-handed for a lightweight process like this.
So let's free it up to do the heavy lifting you might have for other tasks and use the simpler tool.
The initial part, the email, these guys can already do themselves with Node-RED.
But what is Node-RED?
Right?
It's flow-based programming.
You can see here, very visual.
They do have a browser-based editor.
It's very basic.
We'll see the UI here in a moment.
But each node really just has a single well-defined purpose.
So each item in your chain does one task and passes the message on to the next item in the chain, whether that's an input, an output, or a translation.
You can easily then wire together these workflows in that visual way and really see where it's going and not have to write code or read documents.
You can easily see just like this, this nice chain.
Of course, there are over 225,000 modules available; that's what their website says.
I've found a few of them, for things like Pi-hole.
Google Translate has one.
There's a whole collection in there people can check out.
So this is the basics here of the UI.
We see there over on the left all the different categories that come pre-programmed.
This also runs on a number of platforms, from the Raspberry Pi through Docker, Linux, Unix.
I have it running on Windows.
And it was very simple to set up.
Within about 20 minutes, I had it up and running in a very basic way.
But that's its basic UI.
Now, we'll go through here a bit of the flow for say, the Twitter type of setup.
So here we'll go to our social category.
And that's where we now have our Twitter node.
It is pre-configured already.
It knows how to talk to Twitter.
It just needs your details, what are you looking for, your credentials, that type of stuff.
So we can select that node.
We can drop it into our graph.
Then we can move on maybe to the next section here.
We do have to maybe change the data.
So in this case, I want just the generic function node.
And here we're going to process that raw tweet data.
We saw on the Demisto workflow, they had maybe five or six steps.
So I've simplified it here, but that is really the idea.
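The actual Node-RED function node is a few lines of JavaScript; for consistency with the other examples here, the same transformation is sketched below in Python. Field names follow the Twitter v1.1 tweet object.

```python
# Turn a raw tweet payload into a body plus context tags:
# one tag per hashtag, plus author and the full tweet URL.
def process_tweet(tweet: dict) -> dict:
    hashtags = [h["text"] for h in tweet.get("entities", {}).get("hashtags", [])]
    author = tweet["user"]["screen_name"]
    return {
        "body": tweet["text"],
        "tags": hashtags + [
            author,
            f"https://twitter.com/{author}/status/{tweet['id_str']}",
        ],
    }

example = {
    "text": "New #Emotet maldoc seen in the wild",
    "id_str": "1",
    "user": {"screen_name": "example_researcher"},
    "entities": {"hashtags": [{"text": "Emotet"}]},
}
print(process_tweet(example))
```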
And then the final step will be going over here to our Anomali ThreatStream category.
In this case, we're going to go and create a threat model entity-- that might be a bulletin, as they're currently doing, or an actor profile, a malware profile, a TTP, a signature, any of those objects that we have.
Then really once you have your nodes and you do your basic setup for them, you just wire them together.
So the output from Twitter is now the input into the next one.
The output from our function here will now be the input that goes down to our Anomali node.
Afterwards, once we're done with our flow-- and these flows can also talk to each other.
So you don't have to do everything in one flow.
You can have them communicate across them and really break that up.
But once you're ready, here we can do these different deployments.
And now it's live.
Right, your nodes are configured.
They're all set up.
You've pushed it to production.
And it's up and running.
When I saw their Twitter example, you could see, with a debug node, just a stream of tweets coming in.
So it's very cool.
So these are the default nodes that come with Node-RED.
I believe the full list does have more of them.
You can see the different categories: input and output, social, storage, even some of these other ones, like the sentiment analysis.
There are definitely nodes for almost everything.
Then if we look at what I'm planning for the Anomali functions, all of these are available currently within the Anomali user interface.
And the API as well can access all of this data.
The trick is really translating that into these individual little nodes.
So I do have 82 individual little Python scripts that are basically the wrappers, so I could make these nice little nodes.
You can see here we're looking at everything from observable management-- add, edit, remove-- even things like a quick function to make an observable inactive or mark it as a false positive, things you might want in that automated workflow.
Then going all the way down to other ones, like some user auditing.
Maybe you want to unlock a user.
We do have the ability to have a time-based lock with our platform.
But maybe you only want to unlock that user's ThreatStream account when the malware you've seen on their PC is cleaned up.
And so now you're waiting for an antivirus event or some other higher-level action, not just time.
All the way through some of the matching we might be able to do with Match-- those matches you've seen in your environment.
Let's leverage that data, right?
Maybe that's something we want to pull into a workflow and do specific things.
So really, every area of the platform: sandbox and rules, whitelists, imports.
This one here, the approve job, is one of the items that helps us solidify and simplify that API workflow, right?
They mentioned a lot of that manual interaction with those email imports and having to manage those sessions.
That now allows the function I call Import Then Approve.
So it gives you all of the full job tracking you might want with an approval process, but it also gives you full automation by approving the job for you.
So you can still have that two-step process with no manual interaction, and it just flies through.
You still get all that full auditing.
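Sketched in Python, that Import Then Approve two-step might look like the following. The import endpoint matches the earlier sketch, but the approval call, including the importsession path, payload, and response field names, is an assumption to verify against the current API documentation.

```python
# Hedged sketch of "Import Then Approve": create the import session,
# then approve it immediately, keeping the audit trail for both steps.
import requests

API_ROOT = "https://api.threatstream.com/api/v1"
AUTH = {"username": "your-user", "api_key": "your-key"}  # placeholders

def import_then_approve(datatext: str):
    # Step 1: create the import session (full job tracking preserved).
    resp = requests.post(f"{API_ROOT}/intelligence/import/",
                         params=AUTH, data={"datatext": datatext}, timeout=30)
    resp.raise_for_status()
    session_id = resp.json()["import_session_id"]  # field name assumed

    # Step 2: approve the session right away, so no analyst has to
    # click through it, while the audit log still shows both steps.
    approve = requests.patch(f"{API_ROOT}/importsession/{session_id}/",
                             params=AUTH,
                             json={"status": "approved"},  # payload assumed
                             timeout=30)
    approve.raise_for_status()
    return session_id
```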
This will definitely come over time.
We will start with the functions needed just for the FirstEnergy workflow and then build it out from there.
A little bit of a comparison between the two of them: If This Then That is easy to use, with no real experience needed.
It's all web applet-based.
You just fill it out.
There is no hardware because of that.
But you do also have no control over anything.
It's their API.
They limit the access.
It's their servers.
It's their availability. Node-RED, on the other hand, brings it into your environment and gives you that control, as well as, as we've seen here, a few additional functions and features. It does require some hardware.
I have it running in a very basic VM, running Windows Server 2012.
I also set it up on just a basic Debian Linux.
And for Windows, I installed Node.js and then ran the npm command to install Node-RED.
And then it was running.
It was just like three commands, and I was done.
CHRIS COLLINS: You can also set it up on a Raspberry Pi if you want it as well.
RANDY CORELLI: Right.
So just the deployment options, it's super lightweight, makes it a really cool open source tool.
SCOTT POLEY: Yes, this is something I was going to talk about.
One of the things we actually started playing with a little bit ties into the fact that we're an energy company.
And the Ukraine attack obviously was a big deal for us.
We had to understand what was really going on there, and we actually found there's a lot of really valuable intel that isn't written in English.
Otherwise, you're stuck with only what the government or the intel groups stateside push out-- only that kind of information.
But by finding these foreign sites and translating them, I was able to find extra intel and IOCs that weren't mentioned in the US reporting, which was a huge add over what we originally had.
And one of the ways I did this was to take the Google Translate URL for the page and run an import session in Anomali.
And it brought in the translated web page for that type of Intel.
The problem with this today is that Google has changed how they do their translation.
They put it in an iframe, I believe.
So it's not as easy to import that way.
But with some of the translation functions in Node-RED, we're hoping to facilitate the same thing.
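For reference, the original trick amounted to wrapping the foreign page in Google's legacy translate URL, something like the minimal Python sketch below. As noted, the iframe change means this exact approach no longer imports cleanly; the blog URL here is a placeholder.

```python
# Build the legacy Google Translate URL around a foreign-language page
# so the translated rendering can be handed to an import session.
from urllib.parse import quote

def translate_url(page_url: str, source_lang: str = "ru",
                  target_lang: str = "en") -> str:
    return ("https://translate.google.com/translate"
            f"?sl={source_lang}&tl={target_lang}&u={quote(page_url, safe='')}")

print(translate_url("https://example-ru-blog.example/blackenergy-report"))
```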
So here's an example of what we actually got from the site-- this is Russian-- when we brought it in.
You can see it's talking about the whole campaign; it was able to capture everything from the page, including the intel itself.
And that was, I think, one of the IPs that was actually mentioned in some of the callback stuff.
Translating this-- BlackEnergy and certain IPs, the type of intel that wasn't captured in standard reporting-- I was able to collect just by finding these types of sites.
So this open source collection process of identifying things just outside the norm is something we're probably going to be a little more aggressive with once we get the capability to translate these things.
So obviously it pulls pictures and everything across.
It can't translate pictures, so you're going to be stuck with those as they are.
But this type of use case is definitely one of the enhancements we want to bring to our standard open source collection, and it's probably something a lot of people aren't doing.
And with most of the actors being foreign anyway, there are probably people who are getting attacked by them regularly that we might be able to understand and learn more from.
So based on everything we've done here, Collins has set up Trusted Circles within Anomali for all this intel we've been collecting; we figure we get it for free, so we share it.
So if you guys want to subscribe to our Trusted Circles, you get the same type of intel.
I'll let Collins walk through the breakout here, but--
CHRIS COLLINS: So for all of these feeds-- I actually updated this recently, before the conference-- there's a Google Sheet for each one.
And if you go to those Google Sheets, they'll tell you exactly what feeds I'm following and what Twitter handles I'm following.
So you can get a good idea of what's already out there.
But the news reports are pretty self-explanatory: any news-based security articles.
Examples are Bleeping Computer, ZDNet, and Wired, as I mentioned before; and then the threat reports: Palo Alto Networks, FireEye, Malware Traffic Analysis-- the list goes on and on.
Generally, anything with an indicator or a TTP will, in essence, go into this bucket.
And then vulnerabilities I collect from US-CERT, the Zero Day Initiative, and some other vulnerability sources that I can't remember off the top of my head at the moment.
Anything that really mentions a CVE or an exploit will fall under this bucket.
So all of those first three will have a threat bulletin associated with them.
The remaining ones are all the Twitter data.
And these will capture all the hashtags, the tweet URL, the tweet author, and the full body of the tweet itself as tags in these circles here.
So they're pretty self-explanatory based on the titles here: the Twitter APT circle is anything around APTs.
I've got a list of some of the top common ones that we come across in Twitter.
So anything that matches those names will get filtered into this bucket.
Compromised accounts come from a Twitter bot called checkmydump, which checks Pastebin dumps for username and password combinations.
So anything found in there will get dumped in as a compromised account.
Compromised sites: anything that mentions a compromise in the tweet, or comes from another tweet bot called You May Have Been Hacked, will fall into this bucket.
Anything more generic, where I haven't really defined anything, will generally fall into the malware bucket.
And then the Twitter social engineering and phishing circle will cover anything that's phishing.
It'll cover your Emotet, your tweet bots, and your fake tech support or fake antivirus IOCs in those areas.
So please join those circles and-- SCOTT POLEY: These right here are the number of people already subscribed to us.
And we've gotten good feedback so far.
We do like feedback.
If you do subscribe, and you have things you want us to add-- because you found some source you'd like to add to our pool-- or there are things you don't like, tell us, and we'll try to figure out how we can clean it up.
Because it affects our operation; we actually operate on that data day in and day out.
So--
RANDY CORELLI: I'll just mention a little bit, too.
We chose the Trusted Circle approach because it really does now allow you to opt in.
As we've seen here, we have different counts for each of the different circles.
So some people maybe wanted the APT but not the social engineering.
Whereas if we just publish it completely to the community, which can be done in the Anomali platform, everybody gets it.
So if you have all these tweets, all this news coming in, that might spam certain users.
So we chose this semi-public type of mechanism to make this deliverable.
Now people, if they want it, they can come, join it.
If they find it's overwhelming, they can leave that circle.
And they'll stop getting that Intel.
So really, it gives the control to the users rather than just forcing it upon them and saying, hey, by the way, here's a ton of new intel.
And all those tags and things we're creating are searchable and filterable for your integrations, so you can definitely leverage those a little more easily.
If you're trying to protect against Emotet or Hancitor or any of these malware families, those tags come in very useful as filtering or search criteria.
So, yeah, thank you.
SCOTT POLEY: Thank you.