Webinar

Honeypots and The Art of Deception: Detect ‘18 Presentation Series

After you have watched this Webinar, please feel free to contact us with any questions you may have at general@anomali.com.

 

My name is David Greenwood.

I'm going to talk this afternoon about a presentation I'm calling The Art of Deception.

And really, what I want you to go away with, having listened to this presentation, is an understanding of how you can use deception techniques to source high-value threat intel.

So a little bit about me before we start.

I work in the product team here at Anomali.

And I'm really interested in psychology and why people act the way they do and often act irrationally.

So that applies to everyone, right.

No one in the room is what I would deem normal.

We all have our own quirks, and I'm quirkier than anyone.

But when we talk about deception, people initially associate it with bad connotations.

Does anyone know who that is on the screen?

Bernie Madoff.

Sorry.

It's not Bernie Madoff?

No, it's not Bernie.

The guy who defrauded the airlines.

Oh.

Catch Me If You Can.

Catch Me If You Can, right.

Yeah.

So a guy called Frank Abagnale.

They're the same age though, aren't they?

Yes, you've probably seen the movie with Leonardo DiCaprio, who doesn't look anything like that in the movie.

That's actually quite a current photo.

He does a lot of tech talks now.

He goes round to the big tech companies and talks about his story and tells them sort of the same storyline in his movie.

I've seen him talk quite a few times and it's quite inspiring.

But for those of you who haven't read the book or seen the movie, Frank Abagnale essentially was probably one of the most well-known deceivers of his time.

I think he took on the identity or the profession of nine different jobs-- from an airline pilot to a doctor to a lawyer, to many others.

He was very good at deceiving people, to the point where he'd get free trips around the world pretending to be a pilot.

And pretending to be a doctor, he would turn up at hospitals to get free prescriptions.

And stories of deception like this are very popular in Hollywood.

Stories like Jordan Belfort, the Wolf of Wall Street, who deceived many people with his penny stocks.

As well as movies like Ocean's Eleven.

And I think I saw on the plane there's an Ocean's 8 now, which is a version with an all-women cast, which is great.

But Hollywood loves these types of stories.

And it kind of goes with the whole idea that deception is often given a bad name.

But there's another side of deception.

And deception can be used for good reasons.

I mean, if you read some of these synonyms here-- deceitfulness, double dealing, fraud, fraudulence.

We all work in security.

We're used to these type of terms.

But then again, it's craftiness, wiliness, artfulness.

Deception has a good side and it can be used for good.

And when you start looking at deception in detail, there's tons of interesting stories and ways in which people have gone about confusing or deceiving the enemy.

So if you take a look closely at this photo you see there's actually four guys holding up this tank here.

So this is a picture taken during World War II.

So some of you might be aware of the Ghost Army.

It was a brigade of troops.

There was about 1,000 of them in total.

And this troop was tasked solely with deceiving the German lines.

So they had an arsenal of different techniques to do this-- from blow-up tanks, to runways they would construct, all the way up to fake radio communications.

They would pretend to be communicating over the radios, knowing the Germans would be listening in to confuse where they would put their troops.

So the Allies were fighting on one front and the Ghost Army were fighting on another.

Essentially splitting the German front line into two.

Or at least with the intention of splitting the front line into two.

And they've largely been credited with helping to win the war.

The endeavors these guys went through and sort of the scale of their deception was so big that it did live up to what they were trying to achieve.

And essentially, got the Germans fighting on the wrong front a lot of times.

Is anyone in the room from New York, New York state?

Are you familiar with a place called Agloe?

So it doesn't exist.

This is another nice example of deception.

So in the early 1930s, before satellites were producing maps, before Google Maps came on the scene, map making was an onerous endeavor.

If you think about how difficult it would be to map America with an army of people, there's a lot to it.

So in the early days of map making there was a company called General Drafting Company, who produced the map you see up here.

And they would send people around the country to produce these maps.

And because they were so difficult to produce, what happened was, a lot of other map makers would look at existing maps and simply plagiarize them because a place is pretty static, right?

It doesn't change too much over time.

So what does it matter if I copy someone else's map?

Essentially, we're just sharing the same sort of thing.

So General Drafting Company put in a copyright trap, as they call it, into their maps.

So Agloe.

They took the names of two of the guys who were producing the map and put them together.

And they produced this map and they published it.

And then they sat and waited.

And sure enough, just five years later, there was a company called Rand McNally, who also produced maps at the time-- and it showed up.

Agloe, New York showed up in their map.

And it was actually-- I mean, even though it's a publicized deception, this actually showed up on Google Maps for a short time as well.

So not even the big giants, with hundreds of satellites at their disposal, were able to catch that.

In the UK-- so this is a picture taken from what we call the A to Z map.

So it essentially maps all of the streets in London.

And there's a rumor, although it has now been substantiated, that they have what we call trap streets in these maps.

So again, copyright traps that don't exist in real life that have been designed to catch people out who go in and try and copy their maps.

And it's not just unique to the print world.

So I run a website.

It maps the values of vintage guitars.

And it's a big database of prices.

And one afternoon, I noticed the traffic to the website spiking up.

There were suddenly 5,000% more views to the website.

And I get a very small number of views normally anyway.

So it was quite exciting.

But when I looked into it, it was all coming from the same person.

Someone was scraping my content and putting it onto their own website.

So I thought, how do I deal with this?

Do I put in place some sort of rate limiting?

Or do I try and identify this person and block them?

But the IP was changing, and the ways in which they were scraping were changing with each run.

So what I did is just put fake records in the database.

It's like, this content was stolen from-- because these people are doing it in an automated fashion.

They're not looking at the results.

They're simply cloning it and sticking it into their database.

And these guys were selling it on as well for money.

So all I did-- and this is in public-- was put in a Google Alert for the term I'd used.

And every time these guys were copying my work it would just flag up.

And I would get an email to say, hey, this is someone copying your work.

And I would send them a friendly email to say, hey, why are you plagiarizing my work?

I'll take legal action.

And sure enough, it's very quickly taken down.
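The canary-record trick described above can be sketched in a few lines of Python. The field names and the `notes` placement are purely illustrative assumptions-- the actual database layout isn't described in the talk-- but the idea is the same: plant a unique nonsense token, then watch for it appearing on someone else's site (for example via a Google Alert on the token).

```python
import uuid

def make_canary(label: str) -> str:
    # A unique, meaningless token that will never appear in legitimate content.
    return f"{label}-{uuid.uuid4().hex}"

def plant_canary(record: dict, canary: str) -> dict:
    # Hide the canary in a field that automated scrapers copy blindly.
    tagged = dict(record)
    tagged["notes"] = f"Price history compiled by {canary}"
    return tagged

def find_canary(page_text: str, canary: str) -> bool:
    # If the canary shows up in someone else's page, the content was scraped.
    return canary in page_text
```

Because the token is random, a hit is effectively a zero-false-positive signal that the record was copied verbatim.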

And this idea of a fictitious entry-- I mean, it's nothing new.

It's been around for a long time.

And if you look on Wikipedia, the entry for fictitious entries comes up with lots of nice synonyms.

So we have things like mountweazel, which was taken from an encyclopedia where they invented a woman called Lillian Virginia Mountweazel.

And nihilartikel, which essentially translates from German as "nothing article."

But essentially, copyright traps have been a good way to catch people out.

Another question for the room.

Does anyone know who this guy is?

No?

So this is a guy called Clifford Stoll, if that rings any bells.

So this is him testifying before Congress in, I think, 1989.

So he wrote a really famous book called The Cuckoo's Egg.

If you haven't read it, I would definitely recommend it.

It essentially documents his story of finding someone who's infiltrated his network.

So he used to work at the Lawrence Berkeley laboratory.

And he noticed suspicious activity on the network, and went from that through to realizing that the person who had infiltrated his network was actually infiltrating many networks across the country, including very sensitive government assets.

And he was probably one of the first examples of the deployment of a honeypot.

Or the first person to deploy what we now would call a honeypot in today's world.

So he commandeered 50 terminals from around his laboratory one weekend and set them up to try and catch the person and learn a bit more about what the person was looking for on his network.

So using those 50 terminals, he tracked that person through them and tried to get a picture of what they were looking for.

He would observe what files they were opening, what directories they were navigating to, what they were opening on the system, in an attempt to learn more about the hacker.

And honeypots have developed over time to include elements of that.

There's lots of honeypots that exist today.

The idea being that once someone's breached your network, or once someone's trying to look for something, you take them away from the high-value targets into a honeypot, where they can be contained, controlled, or monitored.

And here at Anomali, we have our modern honey network.

For those of you who don't know, the modern honey network is a really simple way to deploy a lot of very well known honeypots.

So it's a simple script that will deploy them.

And we have our own deployment of these honeypots around the world, also within our own network.

And last year-- it was in January-- I was monitoring the activity on these honeypots.

And we got 85 million hits.

So 85 million-- I would call them attacks.

Not all of them were attacks.

But 85 million hits to these honeypots.

If you do a Google search for honey network, you'll find this and you can download all the packages.

I won't speak too much more about that.

The problem with honeypots-- for how many sensors?

OK.

Yep.

The problem with honeypots and the modern honey network is a lot of people don't put much time into them.

They simply grab the binary, install it, and let it run, which is really easy to detect.

Here's a Shodan search for the term "Mouser Factory."

So there's a honeypot called Conpot, and it mimics an industrial control system.

So it's meant to mimic power plants or energy plants.

And the default settings come with a plant ID called "Mouser Factory."

And you can see them in a simple Shodan search for "Mouser Factory."

So immediately, as an attacker, I can see that these are honeypots.

And a lot of the time it's very easy to identify a honeypot for this reason.

In fact, Shodan has something they call a honeyscore, which actually rates an asset by how much it thinks it's a honeypot.

So they'll say, hey, out of 100, this scores 70.

So we think there's a chance it might be a honeypot.
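The honeyscore lookup mentioned above can be sketched against Shodan's REST API. The endpoint shown is Shodan's Labs honeyscore service, which returns a probability between 0.0 and 1.0; the 0.5 threshold in `interpret` is an illustrative assumption, and an actual Shodan API key is needed before `fetch_honeyscore` will work.

```python
import urllib.request

SHODAN_HONEYSCORE = "https://api.shodan.io/labs/honeyscore/{ip}?key={key}"

def honeyscore_url(ip: str, api_key: str) -> str:
    # Build the request URL for Shodan's honeyscore endpoint.
    return SHODAN_HONEYSCORE.format(ip=ip, key=api_key)

def interpret(score: float) -> str:
    # Shodan returns a probability between 0.0 and 1.0;
    # the 0.5 cutoff here is an arbitrary illustrative threshold.
    if score >= 0.5:
        return "likely honeypot"
    return "likely real host"

def fetch_honeyscore(ip: str, api_key: str) -> float:
    # Network call -- requires a valid Shodan API key.
    with urllib.request.urlopen(honeyscore_url(ip, api_key)) as resp:
        return float(resp.read())
```

An attacker can run exactly this check against your sensors, which is one more reason not to leave a honeypot on its defaults.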

Honeypots then started to evolve.

And the concept of a honey token came into place.

So whereas a honeypot is an actual physical-- well, not a physical machine, but a machine, or something designed to mimic a machine-- a honey token is really just an asset of some sort, not necessarily a computer.

It's just a piece of information.

So honey tokens might include email addresses or files for example.

So you might place a file on your network, Anomali customer list, and then monitor if people are opening it.

If people are opening it, why they're opening it, who's opening it.

Again, if there's someone inside your network and they're looking for very sensitive information like customer lists or IP, and they're opening a file that they shouldn't be-- that should never be opened-- that's a big way to expose them in your network.

So again, using this example, this is a file that has got nothing of value in it.

It's just a Word file with nonsense in it.

But if it's touched, I can see that it's being touched.

And because it's a benign bit of information, I want to know who's touched it because no one should really be touching it.

Similarly, with databases, there's this whole concept of honey tokens in a database.

I put a fake email address in a database, and if that fake email address is then seen anywhere else, it's probably a big indication that my database has been breached at some point.

So if I come up with a completely unique email that, again, I've just thought of on the spot, stick it in my database, and then plug it into something like Have I Been Pwned to monitor for that email address being seen elsewhere.

Again, it's a very nice, or very easy, way of identifying that someone's got access to that data who shouldn't have.
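The canary-email idea above might look something like this. The address format is an illustrative assumption; the breach lookup uses Have I Been Pwned's v3 API, which requires a paid API key and treats an HTTP 404 as "never seen in a breach."

```python
import uuid
import urllib.request
import urllib.error

def make_canary_email(domain: str) -> str:
    # A unique address that exists nowhere except in my database.
    return f"{uuid.uuid4().hex[:12]}@{domain}"

def seen_in_breach(email: str, api_key: str) -> bool:
    # Queries Have I Been Pwned (v3 API, key required).
    # A 404 means the address has never appeared in a known breach.
    req = urllib.request.Request(
        f"https://haveibeenpwned.com/api/v3/breachedaccount/{email}",
        headers={"hibp-api-key": api_key, "user-agent": "canary-monitor"},
    )
    try:
        urllib.request.urlopen(req)
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise
```

Run `seen_in_breach` on a schedule; the day it returns True for an address that only ever lived in your database, you know that database has leaked.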

Where it gets really interesting, though, is when you combine the two together.

So you have a honeypot acting as a real system, but also having associated honey tokens used to access that machine.

So at home I've set up a file sharing honeypot.

So I have a machine with a file share on.

So the actual honeypot itself.

It's just a machine designed purely to host a honeypot.

But on all the laptops that I own and all the iPads, I have credentials to access that share.

So unless you have the credentials, you can't access that share.

Again, they're secret credentials, not default admin passwords.

So what that means is, for someone to access that honeypot, they have to be able to pick up the credentials from one of my local machines or one of my local assets to be able to access it.

So I know that anyone who's going to that honeypot, and anyone who authenticates to that honeypot, has first pulled that information from one of my machines and is also looking to utilize that information.

And everyone in my network, or everyone who uses my machines, knows that these exist, and they don't touch them.

So what that means is anyone accessing that honeypot is essentially someone malicious, or at least someone that shouldn't be there.
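One way to push this further-- a hypothetical extension, not something the talk claims to implement-- is to plant a different credential on each device. Then a single login attempt at the honeypot tells you which machine the attacker pulled the credentials from.

```python
import secrets

def issue_credentials(assets):
    # One unique username/password pair planted on each asset.
    creds = {}
    for asset in assets:
        username = f"backup_{secrets.token_hex(4)}"
        creds[username] = {"asset": asset,
                           "password": secrets.token_urlsafe(16)}
    return creds

def compromised_asset(creds, username_seen):
    # A honeypot login with this username means that asset's credential
    # store was read -- no legitimate user ever types these names.
    entry = creds.get(username_seen)
    return entry["asset"] if entry else None
```

Because each username is random and device-specific, the honeypot's auth log doubles as a map of which endpoints have been raided.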

Which means, I get very few false positive alerts because if it's-- as I say, if it's accessed by someone, then I know that they're more than likely malicious.

What I then do, essentially, is monitor the audit logs from that machine.

So every time someone tries to authenticate, I get a log.

So that includes all the nonsense authentications and potential brute-force attempts.

But it also includes all the successful attempts.

So I have a Logstash instance that pulls in the logs from that machine.

And I can then identify the successful authentications and push them into the Anomali threat platform.

So I have my own feed set up that says, hey, whenever there is a successful authentication, capture the IP address of the origin of that request and stick it into Anomali.
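The filtering step in that feed-- keep only the successful authentications and extract the source IPs-- can be sketched as below. The log line format here is invented for illustration; real Windows or Samba audit events have their own formats and would need their own parsing.

```python
import re

# Assumed log format for illustration only -- real audit events
# (Windows Security log, Samba audit, etc.) differ.
LINE = re.compile(
    r"(?P<result>SUCCESS|FAILURE) auth user=(?P<user>\S+) "
    r"src=(?P<ip>\d+\.\d+\.\d+\.\d+)"
)

def successful_auth_ips(log_lines):
    # Brute-force noise (FAILURE) is dropped; only IPs that actually
    # authenticated -- i.e. used the stolen credentials -- become
    # indicators worth pushing into a threat platform.
    ips = set()
    for line in log_lines:
        m = LINE.search(line)
        if m and m.group("result") == "SUCCESS":
            ips.add(m.group("ip"))
    return sorted(ips)
```

Because no legitimate user ever authenticates to the honeypot, every IP this function emits is a high-confidence indicator.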

And then I can share it with people.

And I can put it in the trusted circle.

Or again, I can use it with all the integrations that I have out of the threat platform.

I can also in conjunction with Anomali Enterprise, look back to see if I've seen that IP address anywhere else in my network.

So let's say, for example, my honeypot catches someone trying to break into it, and successfully breaking into it.

So I've logged their IP in the threat platform.

I also have logs from all my machines that sit inside my house feeding into Anomali Enterprise.

That includes all my smart connected devices, as well as my laptops.

Which means, I can then cross-reference that IP that has accessed the honeypot against all those other assets I own.

So I can see if that particular attacker is purely limited to the machine they pulled the credentials off and the honeypot, or whether they've been anywhere else.
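The cross-referencing step is simple set intersection once the data is in one place. A minimal sketch, assuming you've already collected per-device lists of source IPs (the device names and data shapes here are hypothetical):

```python
def cross_reference(indicator_ips, device_logs):
    # indicator_ips: IPs that successfully hit the honeypot.
    # device_logs: {"device-name": ["source-ip", ...]} -- connection
    # sources seen by every other asset on the network.
    hits = {}
    for device, seen_ips in device_logs.items():
        matched = set(seen_ips) & set(indicator_ips)
        if matched:
            hits[device] = sorted(matched)
    return hits
```

The result is a per-device map of where the honeypot attacker has also been seen, which is exactly the starting point for a hunt.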

Because if someone's looking to exploit my network, especially a corporate network, they're going to start looking around.

And I want to be able to see if they've been looking around anywhere else.

And then go hunting them based on where they've been looking.

So again, using Anomali Enterprise, I can look back to see where they've been.

And I can look across to see where about in the network they've been.

And where it gets really interesting-- and one area I'm particularly interested in-- is this concept of HACK BACK.

So this is a real legal gray area at the moment.

And there's bills going through Congress about whether HACK BACK is going to be legal or whether it's going to be illegal.

Essentially, there's some element of HACK BACK going on in a lot of organizations right now.

HACK BACK itself is, I guess, a media-glamorized term to some extent.

Essentially, all HACK BACK means is when someone is maliciously accessing your machine, you deliver some sort of payload onto their machine to gain further intelligence.

So you're not necessarily hacking them.

Not really a big fan of the term.

But you're essentially gathering more intelligence about them.

So if we take the example of a benign file being placed on my machines-- so I have a file named Anomali Customer Lists.

And say I'm monitoring the logs of people opening that.

That's interesting.

I can see that someone's opened it who shouldn't have.

But I'll only really have an IP address tied to that.

What would be really interesting is, if that person opens the file, what if I could run a macro that deploys something to that machine that extracts even more information?

So I can start pulling Intel from that machine to tell me more about them.

So I might be able to gather a bit more about their tactics or who they're targeting, or their procedures, what tools they're using inside of their machine to access my network.

And I can also do this, again, just using honeypots: I get someone to access a honeypot.

As soon as they hit the honeypot, it deploys a payload onto their machine, which can then be used to extract more intelligence about what they're doing and when they're doing it.

Where it gets really interesting with this sort of stuff is performing sort of man in the middle type endeavors, in that someone accesses a machine that they shouldn't have.

A payload gets deployed, which then controls or gives the illusion of a completely different environment.

So if someone starts trying to access my network, that sort of man in the middle can show them a completely different bit of information than what's actually on my network.

So I can again spoof my network and spoof what that person is going to be looking at in real time based on the payload that's been deployed on their machine.

Which means, I can then essentially control where they go, what they're looking at, and how I interact with them.

Because the longer I can keep the attacker contained and away from my high-value stuff, the more intelligence I can gather from them.

But I can also protect my network against them.

And that wraps up my presentation.

[APPLAUSE] So I'll open the floor to any questions.

The term, legal gray area.

Do you want to expand on that?

Because, as you well know, it's an issue that applies to people.

They have to defend themselves sometimes, even from legal companies who are spamming you with advertisements.

And in many cases, people are tired of it.

And they're fighting back or acting back in order to also collect information in order to file a claim against them.

Or are basically brute forced in ways that are-- or in many cases, like, spamming or slamming.

So I'm curious.

I'm sure you probably know exactly what I'm talking about.

Yep.

There's a lot of new gray areas in that area.

I'd love to hear your thoughts.

Yeah, so it's one that's evolving quickly.

Ultimately, in the UK, for example, there's this whole idea of the Computer Misuse Act.

In the UK, if you perform any sort of HACK BACK, you're already breaking the law.

So it's hard to perform any activities like that.

You are breaking the law.

And as a result, in the UK, for example, none of that goes on.

In the US, it's slightly different.

As I say, there's acts going through Congress right now that are looking at putting some sort of boundaries in place for what people can and can't do.

Now, there's also a lot of start-ups working in this whole deception world right now that are starting to offer products.

They kind of do a bit of what I'm doing, but sell it packaged as an enterprise solution.

And they're driving the debate a lot.

What I would say for the context of this presentation is I'm not advocating HACK BACK.

I think at an enterprise level it becomes very blurry because you have compliance teams.

You have legal teams in an organization.

And when you start telling them that you're deploying things or you're looking to infiltrate other assets outside of the company, that in itself can be a challenge.

From an actual legality standpoint, again, I don't know enough about the laws to say what you can and can't do.

So the reason I put gray area is simply because I don't want you guys leaving here thinking, oh, that's a great idea.

And then someone coming back and saying hey, you told me I could do this.

I'm totally telling them you said it was OK.

You didn't see the big red disclaimer.

But yeah, I think-- to that point, no, I think in the next year or two, this sort of stuff will become really hot.

And there will be a big discussion, especially in the security community around this type of activity.

Here at Anomali, I mean we're even talking about it.

Potential ways to gather more intelligence or better intelligence for our customers.

And this is one of the ways we can potentially do that.

Did I not just hear, two hours ago, the UK is setting up a 2,000 man operation in order to be offensive in areas of the UK?

And did not the United States just mention their offensive approach to go in and attack and hack the 42, in order to get back?

All in all, there's been a lot of talk about this offensive behavior-- to fight back or defend back or hack back or hand-to-hand hack.

It seems like it's going in that direction, too.

Yeah.

The UK just talked about it, just a couple hours ago, about setting up full operations to-- Yeah, I saw.

--deploy offensively.

Yup.

Yeah, I don't know enough about that side of things.

But yeah, as I say, I would be almost certain that governments are involved in HACK BACKS of some sort or offensive hacking techniques.

As General Powell said, [INAUDIBLE].

Yep.

David?

Yes.

Question for you.

So the thing about deception is, it takes resources.

If you want to do it well-- the reason we were successful in World War II, I'm going to say it, is because we put so many of our resources towards the deception.

Those same resources could've been spent building planes.

Instead, they were holding up inflatable tanks.

So from a cyber perspective, what do you think the balance is towards putting resources towards deception versus putting resources towards other information security?

And what do you think is a healthy balance for organizations of different sizes to implement as well?

Because if you get a whole bunch of honeypots and never configure them, you're gonna show up in a honeypot list.

But if you take all of your CSIRT and dedicate them towards setting up thousands of honeypots, you're not managing incidents, right?

Yeah, I think the argument comes down to defensive versus offensive security.

Traditionally, a lot has been defensive.

How can we protect ourselves.

And here at Anomali, we offer products that do just that, for the most part.

We were born out of providing intelligence that you can use to protect your networks whilst also identifying breaches and then potentially using that information for hunting.

But primarily, it was a lot around defensive.

And I think the mindset still of a lot of the security community, myself included, is very defensive.

You go for the defensive stuff first.

You meet all the compliance requirements around that.

And then you potentially think of the offensive stuff as a side.

The reason I think the offensive side of things hasn't taken off or is slow to mature is simply because to date there haven't been many commercialized or very widely known offensive solutions.

Honeypots-- everyone knows what honeypots do.

But as you allude to, the actual time, effort, and maintenance required to keep a honeypot network up is really high.

A lot of people might deploy them and even maintain them, but the noise they generate is simply too much, and they get abandoned pretty quickly.

I think, as I say, as the market moves and more people start developing these offensive-type tools, that balance will shift.

And in all honesty, I think defensive security is good.

I think offensive security can probably take a large proportion of the defensive stuff out of the equation.

And that if you can contain people inside areas of your network and monitor them, as opposed to protecting against them getting in and hoping they'll never access the network, you'll get a much better hit rate on the high-value stuff from the people really looking to do damage.

And again, if you have the tools in place, it'll be a lot easier than it is to do now.

So I think that's still some way off.

But yeah, I think, hopefully, going forward, the balance will be readdressed to more of a 50/50 approach.

So at what point do you think it warrants an offensive approach, as opposed to, like nowadays, with Threat Hunter passkeys, you can beacon an individual.

You know, many agencies and companies are like the anonymized users.

There's also tactics you can use that don't require targeting a user with offensive things, because, attribution wise, how do we know that, if you're going after someone's infrastructure, that it's actually their infrastructure?

When do we make that final decision in your mind of nothing else has worked, we need to go on full offensive?

That's a good question. I mean, I'm not sure I really know the true answer to that.

To me, going offensive is just gaining more intelligence.

Now, as you say, things like beaconing, you can get that sort of stuff today using other techniques.

I think it ultimately depends on the type of attacks you're falling victim to, or where you see big gaps, and what you're able to pull from machines.

If beaconing is working, then that's a great approach.

And if your threat hunting teams are getting enough intel, then why go any further?

But I would say, ultimately, in my view, the more intelligence and the more context you can get about what someone is doing, or what they potentially plan to do, or what they can do, the more context it adds to an investigation.

It adds more context for the threat hunter looking at a particular incident.

Whether or not that's used, again, is up to the person performing the investigation.

But more information, in my opinion, is better.

Sorry.

I was gonna say, doesn't it make more sense to have a type of standard set in place that you have to follow, like before they carry out things like that?

Or I just liked Pepsi on Instagram, so they hacked my mom's Facebook, or something?

You know, like if there's a standard we should set in place for stuff like that, you know?

If that makes sense at all.

So, I'm-- yeah, I'm not 100% sure on what you're asking me.

If you're going to-- that's why I used a really bad example-- but if your decision is to go into the offensive fray, and you've gone through all the steps and find yourself having to do that, is it worth trying to go and formalize that?

Or is it better to leave it to that analyst and then go, it's kind of up to you?

I don't know the answer.

Again, I would say that would differ between organization to organization.

I think, if you put it in the hands of the analyst, it's a good step.

If you give them the opportunity to use that type of thing and they can legally do it without fear of redress, then that's good.

But ultimately, if there were some standards in place or some sort of automation that said, hey, when you want to go offensive we can do that very easily, and something will be triggered, then again that's equally as good.

As long as you're getting the intelligence at the end.

I think, for me anyway, I don't know enough.

It's too early days for me to make a call as to how it should be delivered.

Is your name Redwood or Greenwood?

Greenwood.

I was seeing if anyone would notice that.

So you're most observant.

Are there any particular tools that you were using, open source for honeypots, anything like that?

So is the question, what open-source honeypots am I using?

For my network, no, I built it myself so I'm just using a Windows file share.

So it's not open source, but I can monitor it all from my machines.

But, I mean, there's tons of honeypots out there.

As I say, check out the modern honey network.

I think there's about 30 open-source honeypots that come in the deploy script for that.

And some of them are file shares.

But there's a ton of them in there.

Hey, man.

I know you use the word offensive and defensive, but what about phonorecords?

In what sense?

Well, [INAUDIBLE] been around for years.

You know, low interaction honeypots.

How come people aren't-- you know, those more commercialized companies-- pushing that type of technology?

How come they aren't pushing that-- Yeah.

Yeah, I think it comes down to the value of intelligence they generate.

Like, low interaction honeypots tend to generate a lot of noise.

And I think for that reason, a lot of commercial companies don't want to pick them up because the market thinks, hey, if I take a honeypot that's going to-- or I take a commercial honeypot that's going to generate a lot of noise, what value is it going to be to me?

Is it really worth my money to invest in this over something else?

But at its inception?

You know-- yeah, I don't know the answer.

As I say, at Anomali we're thinking about that type of thing.

But it's something that's never come to fruition.

I think the market isn't asking for it yet.

Hey, man.

I heard Anomali may be integrating-- not integrating, but obtaining and producing a deception feed maybe.

Is it in progress?

Potentially.

There's deception technologies out there.

There's a company that I know of called Cymmetria that produces, essentially, honeypots.

And they sell them as commercialized packages.

We could put that intelligence into Anomali.

So every time a honeypot is accessed in their network, we can pull that into the threat platform.

As to whether there's a specific feed to generate that information-- I don't know much beyond that.

But what I would say to that point is, that type of activity is really unique to an organization.

I mean, yes, you can push honeypot data into Anomali.

And we push our own modern honey network data into Anomali.

But it can be very noisy.

The real value of the feed data there is the personalization.

It's unique to you guys or unique to your organization.

And very relevant to your organization.

So those type of feeds you can pull in using some of the new stuff we're bringing out, like the developer tools, the SDKs.

But also again, there's companies out there that offer commercial solutions that we can pull the intelligence in from.

That's what I was kind of wondering-- if you have this kind of feed, and you're aggregating it in with your Anomali subscription, are there certain characteristics, certain honeynets targeted at one relevant task, more so than just the bulk of it that can be really noisy, that we can maybe filter on when we're ingesting it?

At the moment, no.

So we have our Labs feed, which is our modern honey network feed, which we push into Anomali.

But that really is a catchall.

I mean, there's no uniqueness about it.

We've simply deployed a load of honeypots and we're catching attacks to them.

The assets themselves have no real context when someone's accessing them.

So they're not really victims of targeted attacks.

They're really very generalized in the data they're generating.

That's not to say it's not useful, because it is.

But it's not very tailored.

Cool.

Well, I think that's a wrap then.

Thank you very much guys.