After you have watched this webinar, please feel free to contact us with any questions you may have at firstname.lastname@example.org.
Thank you for coming.
This is basically a theoretical talk about how you build trust, and about building trust to support information security.
I'm also going to be talking about this trust shift that's happening right now culturally and explain why that matters and how we need to apply this to information sharing and build this in.
I was thinking, it's a bit like security needing to be built into all aspects of development, like DevSecOps.
You can think of needing to think about trust in every aspect of your business and particularly in this distributed trust model that we're now experiencing.
So just as an anecdote, has anyone here done online dating?
It's quite-- maybe it still has a stigma about it?
I don't know.
I know lots of people that have gotten married.
But anyway, in online dating, people are often afraid of disclosing too much information and of what other people might find out about them, but unless you do that, you're not going to be matched with anyone.
And people are going to be searching.
If you're just a generic profile, they're not going to find you, so it doesn't work unless you share information.
And this is just a funny thing I found from the FTC about dating scams.
But I actually ended up reconnecting with a friend of mine from law school because he found me through an online dating site.
We never dated, but it was just because of mutual interests that ended up reconnecting us and we're still friends today.
So you're all security professionals, and at the top of our minds are many concerns around confidentiality, privacy, and regulated data.
All these things.
I'm sure you know.
And so sharing seems to be in conflict with this and maybe isn't at the forefront of everyone's minds, but it is really critical to share information-- we've had lots of talks on this.
It's a force multiplier: there are more threats out there than you can keep up with, and there aren't enough people.
So we're in the middle of the trust shift.
A lot of the research that I've been doing comes from Rachel Botsman, who wrote a really great book called Who Can You Trust?
And she wrote a book before that about the shared economy-- she basically predicted the shared economy at the start of the financial crisis, and we're about 10 years to the day from when Lehman Brothers went down.
And that kind of marks the beginning of this shift in our cultural trust model.
Even before this institutionalized trust, we had localized trust, which is like you're in a village with people and you know people and you work with people face-to-face and that's how all transactions get done.
But as we expanded and became more globalized, trust became an institutionalized trust model, where you have governments, institutions, big business.
And that's a very top-down structure where you're placing all of the trust in that one top entity.
So since the start of the financial crisis, we've seen a lack of trust-- people questioning the institutions that we previously put our trust in-- and a rise in this shared economy, which she coined the distributed trust model, basically.
Technology in particular is really facilitating exchanges of ideas and information and goods and services that would have otherwise gone unused, by providing a platform where people can connect, but in order to do that, they need to trust each other.
So some of the evidence for the trust shift that's currently happening.
We have increased demand for accountability, transparency, authenticity, equality, honesty, facts.
I'm sure these are all things you're hearing about all the time.
And so what are some businesses that you can think of that you think leveraged this decentralized trust model?
[INAUDIBLE] So Anomali, through the trusted circles.
Yeah, something like that.
So here are some, like Airbnb.
It's, you know, people with places to stay connecting via platform.
But in order for it to work, you have to be able to trust this other person and their home.
You know, when it first started, everyone was like, I don't want to stay in some stranger's home.
How do you trust a stranger enough to go stay in their home?
Uber, same thing, only with cars.
Even GitHub, trusting other people's information.
Wikipedia, same thing, trusting information.
Kickstarter is a sort of more collaborative financing model.
And Yelp, particularly because of the reviews and ratings.
I mean, we rely on-- Yelp only works because other people are providing the information there and we go there to use that.
So what is trust?
There's a few different definitions and theories about this.
I want to point out first of all that I want you to think about not trust but trustworthiness.
So if I asked you to trust the person sitting next to you, you're going to say, for what?
Trust needs to be contextual and specific.
You trust someone to do something in particular or some entity to do a particular thing.
There's also heart versus head, or affect-based or cognition-based trust.
You can think of this coming into play particularly when doing business in Asia, where you need to rely more on relationship building and heart-based trust.
And in the US, it's a little more cognition-based.
But all the time, both of these things are at play.
Again from Rachel Botsman.
So the components of trust that she breaks down are competence, reliability, and honesty.
And then I also saw a presentation by Wendy Nather from Duo Security.
She's a CISO there.
When I was at THOTCON this year, her presentation was all about trust as well, basically saying that any time a human is involved, you have to account for trust.
So you ask, what are you trusting them to do?
What conditions need to be true and for how long?
So Rachel Botsman specifically talks about technology.
So technology is enabling millions of people across the world to take a trust leap.
So I'm going to talk more about how do we get people to take this trust leap to share information back and forth, particularly in the ISACs.
So this is how we do it.
Climbing the trust stack.
And Botsman's definition of trust is "the confident relationship with the unknown." So this is her image as well, saying you're taking this leap of trust from the known to the unknown, and in between you have this climbing the trust stack, which involves trusting the idea, trusting the platform, and trusting the other user.
So I'm going to go through each one of these in more detail to explain how we build trust to support bi-directional information sharing.
So first of all, trusting the idea.
That bi-directional information sharing is fundamental to security.
I think a lot of people here know this.
Sometimes, maybe when a new ISAC is forming, people who are new to information sharing may be a little more nervous about sharing the information.
But this trust leap is like taking that localized trust model to the distributed trust model or the face-to-face interactions or the phone calls you might have when you know someone.
You pick up the phone, you talk to them about some incident that's happening.
Some people refer to the steak 'n' ale ISAC as like, you know, you get together at the bar and then you actually start talking.
And this is because you're in person, face to face.
So how do we move that to an online community to get the same value out of it?
And part of that is talking about what's in it for you-- convincing people that the value of it is worth the risk.
Part of this is also in building familiarity.
So there is a theory called the California roll principle.
When sushi was first introduced in California, it did not take off.
So instead, they created something that was familiar, done differently, the California roll.
They took rice-- these were ingredients that everyone already knew and liked-- avocado, crab-- then they put the seaweed on the inside so you don't see the seaweed, and then people really liked this and then they started to eat sushi and sushi took off from there.
So the second piece of this is trusting the platform and this can be an organization, a listserv.
It's basically the entity or the institution supporting the information exchange.
So that organization must demonstrate honesty, accountability, and transparency.
Here we have Facebook-- everyone is familiar with the Cambridge Analytica scandal, and a few years back, they were also using consumer data without consent to perform research.
So they have really suffered by not demonstrating those things.
You have to facilitate trust between users, which is the distributed trust model.
This is what I was talking about with Airbnb: they have photos of people, photos of the homes, people talk about themselves, and I think there are three different forms of identifying someone and making sure that they are who they say they are.
And also designing for trust.
Oh, and I also put Uber up here because Uber had the problem of God View mode.
Everyone in the company had access to God View until pretty recently, and I think it was last year that a senior vice president used God View to track a journalist who was writing critical things about Uber.
So they also faced a serious problem of trust.
OK, another aspect of trust in the platform is how you design for trust.
So also another term: skeuomorphism.
I'm not sure if I'm saying that right.
I don't know if anyone's heard of that term before.
But skeuomorphism is about using design to get your brain to understand things in a new way through association.
So like, with the iPhone coming out, they wanted you to use the iPhone for all these things that you used to use other physical objects for.
So the camera icon is like an old camera, calendar, clock.
All these things are modeled after the physical objects, and it's a way of, like, tricking your mind-- or maybe not tricking, but creating that association: this is how we want you to use this product.
So I just put this to represent, like, steak 'n' ale ISAC is like a potential icon.
I mean there's probably lots of better ones you could use but just kind of for fun.
And then finally, trusting the other user.
And reputation is really huge in this aspect.
So ratings and reviews are critical, because people tend to behave better when they know they're being observed.
And it's also kind of an expectation these days that you can rate things and that's part of this lack of trust in centralized institutions.
You trust more what other people say and tell you about something than you do what any business is telling you.
The downside, of course, is that sometimes businesses pay for fake reviews and things like that-- this is going back to trusting the platform.
And then also, you need a consistent identity.
I know a lot of the ISACs initially want to introduce anonymous sharing in their trusted circles, and a lot of people demand or really want to have anonymous sharing.
You still need to have an identity that can be tracked so that people can learn to trust your information.
So you don't have to identify yourself by name or your organization, but having a consistent identity is really important to building trust.
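The idea of an identity that is anonymous but consistent could be sketched like this-- purely illustrative, not any ISAC's or platform's actual mechanism-- where the platform derives a stable handle from a member's real identity with a keyed hash, so the same member always posts under the same handle without revealing who they are:

```python
import hmac
import hashlib

def pseudonym(member_id: str, platform_secret: bytes) -> str:
    """Derive a stable pseudonymous handle from a member's real identity.

    Only the platform holds the secret, so outsiders can't recompute or
    reverse the mapping, but every submission from the same member maps
    to the same handle, letting reputation accrue to it over time.
    """
    digest = hmac.new(platform_secret, member_id.encode(), hashlib.sha256)
    return "member-" + digest.hexdigest()[:12]

# Hypothetical secret and member names, just for demonstration.
secret = b"platform-only-key"
assert pseudonym("acme-corp", secret) == pseudonym("acme-corp", secret)  # consistent
assert pseudonym("acme-corp", secret) != pseudonym("globex", secret)     # distinct
```

The point is the property, not the mechanism: members stay unnamed, but their track record is trackable, which is what lets others learn to trust their information.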
For example, Anonymous.
We don't know who is behind them but you know what they do.
You know what they're about.
And then leveraging your trust influencers.
So whenever there's something new, you're getting people to take a trust leap or use a new technology.
You've got the people who start first and then the majority comes later.
So you really want to look for those people that are into it and see how you can leverage them.
And just some more ideas about trust.
There's bridging versus bonding trust.
This is a concept coined by Robert Putnam in a book, Bowling Alone.
And so bonding social capital is when groups organize around something they have in common.
So there's some bonding social capital in the security community.
There would be bonding social capital in an ISAC around a vertical.
But ISACs also have a lot of bridging networks because you're bridging between organizations that don't know each other and don't have that trust already established.
And then the 3.5 degrees of separation.
That's the statistic that everyone on Facebook is supposedly only 3.5 degrees away from any other person, despite it being this vast community of people.
So they use that sense of being in a group and knowing each other to get people to use the product and share and you know.
So as I was mentioning before, you have the early adopters and the late adopters.
So you need to work with those early adopters, getting them to share intel back and forth, create that vibrant community, and that will help build trust.
And you guys have probably heard of the tipping point.
Malcolm Gladwell talks about this.
He has a book on it.
And then this graph here shows you kind of the product adoption curve from the innovators, the early adopters, and then the early majority and onto the laggards.
So right now, I would say that we have not reached a tipping point in information sharing as long as it's only passively being consumed and one-directional.
Once it gets to be bi-directional and everyone's doing it, then I would say we've reached a tipping point.
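The adoption curve in that graph is often modeled as a logistic S-curve. As a minimal sketch-- with illustrative parameters, not values fitted to any real adoption data-- you can see how uptake stays low among innovators, then accelerates sharply once the tipping point is crossed:

```python
import math

def cumulative_adoption(t: float, midpoint: float = 0.0, steepness: float = 1.0) -> float:
    """Logistic S-curve often used to model cumulative product adoption:
    slow uptake among innovators, a steep middle once the tipping point
    (the midpoint) is crossed, then saturation among the laggards.
    """
    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

# Before the tipping point, adoption is marginal; well past it, near-total.
print(round(cumulative_adoption(-4), 3))  # → 0.018 (innovators/early adopters)
print(round(cumulative_adoption(0), 3))   # → 0.5   (the tipping point)
print(round(cumulative_adoption(4), 3))   # → 0.982 (late majority reached)
```

The steep middle of the curve is why working with early adopters matters: the majority only joins once sharing is already visibly active.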
So how do we facilitate bi-directional information sharing?
You need to get buy-in: trusting the idea.
Build familiarity with the idea.
Demonstrate your organization's trustworthiness.
Design for trust.
Create rating systems.
Leverage trust influencers.
Oh, that's the last one.
And then I've been following this company, Civil Media.
It's a new-- I don't know, has anyone heard of Civil Media?
It's been pretty interesting to me because they've been through the process of developing and launching while I've been doing this research, and they're trying to address the lack of trust in journalism media by creating a decentralized marketplace for journalism.
So they have a decentralized platform.
It's built on Ethereum.
You buy tokens.
So you have cooperative ownership.
And they just launched their token sale, like, yesterday, so anyone can buy tokens to participate and vote for or against, and then they have a constitution that sort of dictates how all of this is going to happen.
It's very interesting.
I don't know if it's going to work or not but it's been really interesting to follow.
And then Token Foundry is also involved, because this is where you actually buy the tokens.
And I thought this quote from them was kind of in line with everything that I'm talking about, saying, "Our mission is to empower everyone around the globe to create and participate in these new and open decentralized networks.
And we strive to bring trust, transparency, and equal access to the token distribution process." So as long as people are involved, trust must be accounted for.
Sharing intelligence with others online is risky and the way we trust is changing, but trust is key to information sharing.
And bi-directional information sharing can only be done in a distributed trust model.
So I had some reflection questions I'd like to just ask and see if anyone feels like answering.
So what gets you to trust a new idea?
Or if anyone wants to answer another question on here, like whether you're an early adopter or a late adopter.
I'm always curious in a platform that someone didn't trust initially but now you use all the time.
When I first saw it, it was heavily integrated with ArcSight, and we weren't comfortable using something that depends completely on something else.
Back then, I believe they called it [INAUDIBLE].
And now, after the rebranding, [INAUDIBLE] So you didn't trust Anomali at first because it was heavily integrated with ArcSight?
It was on ArcSight, yeah.
So [INAUDIBLE] So once they added more products and things, then you were cool with it.
For me, I would say, I think probably like Uber and Lyft to me.
It took me a while to start taking rides with other people.
I think, inherently, any time you get in a taxi-- or a car, obviously-- it's dangerous anyway, so there's a lot of risk there.
[INAUDIBLE] I think it is, like you were saying, [INAUDIBLE].
I think it depends on how you were raised and where you were brought up.
I lived in a lot of big cities and [INAUDIBLE] Yeah.
I guess I didn't really take cabs or taxis much of my life anyway.
So now that I take them more, I use Uber and Lyft a lot.
I think-- what about Airbnb?
Was anyone-- I know typically, I hear that that's one that people took a while to adopt.
Or prior to Airbnb, there was VRBO.
Airbnb wasn't the first to do this; they were just the first to really create this platform to trust other people.
Because I used VRBO before Airbnb, and there was no "here's how we make sure these people are who they say they are."
It was just, here's a phone number and email.
So it was really up to you to take that leap and the platform wasn't really facilitating trust in the way that Airbnb does.
So how many of you have called up a friend, colleague from another organization to talk about an incident or vulnerability?
So it's common.
You know, we need that information, especially when we know it's going to be timely and relevant.
And as long as we can create a lot of trust in this community, then you can get a lot more information faster.
There's some of my references if anyone's interested.
And then my contact.
I didn't think I would take 50 minutes, so either you can go early, or if anyone has questions, I'm happy to talk more about it.
You mentioned reviews.
Did you study at all like the impact of one negative review versus like 100 positive?
How does the human brain [INAUDIBLE] Mhm.
[INAUDIBLE] 100 five-stars but you're more likely to read that one one-star [INAUDIBLE] Yeah.
I think yeah.
I mean, that's true.
I didn't read anything about that in the research that I can think of, but I think it's possibly different with ISACs, because the point is that you're working with the same people, and you would be, say, rating their information as more helpful or useful to you than others'.
And I don't know.
I guess you could prevent negative ratings?
You could just rate how positive you want to go-- just not build in the negative part of it.
Yeah, that's all I can think of.
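One common way to dampen the outsized pull of a single bad review-- a sketch of a standard technique, not something Botsman or any ISAC prescribes, with illustrative prior values-- is a Bayesian average that shrinks an item's rating toward a baseline until enough votes accumulate:

```python
def bayesian_average(ratings, prior_mean=4.0, prior_weight=10):
    """Shrink an item's average rating toward a prior until enough votes
    accumulate; a single outlier then moves the score only slightly.

    prior_mean and prior_weight are illustrative knobs: the prior acts
    like prior_weight phantom votes at the baseline rating.
    """
    n = len(ratings)
    if n == 0:
        return prior_mean
    return (prior_weight * prior_mean + sum(ratings)) / (prior_weight + n)

# 100 five-star reviews plus one one-star: the outlier barely dents the score.
print(round(bayesian_average([5] * 100 + [1]), 2))  # → 4.87
# With only two votes, the same one-star drags the score much further down.
print(round(bayesian_average([5, 1]), 2))           # → 3.83
```

That asymmetry is the point: one one-star review hurts an established contributor far less than a new one, which roughly matches how trust accrues in practice.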
The downside, too, I am part of that [INAUDIBLE] The downside of that is, once in a while, you get something that isn't so great.
But for me, that doesn't negate the value of the information that comes through.
I feel very comfortable with the people that [INAUDIBLE] A huge part of my day is reading that information and sharing that information.
So I heard that some of the other ISACs said participation is not quite as robust as it is on the [INAUDIBLE] ISAC.
Whereas, you know, it's still threat and vulnerability information.
Yeah, it tends to be more of an issue with new ISACs or ISACs that don't have a lot of membership yet.
That they haven't reached that tipping point of enough people sharing and then seeing the value in it.
But it also gets to why you need to be able to identify people who are sharing so that you can know, you know, this person, I trust their information.
Or this, you know, moniker or whatever.
I mean, even-- I didn't mention this earlier, but even drug dealers on the dark web rely on ratings and reviews.
So it's just what happens.
I mean, there is definitely-- there's a lot of dark sides to this and Rachel Botsman's book goes into a lot of the dark sides of technology and how it's being used.
For example, are you guys aware of China's person rating system that's going to be mandatory in two years?
Right now, it's optional, but basically it's like credit scores but for your entire life.
And it will separate you and separate what you have access to, even like whether you can fly or whether you can get a mortgage.
And who you associate with also will affect your score.
So it's really-- that's really taking ratings and reviews to a scary extreme.
I think for me, bringing it back to the platform itself, we generally don't share outside of our organization on the platform, but we do have a very robust sharing process guide.
And we technically can just have that apply not only to the ISACs and the different trust groups we belong to but to Anomali as well.
But that for us is a maturity issue and you have to be comfortable with the platform and comfortable with how things are moving back and forth before you can be completely comfortable with sharing on the platform.
With that said, I have to be very careful because, from a risk-based approach, sharing information outside on the platform, even if it is contained to just threat information, opens up a whole different can of worms for us as a large organization.
Yeah, that's one of the huge risks.
But having guides around information sharing-- that's another thing that new ISACs have asked me for.
So I think that will help.
I'm sure it helps greatly.
Cybersecurity Sharing Act of 2013 is fantastic when it comes to building a procedure within an enterprise.
I based a lot of it off of that.
And for me, I'm a part of a quite large company, and it was funny.
When I first wrote it, I wrote about all the things I was going to do in sharing.
And they didn't like that.
They told me [INAUDIBLE] Mhm.
So I flipped it, and I said all of the things I wouldn't do, and they loved it.
I mean, it was easier in the end.
I won't share sensitive information.
I won't do anything that violates antitrust.
And it will be specific to the threat information in these categories and these categories only.
They loved it.
And I mean, some ISACs come up with those sorts of agreements, too, where you need to sign something saying you're not going to share this type of information.
You're not going to do this with this information.
And that's all part of the trust building as well.
But a good basis of what you can and cannot do within your enterprise makes it way easier when you're on platform trying to determine what [INAUDIBLE] Mhm.
So a question for you, then.
Why does it increase your risk to share [INAUDIBLE] Because when we go through a risk assessment on a product, we have to divulge everything that we're sharing.
And because, on platform, there is more detail than, say, just an email of cyber intel [INAUDIBLE] So for us, they are [INAUDIBLE] people are insane, so just to get, you know, a straight-up [INAUDIBLE] somebody giving us information when they don't have any of our information requires a six-week risk evaluation.
So to do something where we're sharing back and to the community, even if we are abiding by those rules, there's just a lot more work for me to proof up that it isn't anything sensitive.
So it's weird how we do it.
They don't trust you at all.
[INAUDIBLE] Are they more worried that you're going to make mistakes and give out personal information, or are they more worried that you're going to [INAUDIBLE] Oh, they have this-- [INAUDIBLE] No.
They don't care about us sharing threat information.
Anything that could be construed as, you know, our enterprise-specific, we have to prove that we [INAUDIBLE] And they're very adamant about those specifics.
That's why when, you know, for cyber intel, [INAUDIBLE] ISAC was easy because we had a template.
You know, boom, boom, boom, boom, boom, you put that information in.
Whereas, with Anomali, even though we're sharing within trusted circles, it does open up a little more gray area because they [INAUDIBLE] So I don't know.
[INAUDIBLE] complaints people have to deal with to experiment with this.
Somebody might be able to take what we said [INAUDIBLE] and figure out where we came from.
All right, well thank you all for coming.
I hope you can think more about trust and how you can apply it to lots of parts of your business and your organization and everything that you're doing.