The Next Phase Of Identity And Security
Hello, everybody, and welcome. Welcome to today's webinar, "The Next Phase of Identity and Security." You know, we've been talking for years about where identity fits in our broader security strategies. If you look back at some of the original forms of hacking and breaches, they targeted individuals.
They did this by calling a help desk and impersonating a user to get that help desk agent to change a password, thus giving the adversary access. There are infamous stories from the origins of some of Kevin Mitnick's exploits and attacks: dumpster diving, and tailgating into buildings as a way of getting access. And as we moved to more automation, broader use of systems, and client-server networking, the adversary shifted their focus to the networks and the devices to try to gain access that way, which is how we came up with the whole notion of the perimeter in the first place.
We all remember the images of a castle with a moat, making it hard for the adversary to get in and trying to stop them at the point of penetration. But as we moved to more remote users, third-party users, and customers accessing more and more important and sensitive data and resources, we shifted our mindset and thinking, and the first iteration of this was identity as the new perimeter.
But that perimeter-based hard-shell, soft-center model didn't work, so we tried to fortify it by really validating that we knew who was on the other end of that access request. And the initiation of identity management was all about that. It was really an efficiency play, focused on creating that bridge to get users the access they needed.
And how do we ensure that only the right users have the right types of access? Well, clearly the adversary has shifted their attack focus again, and they're going after the identities. Over 80% of breaches leverage stolen credentials, misuse of credentials, adversary-in-the-middle attacks against session cookies, and so on. The user is under attack more than ever before, which is why for years we've been talking about how to better fortify identity and bring it into our security story.
And that's really what we're here to talk to you about today. My name's Kurt Johnson. I'm the chief strategy officer over at Beyond Identity. We formed the company about four years ago with the mindset of creating a security company to solve identity problems, recognizing first and foremost that passwords are terrible — we can't figure out why we're still using them as the primary control for users to gain access — and figuring out a way to do this far more securely, using the hardware security that's in phones and laptops today to create a more secure authentication capability on that device.
And if we had a footprint on that device, we also had the innate ability to answer key security questions about that device when you need it most, at the point of authentication: are the controls we expect to be in place actually in place when a user is trying to gain access to sensitive data, resources, and workloads?
And so, really, that's what Beyond Identity created: the ability to answer not just who is gaining access, but what is gaining access. As I said, my name is Kurt Johnson, chief strategy officer here at Beyond Identity. I've been here pretty much since the beginning. I have a long background in identity management, initially with a company called Courion, which was one of the original companies in the identity governance and administration space before we even called it that, and a stint in cloud email security before I was lured to join the team here at Beyond Identity and help launch the company.
And I'm thrilled to be joined today by a longtime friend in this industry, Sam Curry, who's the vice president and CISO over at Zscaler, a close partner of ours. So, Sam, welcome. Thank you for joining us, and please introduce yourself to the lovely folks out there.
Yeah, and thanks for having me. You know, Kurt, I've always had fun talking to you. And of course, I remember when you were at Courion; I was still at RSA in those days, and we partnered on a lot of things. And I actually had a CISO friend, who's still a friend of mine today — you mentioned passwords — who used to challenge me. We used to have a meeting once a month.
He said, "Sam, I will always challenge you on two things in the strategy meeting." The first was that we get to a password-free world. And I think we're at the point now where we can start to answer that, but it took a long time to get there. And the second, of course, had to do with just how hard it is to get some basic things right.
He said, "I want to know when someone leaves or I let them go, I want to know everything they could access, and I want to know everything that he accessed or she accessed." And those were big challenges back in the day. It requires quite a bit of capability on the backend. So, about myself, I've been in cyber for about, my goodness, over 30 years now. You've already said where I am and what my role is.
But I have a personal mission, which is that I'm interested in making the topography of our infrastructure favor the defenders and actually starting to reverse what I call the "hacker advantage" of many years. I've watched the attackers get better at their craft — they're faster at it than the defenders — so I want to make an unfair landscape that favors the defenders.
And I think zero trust is one of those things that's a game-changer. We're going to talk about that a little bit, but it does require an ecosystem of vendors, and I want to make sure people realize that. It also requires practitioners and processes, and many other things. And so, how we work together is a big deal. The fact that we work at two different companies is important, I think, but there are others in the picture as well.
And maybe we'll reference some of them as we go — not necessarily by name, although we might — but there are other components here. I think of what we do at Zscaler this way: where you might say authentication on your part, we say zero trust, perhaps. But more granularly than that, we do authorization — zero-trust authorization — and how that's enforced on the fly, in real time, in a fine-grained, on-demand way for each request uniquely.
And hopefully, that makes a bit more sense at the end of this. But, you know, what my company does and what your company does is not as important, I think, as some of the things we're going to discuss. So, that's a long-winded way of saying it. But you already got me thinking with your intro, Kurt, which is not surprising given how long we've known each other. And I promise the audience, we'll keep this fresh. It's not just two old men having a chat.
And I will always say from experience, it's not too difficult to get your mind going, which is why you've been so successful in this industry.
That's very kind of you, very kind of you.
So, moving into it, as I mentioned in the preamble, you can't argue with the fact that identities are under attack. Over 80% of the incidents, according to the Verizon data breach report, came from credential compromise. CrowdStrike validated this in their recent threat report, which showed, I think, 82% of breaches coming from credential compromise.
And, you know, this has really been coming in two primary ways. First, credential theft, but then, more and more, these sign-in fooling exploits — adversary-in-the-middle, man-in-the-middle, man-in-the-browser types of attacks. And they're happening at scale. Ripped from the headlines: this big phish can swim around MFA, says Microsoft Security.
That was from a year ago, reporting that in just the previous nine months they had identified MFA bypass attacks against over 10,000 organizations. CrowdStrike detailed at the RSA Conference this year a new MFA bypass built around credential theft attacks. So, it's not just stealing the passwords; it's actually being able to bypass a lot of the multi-factor authentication we put in place.
You know, as I mentioned previously, it's a shock to all of us that we still rely on passwords, but they're so embedded in our history, so ubiquitous in the log-on experience, that regulations and even cyber insurance policies required MFA. And that is why we're seeing the adversary shift their attacks to MFA, a lot of it through simply spraying an end user, you know?
You get these push notifications, and the adversary realizes that if there's an end user on the other end, you send them a push notification, they ignore it — what do you do? You send them another one. And another one. At some point, one of those users is going to click, or they're going to prove easily susceptible to these phishing attacks.
And we've seen this in attacks on Uber, Slack, Twilio — numerous ones, all taking advantage of bypassing traditional and legacy forms of MFA, which unfortunately make up about 80% to 90% of the MFA in place today. So much so that even the federal government has issued a zero-trust strategy requiring every federal agency to eliminate forms of MFA that are vulnerable to phishing attacks by the end of next year.
And they're not doing this out of overcaution; they're doing it because they're seeing these attacks rising in volume. I was at CrowdStrike's Falcon conference last week in Las Vegas, and it was timely, with the recent MGM attack in the news literally bringing down the casino. I don't know if you're familiar with it.
Things like slot machines bring a lot of money to folks like MGM. Shutting those down, shutting down the reservation systems — one of my colleagues was staying at an MGM resort and is still waiting for his bill to arrive — cost them over $8 million a day. And again, it came from a social engineering attack that had a help desk agent change MFA, which let the adversary in to do all this destruction.
So, there's no argument we need to be changing our mindset. And Sam brought up zero trust in his introduction, and I know a lot of people are rolling their eyes already, saying, "Oh, here we go. Another buzzword, another vendor who's going to call themselves a zero-trust product." Well, we all know there's no such thing as a zero-trust product.
It's an architecture, a strategy, an ongoing, never-fully-finished evolution that organizations are implementing to really crack down on this problem. So, with that, Sam, let me throw it to you. What does zero trust mean to you, in the customer conversations you're having day in and day out?
Sure. So, I think it's worth pointing out that John Kindervag came up with a definition, and I'll paraphrase it, back in 2010 actually. He talked about how no elements should, by default, trust each other. The simplest business definition though, that I use is that it is provisioning only what's needed by the business for as long as it's needed.
And the word only in there is the key one. And it's really hard to do that by the way, at scale. So, there is a principle of least privilege, there is a principle of least function too, incidentally, when you architect things. And we forget that one a lot. We put too many functions in the stack, for instance. But, there's a principle of least trust that's important. And you can follow that.
But, if you re-architect such that you're no longer trying to solve the problem of how do you connect to the network, right, and stop thinking about it as a, you know, layer one through seven connectivity, and instead you start thinking, "How do I connect the user to the app and only that for as long as it's needed?"
Then the world can look very different, and it can actually lead to some interesting transformations. That's how I see zero trust. Now, the reason that's important is your example earlier. If you do that, then the blast radius of any given compromised identity or device gets limited. In other words, let's say an identity is stolen. Well, that identity only has so much reach or access.
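The blast-radius point can be sketched in code. A minimal illustration, with hypothetical user and app names, of deny-by-default, per-request authorization in place of network-level connectivity:

```python
# Hypothetical sketch: per-request, least-privilege authorization.
# Instead of granting network-level access, each (user, app) pair is
# checked on every request, so a stolen identity's blast radius is
# limited to the apps it was explicitly provisioned for.

# Explicit provisioning: only what the business needs, nothing implied.
ENTITLEMENTS = {
    "alice": {"payroll", "wiki"},
    "bob": {"wiki"},
}

def authorize(user: str, app: str) -> bool:
    """Deny by default; allow only an explicit user-to-app entitlement."""
    return app in ENTITLEMENTS.get(user, set())

# A compromised "bob" credential cannot reach payroll at all.
assert authorize("alice", "payroll") is True
assert authorize("bob", "payroll") is False
assert authorize("mallory", "wiki") is False  # unknown users get nothing
```

The design choice here is that absence of an entitlement means denial; nothing is reachable merely because it is on the same network.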
And by the way, you got me thinking with your intro. We often say insider, and we often talk about compromised identities. I think there are five cases that people should keep in mind. If I can be seen on screen, I'm going to hold my hand up and count them, right? So, the first is someone steals credentials, right?
In other words, they are an outsider using an insider's privileges. That's one. Two is someone's blackmailed — let's say, you know, "your spouse gets it if you don't do this." That's an unwilling insider. The third is you get bribed. That's a compromised insider, which is different.
Because the difference between those two is that motivation matters, right? How you might monitor for it and the penalties you might apply are different. The fourth one is someone inside choosing to do something, maybe out of vengeance or greed. That's a motivated insider. And the final one is the accidental insider — someone who's trying to do their job and it leads to a leak.
And the reason I distinguish these is because you might have different controls for each of them, but in each case, a zero-trust strategy is going to help you. Now, how you monitor for it, and when you start to look for breakout behavior, that could be different. And the other thing I'm going to comment because I just have to, when you mentioned authentication, and I know we're going to talk more about that, I think of authentication strength as how difficult or what is the cost to break for an opponent?
And that is a moving target. What I mean is that some form factors have more longevity; some, however, break under the development pressure of the attackers. We know that some are inherently weaker and are going to crack when the attackers really build a mechanism to exploit them. I'm bringing this up because I know it'll be an interesting point for people watching or listening today.
What we want to do is find the combination of factors that makes it difficult for the attacker to break any form factor in that first case. And then we want to deny options to the opponent, right? So, what we want to do is say, "All right, it's hard to compromise a machine, it's hard to compromise a person," and then we're going to deny you the ability to move laterally. We're going to deny you the ability to find the IP addresses or the listeners for things.
And that should make it hard for you to find the crown jewels, if you will, and get them out. I hope that wasn't too long-winded, but that's what comes to mind when you ask the question, and based on what you said. Yeah.
I think you're spot on.
Does that resonate with you, Kurt? Anything I left off there that you've encountered?
No, I think you covered it well. And I think it also leads us to our next topic, which is this notion of zero trust: what does it mean for zero trust authentication?
I have given that some thought. Yeah.
Yeah. And at Beyond Identity, we've really been branding ourselves around this notion of zero trust authentication. And again, it's not to take advantage of a buzzword. It really is about how we take the principles Sam just spoke about — including the assumption that you're compromised to begin with — and apply them. We used to apply to authentication, in the words of Ronald Reagan, "Trust, but verify."
But the reality today is almost never trust and always verify. And I think those principles are critical. How they've initially been applied to authentication sequences is: what can we do to give ourselves more confidence? More confidence and assurance gives us greater trust.
But the reality is that these are probabilistic controls. It all centers on: can I gather enough signals, enough indicators, to give myself a higher level of assurance, a higher probability, that it really is Sam on the other end of that interaction, or really is Kurt on the other end of the interaction?
So, there are a couple of aspects of authentication that I think are flawed and need to evolve, obviously in light of all the attacks we've been seeing: can we change these from probabilistic controls to truly deterministic controls, and can we truly eliminate these types of credential attacks?
As I said, a lot of what we've focused on with authentication historically has been about who. Who is gaining access? And even with that higher level of assurance, we've seen the adversaries do all sorts of things to get around these controls, in the two ways I mentioned before.
They either steal the credential — because a shared secret moves, and when it moves, it gets into memory, it gets into files, and it becomes accessible through many of the tactics Sam just brought up, including bribery and blackmail. Or they use sign-in fooling attacks: if I can't steal the credential, let me at least fool the user into an authentication sequence, sit in the middle, grab that session cookie, and use it to gain access.
Now, this is why we put all these controls in place, right? We try to train our end users not to click, to be better and more observant. But the state of authentication, our defense in depth, still leaves a non-zero chance of these attacks succeeding.
We still have a non-zero chance of phishing getting through. And yes, the important controls we put in place — inbound email filtering, proxies (including critical tools like those available from Zscaler), mail server configurations, SPF, DKIM, DMARC — are all focused on one question: can we stop more of this from reaching the inbox? We can prevent more of it, but not all of it.
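As one concrete piece of that mail-filtering stack, here is a minimal sketch of parsing a domain's published DMARC policy. The record text is hardcoded for illustration; in practice it would come from a DNS TXT lookup of `_dmarc.<domain>`, and the domain shown is hypothetical:

```python
# Hypothetical sketch: parse a DMARC TXT record into tag/value pairs to
# see what policy a domain enforces for mail failing SPF/DKIM alignment.

def parse_dmarc(record: str) -> dict:
    """Split 'v=DMARC1; p=reject; ...' into a {tag: value} dict."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

record = "v=DMARC1; p=reject; rua=mailto:dmarc@example.com; pct=100"
policy = parse_dmarc(record)
assert policy["v"] == "DMARC1"
assert policy["p"] == "reject"  # spoofed mail failing checks is rejected
```

The `p` tag is the part that matters for the discussion above: `none`, `quarantine`, or `reject` determines how much spoofed mail still reaches the inbox.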
Some things are going to get through. And if we rely on a human on the other side, guess what's going to happen? Some of them are going to click. This is why, as I mentioned, we've been fortifying the authentication sequence through passwords plus traditional MFA. But the weakness of passwords, TOTP, even Microsoft number matching and push notifications is that they all require a human to execute the protocol on the other end.
On the client side, it's the human; on the server side, it's the machine. But humans have error rates. They're going to click, they're going to respond to that push notification, they're going to click on a phishing link that exposes them and enables the adversary to grab that credential — which means the adversary is going to succeed in getting in, succeed with account takeovers, and even ransomware.
So it really comes down to how authentication starts today: what makes MFA phishable is this human-in-the-middle problem. The browser verifies the server certificate, but we leave it up to the human to verify the domain. Is that the domain I think I'm logging into? We try to train them to hover over it, to look carefully, but we know it's easier and easier for adversaries to trick our users with character changes, slight modifications to the domain, that eventually get some users clicking.
And when they click, we again rely on that human to actually execute the authentication protocol: put in their username and password, respond to an OTP, enter the number match they see. So, when we talk about the principles of zero trust authentication — and this is exactly what Beyond Identity was formed around, the strategy and vision we had from the beginning — it starts with: what if we can ensure that the credential never moves?
We know that hardware today comes with these enclaves — tiny processors, separate from the CPU. We can create a credential and place the private key in that enclave, which ensures it never moves. It's never in memory, and if it's never in memory, it can't be dumped to a file.
It can never leave the system. It never goes over the internet. And if it doesn't move, you can't steal it. But how can you guarantee that the authentication can't be hijacked either? By responding to these adversary-in-the-middle attacks with a phishing-resistant form of authentication — this notion of zero trust authentication — we're looking to move from a probabilistic control to a truly deterministic control.
And the challenge can actually be part of the protocol. The challenge can include who the challenger is. So, we use this notion of a platform authenticator. The platform authenticator resides on the device; it enables the device itself to become the authenticator. As I said, it has the private key stored in the enclave with instructions that it cannot move, so we also have a guarantee that the key can never leave that device.
So, what happens in these cases is that the platform authenticator verifies the certificate and verifies the challenge. It asks the human for proof, either through a PIN or a biometric on that device, but it will only allow authentication if the request meets the instructions associated with that key — so we know it's coming from a device that's cryptographically bound to that user.
So, we have true assurance not just on who, but on what is gaining access. The key that signs the challenge can include associated data and instructions on who the valid challengers are. It can be a bidirectional signature verification, moving forward only if the challenge itself is signed by the appropriate party that originally enrolled that key on that device.
This allows us to mechanize the elimination of the adversary-in-the-middle attack: if the instruction isn't coming from that device and that user, it's not going to be accepted. This is what NIST calls verifier impersonation protection. And it's the overarching principle behind our belief that you can have a truly deterministic control for zero trust authentication.
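The origin-binding idea described above can be sketched as follows. This is a simplified stand-in, not the actual protocol: a real platform authenticator holds an asymmetric private key inside a hardware enclave, whereas here a symmetric HMAC key (and all names) are hypothetical. The point illustrated is that the authenticator refuses to sign a challenge whose origin doesn't match the origin recorded at enrollment, so a look-alike phishing site never obtains a valid signature:

```python
import hashlib
import hmac
import os

class PlatformAuthenticator:
    """Toy model of an origin-bound authenticator. The key never leaves
    this object, standing in for a key sealed in a hardware enclave."""

    def __init__(self, enrolled_origin: str):
        self.enrolled_origin = enrolled_origin
        self._key = os.urandom(32)  # stand-in for the enclave-held key

    def sign_challenge(self, origin: str, challenge: bytes):
        # Refuse any challenger that isn't the enrolled origin:
        # this is the verifier impersonation protection step.
        if origin != self.enrolled_origin:
            return None
        return hmac.new(self._key, origin.encode() + challenge,
                        hashlib.sha256).digest()

auth = PlatformAuthenticator("https://app.example.com")
challenge = os.urandom(16)

# Legitimate origin gets a signature; a look-alike domain gets nothing,
# so an adversary in the middle has nothing to replay.
assert auth.sign_challenge("https://app.example.com", challenge) is not None
assert auth.sign_challenge("https://app.examp1e.com", challenge) is None
```

Because the check is performed by the machine rather than the human, the "did I read the domain correctly?" decision is removed from the user entirely.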
And it comes down to a number of simple principles. First and foremost, it's passwordless. You cannot do this with passwords; let's face it, usernames and passwords are no way to verify any identity at any time. In addition, it's truly phishing-resistant, leveraging the credentials I just mentioned. If you can guarantee the credential can't move, and you can put credentials in context of how they're being used — binding identity and device together — you also gain the ability to allow authentication only from devices you've authorized through policy within your organization.
Only authorized devices associated with authorized users are allowed to gain access. But the other strength of being rooted on the device is that we're actually capable of assessing device security posture. We can pick up a number of different controls, such as: is it a corporate device or a personal device?
Has it been jailbroken? Is it patched and up to date? Is disk encryption on? Is the firewall enabled? We pull in all of these controls so you can verify that the controls you expect to be in place actually are in place when you need it most, which is at the point of authentication. And it's not just what we can gather ourselves.
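A posture check like the one just described can be sketched as a simple policy evaluated at authentication time. The signal names and the policy itself are illustrative, not any product's actual schema:

```python
# Hypothetical sketch: evaluating device posture signals at the point
# of authentication. Every expected control must actually be in place.

REQUIRED_POSTURE = {
    "disk_encryption": True,
    "firewall_enabled": True,
    "jailbroken": False,
    "corporate_device": True,
}

def posture_ok(signals: dict) -> bool:
    """Allow only if every control we expect matches the policy."""
    return all(signals.get(k) == v for k, v in REQUIRED_POSTURE.items())

healthy = {"disk_encryption": True, "firewall_enabled": True,
           "jailbroken": False, "corporate_device": True}
risky = dict(healthy, firewall_enabled=False)

assert posture_ok(healthy) is True
assert posture_ok(risky) is False  # block authentication, not just log it
```

Note that a missing signal fails the check too: absence of evidence is treated as non-compliance, consistent with the zero-trust posture described above.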
We know you've made investments in a variety of security tools that look at and assess risk signals: endpoint detection and response, zero-trust network access, mobile device management. EDR is a great example — by name, by definition, it has detection and response in it.
The idea is that once they're in, let's detect it quickly and respond to it. What if we could pull those signals in at the point of authentication, verifying that the controls on that device really are working and there's no compromise there? The other big change in our philosophy and thinking is that authentication has historically been a one-and-done event — like the bouncer at the nightclub who removes the velvet rope once they have enough assurance and lets you into the club.
And the rules around that level of assurance can vary greatly. We need to change from a one-and-done event to a set of continuous controls: continuously monitoring the device, continuously monitoring that the controls we expect to be in place still are, and, when anything changes, taking immediate corrective action to disable that access, terminate that session, and require the user to re-authenticate.
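The shift from one-and-done to continuous can be sketched as re-evaluating a live session every time posture signals are refreshed. All names here are hypothetical:

```python
# Hypothetical sketch: authentication as a continuous control.
# Each posture re-check can revoke a live session immediately.

class Session:
    def __init__(self, user: str):
        self.user = user
        self.active = True

def reevaluate(session: Session, signals: dict) -> None:
    """Kill the session the moment an expected control goes missing."""
    if not signals.get("disk_encryption") or signals.get("jailbroken"):
        session.active = False  # force the user to re-authenticate

s = Session("alice")
reevaluate(s, {"disk_encryption": True, "jailbroken": False})
assert s.active is True

# Device drifts out of policy mid-session: access is cut, not just logged.
reevaluate(s, {"disk_encryption": False, "jailbroken": False})
assert s.active is False
```

In a real deployment the re-evaluation would be driven by fresh signals from the device and from tools like EDR or MDM, but the control flow is the same: any policy violation terminates the session.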
This doesn't just reduce, it actually eliminates those categories of attack — credential theft, adversary-in-the-middle, sign-in fooling — truly changing and modernizing our approach to authentication by taking the principles of zero trust and applying them at the point where we need them most: when the user is gaining access.
Take the human out of that loop; mechanize the elimination of the attack. Mechanize it so we can avoid credential theft. Those are the principles and philosophies for taking zero trust and applying it to our authentication.
So, that brings me... back to you, Sam. I know you've talked a lot about going even beyond zero trust to this notion of negative trust. I'm wondering if you can walk us through that.
Oh, I'm happy to. I'm happy to.
You can guide me along. I've got your slides here. I can browse through the slides.
I might have you move ahead. Just give me one second, though, because I have to say something. As you were speaking, it occurred to me that when we say we have zero trust in a person, like a user, it feels like we're saying something bad about them, and we're not, right? It's that we shouldn't have to place trust in any device, any app, any network — or in any user, because they make errors.
I make errors. Like, I don't want to be trusted by my own networks because I could make a mistake, right, or by my own applications. And I think it's important to realize that when you get to zero trust, you are liberated from having to put trust in things. And that's really important. And I think that's what you were saying as well. It's that by removing the human from that equation, their user experience gets better, and you don't have to place trust in things like that.
And the other thing that ties into negative trust is that context is super important. You mentioned it when you said it's not just a binary decision, and again when you talked about how you can make risk decisions — and I want people to keep that in mind. This is why integration with other vendors is important, and why you shouldn't get boxed in with one vendor or one methodology of measuring risk: the risk assessment and the context behind a risk-based score should be shareable through an API.
I think that's super important, because you, for instance, at Beyond Identity may say, "Hey, we've got this changing risk score to do with context," and we make an authorization decision, right? Do we connect someone? Or how do we connect someone? And then we get to negative trust. So, what is negative trust? If you go to the next slide.
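The shareable-risk-score idea can be sketched as a small, vendor-neutral payload: one product computes a contextual score, another consumes it in an authorization decision. The field names and threshold are illustrative, not any vendor's actual API:

```python
import json

# Hypothetical sketch of a risk signal exchanged between products over
# an API. The producer attaches context; the consumer decides.

signal = {
    "subject": "alice@example.com",
    "device_id": "laptop-42",
    "risk_score": 72,                 # 0 (clean) .. 100 (block)
    "context": ["new_geo", "edr_alert"],
}

def decide(payload: str, threshold: int = 50) -> str:
    """Turn a shared risk score into an allow/deny authorization call."""
    risk = json.loads(payload)["risk_score"]
    return "deny" if risk >= threshold else "allow"

assert decide(json.dumps(signal)) == "deny"
assert decide(json.dumps(dict(signal, risk_score=10))) == "allow"
```

The key property is that the score travels with its context, so the consuming side can make a graded decision (full access, restricted access, deny) rather than a blind binary one.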
This is actually a build, and it's okay if you build it out. There's three circles that should appear on this slide. And folks can see there's some nice, lovely animation in here. So, the slide on the left, I call a positive trust state. Think of the early days of the internet when what we wanted to do was to make more things connect. Think of Metcalfe's Law where we actually say the value of a network increases non-linearly with a linear increase in the size of the network.
We've all seen the diagram where, as you increase the number of nodes in a circle, the number of edges increases faster than the number of nodes. That's the traditional Metcalfe drawing. And if you think about it, we've watched the internet grow, and look at all the things we've done with it over the last 30, 40 years. We have found new ways to get more value out of the network. It's reflected, by the way, in the percentage of GDP in any country, and in the world, that's carried over the internet now, and in how much we depend on it for critical infrastructure, for instance.
So, what we wanted to do... And just look in IT departments. What we wanted to do was just connect more and more. In an IT department, if you get tickets, the first thing you do with a help desk ticket is to say, "Okay, did you reboot the system?" If it's Windows, fine. Then you say, "Can you ping? Can you get a connection between two systems?" And so, the positive trust state was let's just connect more.
And that is a problem. So then, let's say you put a perimeter in place, and then we started talking about zero trust: how do you now allow only the connections that are needed? Hence the white lines you see there. Negative trust, then, is a meaningful application of deception technology. For at least the last 15 years, the world of hacking has been one where the adversary just has to get it right once — and then they get a field day exploring laterally all that connective tissue, finding the IP, putting backdoors and beacons in place — while the defender has to get it right every time. That ties back to the mission I mentioned earlier.
What we want is for them to move with trepidation. We want them to be worried about touching every door handle, because it could send a super-true-positive signal to the incident response team that you exist as a hacker in the network. We want them moving scared through every connection, every file they're presented with. Is that credential real, or is it a lure?
You know, is that system real? Is that file real? What we want is for it to be a scary place for them to be. And when I say a super-true-positive signal — look, there's a lot of noise relative to signal. Look at how many rules we have to stick in the SIEM; most of them are compliance-related now. You've got to catch, you know, six failed login attempts.
Well, you know what? Attackers don't fail login attempts anymore, right? That's a compliance-based thing — they get it right or they don't try. So, we've got a huge amount of noise in the average SIEM or log management system, and what we want to do is pull the signal out. A super-true-positive signal is when they stumble across one of these things that a legitimate user wouldn't, but that an attacker might.
And so, for a lot of those classes of attacker, and levels of risk, that I mentioned earlier, we get into a situation where it's not binary — it's not allow or don't allow. In between, we could partially allow. We could enable viewing in a browser interface but not downloading. We could instead say, "Here's a new pathway for you," and create it just in time, so you don't have to have a whole fake network built.
It's a lot like in a massively multiplayer online game where they just render around the avatar. You can render around the attacker space, delay them, give them false information and false applications while you give time for the defenders to catch up. So, if you go to the next slide, I think there's... Did you put the picture into this one?
So, this is what we want, right? We want lots of goodies, and fear, as they move through the network, because that's going to have a material impact. We want every step to be a chance for the attacker to make a mistake and a chance to signal to the defenders that something's happening. I hope that resonates with you, Kurt, and with the audience, because there's more than just allow or don't allow.

And actually, there's more than just what I authorize you to do based on the context and risk of how you connect. There's also the opportunity to make it a dangerous place for the opponent.
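To make that "more than allow or don't allow" point concrete, here is a minimal sketch in Python. This is my illustration, not any vendor's actual engine; the thresholds and category names are invented. The interesting branch is the decoy: instead of denying, the request is diverted into fake terrain, producing the high-confidence signal described above.

```python
from enum import Enum

class Decision(Enum):
    ALLOW = "allow"                 # full access
    BROWSER_ONLY = "browser_only"   # view in an isolated browser, no download
    DECOY = "decoy"                 # route to a just-in-time decoy environment
    DENY = "deny"

def decide(risk_score: float, touched_lure: bool) -> Decision:
    """Map a risk score (0 = safe, 1 = hostile) to a graded response.

    Touching a lure (a fake credential or decoy host) is a super true
    positive: no legitimate user would, so we divert rather than block,
    buying defenders time while the intruder explores fake terrain.
    """
    if touched_lure:
        return Decision.DECOY
    if risk_score < 0.3:
        return Decision.ALLOW
    if risk_score < 0.7:
        return Decision.BROWSER_ONLY
    return Decision.DENY

print(decide(0.1, False).value)  # low risk: allow
print(decide(0.5, False).value)  # medium risk: view-only
print(decide(0.9, True).value)   # lure touched: divert to decoy
```

The point of the sketch is only that the decision space has more than two outcomes, and that one of them can be deceptive rather than defensive.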
No, I think you're right on. And it really helps in how we, as company security organizations, look at these aspects. I mean, we've always had the notion of risk tiers: if you're accessing a low-risk resource, there's one set of controls; for a higher-risk resource, another. But we should also make it easy to be restrictive on the users that cause problems and are of higher risk, and more permissive on those that aren't.

And taking this aspect the way you've laid it out really gives us the ability to survey what's out there in our organizations and ensure we're applying the right sets of controls.
Yep. Yep. And I hope that resonates with the audience as well.
So, we've touched on this a little bit, but I wanted to get to the next phase of this: how can your organizations leverage the evolution of these existing security investments to achieve this? It's no surprise, certainly in light of the economic climate and budget issues, that organizations are exploring consolidation.

I hear a lot about that, especially in the security field, where, you know, we've experienced some drunken-sailor spending on security tools in the past. You had a problem, you could pretty much make a case to get budget and address it. Well, those days are gone. You know, when you're looking...
When we started, Kurt, it was 1% of IT spend. I think the last number I saw was 15% of IT. I remember when it passed storage, and I got in trouble with that when I was at RSA, because we were part of EMC, and I mentioned it to my counterparts. So, yeah, it can't keep going. It has to plateau somewhere.
Yeah. That it does. But, I think too often people fall under the illusion that consolidation is just a licensing agreement. And, you know, I won't name names, but there are certain companies doing monopolistic things with licenses. The true aspect of it is working with and choosing vendors, like Zscaler, like our mutual friends at CrowdStrike, that recognize the need for sharing risk signals, for sharing this information across platforms so it can be leveraged to solve end-to-end security issues, but also to make it easier on administrators to assemble these pieces and create the right types of policies.
And just to illustrate what I've been talking about: at Beyond Identity, from day one, we've made it our mission to tie into the existing infrastructure our customers already have in place. They've made strong investments in identity infrastructure with identity providers. How can we better secure that?

We can bring signals into the equation, but how can we also leverage those other controls, those other signals, all the things you expect to be in place on these devices, and pull them in through the authentication process? So that's a reminder of our vision on zero trust authentication. We built this model with an ecosystem in mind.
And just to walk you through the process here. As I mentioned before, the first step of this process is placing a private key in the secure enclave, with instructions about how that key can be used, and binding it cryptographically to that end user. By doing so, we create a platform authenticator tied to the specific user associated with that device, so that launching an application and signing in becomes a very simple process: the end user just launches the application, or logs into their IdP single sign-on solution as they usually would, and that immediately delegates to Beyond Identity.
You know, we've supported from day one integration with identity providers like Okta, Ping, ForgeRock, Microsoft, Shibboleth in the university environment, CyberArk, you name it, to ensure that through standard protocols like OIDC and SAML, we can immediately redirect to Beyond Identity to take over that authentication process.
And again, leveraging those principles I mentioned before: without a password, without any phishing-vulnerable form of MFA, how do we verify that this is an authorized user on an authorized device, cryptographically bound to that user, and scan that device's security posture? We want a guarantee that the controls you expect to be in place on that device actually are in place at the moment of authentication. And recognizing that risk signals, security posture, and device trust come from multiple sources, we can pull risk signals from MDM tools such as Insight, Carbon Black, or MobileIron to validate that they are in place, configured, and working appropriately, providing the level of control we expect when that user is authenticating.
And also EDR tools, endpoint detection and response, XDR tools, CrowdStrike, SentinelOne, others to pull those risk signals in. Even in the case with CrowdStrike, CrowdStrike's been really talking about this concept that they have in their tools called a Zero Trust Assessment Score.
This ZTA score is all about taking a variety of these risk signals, prioritizing them, weighting them to actually come up with a score on that device from its risk profile. We can actually place that in the Beyond Identity policy engine to say if it doesn't have a ZTA of 75 or greater, deny access.
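As a rough sketch of a rule like that (the 75 threshold comes from the discussion above; the field names and dictionary structure here are my illustration, not Beyond Identity's actual policy engine API), a posture check combining a ZTA score with local device signals might look like:

```python
def evaluate_policy(signals: dict) -> bool:
    """Return True to allow authentication, False to deny.

    'signals' collects posture data gathered at authentication time,
    e.g. an EDR risk score (CrowdStrike's ZTA score, 0-100) alongside
    local checks like disk encryption and firewall state. Missing
    signals default to the unsafe value, so absence means deny.
    """
    rules = [
        signals.get("zta_score", 0) >= 75,    # deny below the ZTA threshold
        signals.get("disk_encrypted", False),
        signals.get("firewall_enabled", False),
    ]
    return all(rules)

print(evaluate_policy({"zta_score": 80, "disk_encrypted": True,
                       "firewall_enabled": True}))   # True: allow
print(evaluate_policy({"zta_score": 60, "disk_encrypted": True,
                       "firewall_enabled": True}))   # False: deny
```

Defaulting missing signals to the unsafe value is the fail-closed design choice: a device that can't prove its posture is treated as out of policy.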
And again, we pull all these risk signals in at the point of authentication, and then, and only then, provide access to that end user, protecting against credential theft because no credential ever moves. And when credentials don't move, they can't be stolen.

And it's always guarded by policy, by trusted computing: there is hardware proof that what we expect to be happening is happening, and that is validated at authentication. And again, that's not a one-and-done event; we continuously evaluate these policies, because the process is so frictionless to the end user.
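Under the hood, this style of authentication is challenge-response over an asymmetric key pair: the private key never leaves the device, only a signature does. Here is a toy sketch using textbook RSA with deliberately tiny primes so it runs anywhere with the standard library; real systems use hardware-backed keys (e.g. RSA-2048 or Ed25519 in a secure enclave), and key sizes like this must never be used in practice.

```python
import hashlib
import secrets

# Textbook RSA with tiny fixed primes, for illustration only.
# In a real deployment the private exponent lives in the enclave
# and is not extractable; here it is just a variable.
P, Q = 61, 53
N = P * Q                    # public modulus
E = 17                       # public exponent
PHI = (P - 1) * (Q - 1)
D = pow(E, -1, PHI)          # private exponent: stays "in the enclave"

def digest(msg: bytes) -> int:
    # Hash reduced mod N so it is signable with these toy parameters.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % N

def sign(msg: bytes) -> int:
    # Only the signature leaves the device; D never moves.
    return pow(digest(msg), D, N)

def verify(msg: bytes, sig: int) -> bool:
    # The verifier holds only (N, E) and the fresh challenge it issued.
    return pow(sig, E, N) == digest(msg)

challenge = secrets.token_bytes(16)           # server-issued nonce
signature = sign(challenge)                   # computed on the device
print(verify(challenge, signature))           # True: authorized device
print(verify(challenge, (signature + 1) % N)) # False: tampered signature
```

Because each challenge is a fresh nonce, a captured signature is useless for replay, which is the sense in which "no credential ever moves."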
As I mentioned, all the end user does is request access and use the biometric on the device to validate themselves. All of this happens in sub-seconds behind the scenes. And because it is so seamless and frictionless for the end user, we can run these controls on a continuous basis, even unknown to them. And when we detect that something has changed, that one of those devices has fallen out of policy, that there's a risk on that device, we can take immediate action.
Through the integration we have with Zscaler, we can immediately disconnect that session, requiring the user to re-authenticate, which won't be granted unless that device has been brought back into policy, unless that risk to the identity and the device has been eliminated. We can similarly quarantine the device in the EDR solutions as well. But really, it's back to those principles of zero trust authentication: continuously monitor and take immediate corrective action.
The other great benefit of digital certificates is that you have a true immutable audit record of every device, every user, the posture of that device, and the resource being accessed. That provides rich data around risk that can be fed into security operations, our SOC tools, our SIEM tools, to analyze, assess, and identify risk and trends around user access. It also feeds identity governance and administration solutions such as SailPoint and Saviynt, which validate: do users have the access controls I expect?

Do they have the right authorization, the right roles, but also, which device is accessing those resources, and is that device in policy? For example, HIPAA has a requirement that a caregiver accessing patient data can only do so on a device with disk encryption enabled, and the same goes for PII and other privacy-sensitive information.

We can validate that through an immutable audit record showing that the only devices gaining access had this control in place, and even better, demonstrate to the audit community that there's a policy in place that won't allow access otherwise. So, from our standpoint, this is really how the whole ecosystem works, and how each of us plays such a critical role.
And again, our goal, our plan, is not to replace any of that. We want to take advantage of it. But bringing it back to that philosophy: all access, all authentication, should be guarded by policy. And what we are doing is providing hardware proof that what we expect to be happening is happening, and validating that at the point of authentication.
And again, that's not just coming from the signals we gather, but the signals across the community and really being able to do this through this notion of continuous authentication.
Hey, Kurt. I do want to add something here before we move on. For many years, our industry has asked the question, best-of-breed, or suite because you brought this up earlier. And I'm going to say most of the tech industry is best of breed until it's no longer differentiated enough to resist the suite trend.
ERP did this many, many moons ago, right? Productivity suites did this. Eventually, when they only differentiate on the basis of price, you get a consolidation in the market. The small players go away and big enterprise players remain, and then they get gobbled up. It's the old big fish eat small fish thing. But, for some reason, and I understand it now, you know, people come from other industries and they think it's going to happen in cyber and it doesn't.
For some reason, it doesn't happen. And here's the real reason. It's because we have an adaptive opponent. And because we have an adaptive opponent, innovation is required. And we never get to the point where we only differentiate on price. And that's an important thing. If we do solve this cyber thing, by the way, the same thing will happen.
But, this is why it's so important that vendors have the opportunity to innovate and to focus being good at what they're good at because large companies are bad at that. They become conglomerates, they absorb smaller companies, they become suites because at that point, it is a cost equation and differentiation is on the basis of price.
And then you get your enterprise licenses that you mentioned. And so, I mention this now because what I, as a consumer and a user and a practitioner, think everyone listening should be doing in their RFPs is asking: are you a walled garden from a risk perspective? Are you a walled garden from a policy perspective? Because you must be able to vote with your feet, to purchase and subscribe to whatever service or product is best of breed, precisely because of the adaptive opponent.

And it should still plug in with everything else. So, look for the ecosystems and the players that do that, and reject, or put pressure on, the ones that don't. If they've got a unique feature, be aware that getting locked in could limit your future adaptability: you get one feature now, and in the future you lose the ability to vote with your feet.
And so, I mention this now because in cyber, that consolidation into a few large players hasn't happened in 25 years, which is why there are very few mid-sized players and a long tail of small companies. Every other industry has gone best of breed, best of breed, best of breed, then consolidation to suites, then consolidation to a few vendors, then steady state.
That hasn't happened here. Unless it's disrupted, by the way. That does happen occasionally. There's a continuous disruption because of the opponent in our space. I'll stop, I'll get off my soapbox, but I have to mention that because what you just mentioned is so important as a strategic and architectural decision for practitioners when choosing who they work with or don't.
Yeah, absolutely. Couldn't have said it better. So, you know, I don't know if you're aware of this now, Sam, but there's actually, I think it's in the process of a bill becoming a law that you can't actually have a webinar in cyber without mentioning AI. So, talk to us a little bit about, you know, we've been talking a lot about even advancing and moving towards a futuristic state here with zero trust and zero trust authentication. But, from your perspective, and I know you've always had great vision and great guidance in this industry, you know, what do you think the future holds, and what about AI?
Yeah. So, first of all, look, Gartner has their hype cycle. And I actually don't know where AI falls on there, but AI isn't one thing, so I don't think it should be on there at all. It rose meteorically in hype around ChatGPT at the beginning of 2023.

And I think it then went down into what they call the trough of disillusionment. I love the word trough; it's just a great use of the word, right? And then it gets to the plateau of productivity, which is, like, boring but useful. Let's define AI as a toolkit of things that includes what I'll call advanced data science tools: LLMs, which are a product of artificial intelligence research but are not artificial intelligence, and machine learning.
Some dimensions of this like anti-fraud use and finding unique novel malware, they're quite mature. They've been around for a long time and they've been doing things for a long time. But, if we ever do really create artificial intelligence of the sort that was being hyped at the beginning of 2023, we won't call it artificial anymore. It'll just be intelligence, right, because there'll be nothing artificial about it.
And we are not there. No one has yet surfaced sentience, or reasoning, or initiative. They have surfaced understanding, and that in itself, as an oversimplification, either hits the uncanny valley for us, meaning it gives us the creeps, or it makes us infer more intelligence than is there.
And that's scary. Both are scary. So, what I'm going to say is that the AI toolkit is going to empower attackers in the short term. We're going to see better phishing attacks. And that's why some of what we're talking about today is so important. Everything that came before, the awareness and training we've seen, things like look for the typos, look for the bad domains, look for the bad spelling, none of that's going to work anymore.

Unless it's by design, because they were looking for a certain optimal point where, you know, smart people just skip over it and the people that don't spot that sort of thing click. Unless that's the case, phishing is just going to come better targeted to its audience and more effective at getting past training. Then we're going to get combined arms, things like deepfakes alongside it. We're also going to see a spike in both the genetic and phenotypic variety of attack code, by which I really mean its source code and object code, which is going to make the non-signature-based defenses we've seen so far in the industry less effective for a while.
But, long term, I'm actually super excited about all the different places we can apply it. We mentioned negative trust. The ability to do that well is interesting. Risk scoring, we mentioned today. The ability to do that in an even better way is exciting. So, many of the technologies we've talked about, policy management, I'll give you two examples. One is actually getting closer to finer-grained authorization at scale is going to become much more manageable thanks to the AI toolkit.
Here's another one. In the GRC world, we always wanted the G and the C to be small, by which I mean the governance, which is the issuance of policy, and the compliance, which is checking that what happens matches what you thought. We want to write policy in something close to plain, natural language, English or whatever language you use, and then have the machine interpret it and behave in a consistent way without having to do the translation.
Well, guess what can do that really well? LLMs are pretty good at that, by the way, just as they are, with some tweaking. So, the human-language-to-machine-language interface could become much more efficient for policy standardization, and then for auditing. There are a lot of places it can be applied, but we're going to get to that plateau of productivity, and it's only going to become, "Hmm, yeah, we're sort of using it, we're soaking in it."
I'm reminded of a report I heard, perhaps apocryphal, of somebody who rode in one of those self-driving cars and said, "The first five minutes were so exciting, and then it was boring." It's going to be a lot like that. So, my suspicion is this too shall pass. We'll go into 2024, and unless there's some other major breakthrough, and God, I hate making predictions, we're going to take a lot of it for granted. It's going to get into what we do and how we do it, and in the medium to long term, the defenders are going to start to reap the benefits as opposed to the attackers.
We're going to see some hype about what the attackers are doing. Make no mistake on that. I think people are going to get very, very scared and upset and there's going to be a lot of hyperbole. But, long-term, I think we are actually going to find new ways to apply it in deception technology, in policy, in better detection, in better response, automation, co-piloting, those sorts of things. This is such a huge area, by the way.
We could do multiple webinars just on that. But, Kurt, did I leave anything out that you thought I'd probably bring up?
No, I don't think so. We're coming up, you know, close to time here. I want to leave some time for questions.
Oh, I think we had some questions too.
We do. So, I do want to remind everybody that the Q&A portion, please feel free to submit some questions. Before we go into it, I just kind of want to, you know, wrap things up on kind of the philosophy here. You know, I'm quoting...
I nearly quoted something else by Einstein; I'm glad I didn't, since I didn't realize you'd put that in there.
There we go. He's a quotable gentleman. But, you know, he's often quoted as saying that insanity is doing the same thing over and over again and expecting different results. And I can't help but think how that applies to a lot of what we've just trusted as the way to do authentication: relying on a password because that's what's in the systems, relying on MFA because that's what we're told to do by regulators and auditors. But, just to kind of...
You know, and we have this book on "Zero Trust Authentication." You can scan the code here, you can download it from our website. And it really gets into details around a lot of the philosophies that, you know, we've been talking about today. But, I'm just going to kind of sum it up that no credential should ever move. And that's...you know, we talked a lot about passwords. But, it's also session cookies.
It's access tokens, which unfortunately most organizations treat like shared secrets that move all over the place. It's SSH keys, GPG keys. No credential should ever move, and if they can't move, they can't be stolen. And access should always be guarded by policy, with hardware proof that what we expect to be happening is happening at the time the user authenticates.
And in doing this, we can answer not only who you are, but whether the things I expect to be on your device are actually in place, relative to what you're trying to do and the access you're trying to get. And back to the fact that this needs to be continuous, and it needs to leverage that broad ecosystem. If you do this, you can prevent credential theft, not just reduce it.

You can prevent adversary-in-the-middle attacks, not just reduce them. And it should be easy on the end user. So, with that, I'll turn to the questions. We had a couple that came up. I'll take the first one here, Sam. It says, "Why not use biometrics versus keys as the primary verification?" I'll give my standpoint, Sam, and I'd love for you to add on here too.
You know, the biometrics are a valid form of user verification. There's various levels of it from, you know, kind of what...
Something you are. Yeah.
Yep, that's it. It's something you are. It's built into the devices. You can get even deeper with, you know, bio hashing, etc. In most cases...
I would say, by the way, that a biometric is the password you can't reset if the crypto system's not built right. So, check the crypto system as opposed to the piece of your body you're using. Yeah.
Dead on. You know, asymmetric cryptography, where you're just doing digital signatures, is how authentication should work. So, for that biometric to validate the user, you need that cryptographic binding, and you need to do it through asymmetric cryptography.

And that gets back to this aspect: the key can't move. If it's in the enclave, you have a guarantee it doesn't move. Many of the systems you might be authenticating to today with a biometric probably still have a password behind them that an adversary can obtain. And if that identity is compromised, an adversary coming from another device with those credentials still gets in.
It's only when you bring these two things together that you can truly eliminate credential theft and adversary-in-the-middle attacks. Another question, which I think we've touched on: how do you incorporate zero trust into IAM tools like SailPoint when employees from companies exist within your profile? And, you know, absolutely.
You know, the IGA tools are critical systems for tracking and assessing who is gaining access to what, whether that access is appropriate based on their role in the organization, instilling controls, and doing attestation reviews where managers validate those people, even incorporating third parties, you know, a consultant that had access long ago. Tying that back to authentication is critical.

Like what we're talking about here: how do you put better controls around how those people gain access to what you've given them access to? But, as I said before, you can assess your users across different levels and different inputs to understand how restrictive you want to be where you need to be, and how permissive you can be where you don't. So, really tie those different systems together.

But, I think the other aspect is really showing that immutable audit proof of exactly who's gotten access, from what device, and the state of that device. I'm going to throw this one to you, Sam. [crosstalk]
...any applications, anecdotes for zero trust in OT environments?
Yeah. So, for this one, I would say that it's not just about users. Think of it as subject, verb, object, where the subject is the source of a connection or access request, and the object is usually an application.

But the subject could also be a workload, an OT device, an IoT device, a user, or a device itself acting as part of a protocol exchange, for instance. And likewise the object, the destination, could be another workload, another OT or IoT device, etc. So, yes, there are lots of instances of this.
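That subject-verb-object framing can be sketched in a few lines. The kind names and policy entries below are my illustration, not any product's data model; the point is that policy evaluates the whole triple, not just the user.

```python
from dataclasses import dataclass

# Subjects and objects are not just users and apps: workloads and
# OT/IoT devices appear on both sides of a request.
SUBJECT_KINDS = {"user", "workload", "ot_device", "iot_device"}
OBJECT_KINDS = {"application", "workload", "ot_device", "iot_device"}

@dataclass(frozen=True)
class Request:
    subject_kind: str   # who/what initiates the request
    subject_id: str
    verb: str           # e.g. "read", "write", "actuate"
    object_kind: str    # who/what is being accessed
    object_id: str

# Illustrative allow-list over full triples (least privilege):
POLICY = {
    ("workload", "read", "application"),
    ("user", "read", "ot_device"),   # operators may read telemetry
    # note: nothing here permits "actuate" on an ot_device
}

def allowed(req: Request) -> bool:
    return (req.subject_kind, req.verb, req.object_kind) in POLICY

print(allowed(Request("user", "op7", "read", "ot_device", "plc-3")))     # True
print(allowed(Request("user", "op7", "actuate", "ot_device", "plc-3")))  # False
```

An allow-list over triples is the least-privilege default: anything not explicitly granted, including the verb, is denied.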
First, though, they asked for anecdotes and applications. So, one: some devices have really thought carefully through the OS or the RTOS, the real-time operating system, and the use of hardware roots of trust, and have thought through authentication and machine identity. Some have not. Some have done some runtime protection.

There are some vendors out there doing that. And some have thought of segmentation: just as we do application segmentation and workload segmentation in VPCs, we can also logically create clusters or segments of OT devices, protect those, and put them into a zero trust environment. So, there are quite a few things here. What I would do is talk to your vendors, see what exists for authentication and identification of the device, and then ask who it is.

And by who, I don't just mean people. Who is it that's going to connect to these, and for what? And I would say it's not just zero trust; it's also that least function I alluded to earlier that almost no one talks about. It's funny, I know we're out of time, but this one's worth it. I mentor some students, and one of them really wanted a CVE number.
And I told him, and it's a funny story, but I'll shorten it, that most vendors in the OT world just take an off-the-shelf operating system and slam it in, regardless of whether all its functions are needed. And as a result, sometimes they fix a vulnerability in a device but forget to fix it in the source OS image before they put it in the next device.
And sure enough, he went and tested later devices by a manufacturer that had patched it and found the same vulnerabilities were reappearing. He got his CVE number, and he's kept testing it and keeps finding more. And he's appeared and presented at Black Hat, and he hasn't even gotten out of college yet.
And he's a great guy. [inaudible] Rajesh. Shout out to him. But, yeah, this stuff happens. So, I know we're out of time. Do we have time for one more, Kurt?
I'll take the last one; it's kind of a quick one. "How would this be implemented across all the tools we use? Is it supported by vendors in order to eliminate using passwords?" And another asking about systems that don't support SSO. Our approach was to go after the low-hanging fruit first, the cloud-based modern environments where IdPs are in place, and expand from there.
Our belief is that we need to do this for all forms of authentication, even the older systems we're still logging into. We need to move our authentication systems to ones based on digital signatures and asymmetric cryptography, leveraging the enclaves in today's modern hardware, phishing-resistant to address these adversary-in-the-middle attacks, and delivered by a company that really cares about the user experience and is committed to leveraging, not replacing, those investments.
That portfolio, that breadth of solutions and systems will continue to expand because as I said, our philosophy is it's not just for passwords. It's for anything, any kind of credential out there, SSH, GPG, access token, session cookies. Any credential needs to be protected. And in that way, we can truly solve root problems causing breaches through initial access today.
Kurt, two things real fast. One is, I would add to that if you do zero trust, right, you can also simplify your stack, which means you may not have to deal with as many legacy providers. And I'm not going to say go throw this out and that out because you need to look at your stack to be careful about it. And the last thing is the quote that I was going to use from Einstein about AI.
Be careful of any predictions out there, because when Einstein was asked what World War III would be fought with, he said, "I don't know, but World War IV will be fought with sticks and stones." So, be careful about all the hype around AI, even as applied in this industry. Deal with the real things that happen, not with what everybody pontificates or academically talks about.
Thank you, Sam, for joining me today.
Thank you, Kurt. Thanks for having me.
Thank you, everyone, for your participation today. If you want more information on Zscaler and Beyond Identity, please reach out. Thank you again for the time. And look forward to talking to you again in the future.
Have a good one. Thank you.
Bye for now.