Billions Spent on Security—and We’re Still Losing to Phishing
TL;DR
We collectively spend hundreds of billions of dollars a year on security, yet most industry data says around eighty percent of incidents still trace back to the identity system. Beyond Identity CEO Jasson Casey walks through five myths behind that gap: that shared secrets can be secured, that a second device makes you safer, that training fixes phishing, that "managed" means safe, and that authorization is a one-time event. He argues for device-bound credentials in trusted execution environments, single-device MFA, phishing-resistant protocols, device posture checks at authentication, and continuous authorization.
Full Transcript
Welcome to another Beyond Identity webinar. We do many, many of these. So if you've been to more than one, you might know who I am. But by way of introduction, my name is Jing. I lead marketing here. And with me today, I have a very special guest, Jasson Casey. Who are you, Jasson?
Who am I?
Yeah. Who are you? Why don't you introduce yourself to the audience?
Sure. I'm Jasson Casey. I'm the CEO and cofounder here at Beyond Identity.
Indeed, he is. And today, we are going to talk to y'all about the fact that we're spending billions on security and somehow still losing to phishing. So what's going on there? I'm going to add our slides to the stage. And without further ado, we're gonna kick things off.
All your assumptions are wrong. What's going on?
So, yeah, let's get this started. First of all, we at Beyond Identity are not spending billions. You, the audience, are spending hundreds of billions of dollars chasing security results. And we think that some of the assumptions baked into that chase are not only wrong, but are gonna lead to the wrong outcomes for us and to frustration. So let's get it started.
So this slide isn't gonna really surprise anyone. Right? We have been in a world of increased security spend for decades now, and yet outcomes aren't necessarily getting better, certainly not when measured against the dollar amount.
Next slide.
So what's actually going on under the hood? This is a particular snapshot taken from Verizon's database of incident response, but you could also do some background research with Mandiant or CrowdStrike's threat intelligence report, and you'd see the same thing. And what they're saying is eighty percent or more of your security incidents are caused by the identity system.
So let's put that in perspective. You have a SOC or you have an MSSP partner. And eighty percent of the exhaust of that organization is essentially working failures in your identity system. Your identity system is how you, your employees, your partners, and your customers get access to services and data.
Now why is this the case? We have a thought, and that thought is: historically, identity is a product of productivity. Identity is about what gets you to work. Identity has not historically been a security product.
But what we're focused on here at Beyond Identity is the idea that identity is actually the future of security, for a lot of the reasons that I'm gonna walk you through today, but also because of where the industry is going.
So the slide we have up right now is just a quick state of the world. Here are a couple of threat actors that you've probably heard and read about in the news. Some of these are state actors, some of these are organized crime, and some of them, I don't know if I would really call them organized crime so much as organized adolescents having fun, but it's a bit more than that. There's a striking theme amongst all of them. If you look through some of the tactics, techniques, and procedures that they're actually using, they're attacking the identity system.
They're attacking credentials of the identity system. They're attacking users of the identity system. And so this is just another way of making the same point as the previous slide, which is that the techniques really being used today, or at least the large majority of them, are against the identity system.
There we go. So what we mean by this is the adversary is actually not breaking in. They're logging in, and walking in through the front door.
So why is this the case? This gets back to these assumptions. Right? A lot of us think about identity as a security system.
But this isn't really its journey. Identity's journey into corporate America was about easing the IT burden: to make it easy for people to access services, to make it easier for us to understand the roles and permissions of people and groups and what they actually need access to. The focus hasn't necessarily been on identity as a first-order security system. It has really been about how do we drive more productivity with a global and expanding workforce. We think that ultimately is where these problems started, and where these false assumptions took root and helped make infrastructure so easy to exploit.
So what are some of these assumptions? Let's kinda get to it. This is kinda what we think the world, looks like when we think about these identity systems, but we would argue it looks more like this.
Not exactly, what I thought. Right? So, move forward one more.
So let's start with the first myth. Let's get concrete. And the first myth is that shared secrets can actually be secured.
So let's ground this a little bit. What do we mean by shared secrets? Well, it's easy to think about a password as a shared secret. It lives in your brain.
You're clearly going to share it during enrollment with a service. Hey, here's my password I wanna use to log in another time. But there are many other secrets of the sharing variety that help power your organization.
For instance, oftentimes session cookies, access tokens, bearer tokens, these are shared secrets. Oftentimes, shared secrets are baked in as keys to APIs.
When you actually look at how some of these tokens work, like time-based one-time passcodes, they're a slightly more elaborate shared secret, but still, fundamentally, they're a shared secret: prove that you have this thing that was seeded the same way and follows a deterministic walk, as the sketch below illustrates.
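To make that concrete, here's a stripped-down TOTP sketch in Python, in the style of RFC 6238 (illustrative only, not production code). The authenticator app and the server both have to hold the same seed and walk it forward with the clock, which is exactly what makes it a shared secret.

```python
# Minimal TOTP sketch (RFC 6238 style) showing why it's a shared secret:
# the client and the server both need the same seed to compute the code.
import base64, hashlib, hmac, struct, time

def totp(seed_b32, at=None, step=30, digits=6):
    """Derive the current one-time code from a base32 seed."""
    key = base64.b32decode(seed_b32, casefold=True)
    counter = int((at if at is not None else time.time()) // step)   # the deterministic "walk"
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Both sides hold the same secret, so both sides can compute (and leak) the code.
shared_seed = "JBSWY3DPEHPK3PXP"      # enrolled on the phone AND stored by the server
print(totp(shared_seed))              # what the user types
print(totp(shared_seed))              # what the server computes to compare against
```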
And we argue that these shared secrets can't actually be secured.
And the reason we argue they can't be secured is when you think about the nature of how a shared secret works, you have to share it to use it.
And in the world of computing, sharing is really a fancy way of saying reads and writes, copies and writes: reading something out of my memory, reading something off of my desk, and writing it out a network socket, writing it to the peer device that I'm actually communicating with. And every time that secret moves, it ends up in the memory of some computer.
How do I track its location across all of those computers?
In a modern web service, let's say one that your organization is building, that web service is probably gonna be based on containers, maybe some Docker containers deployed in some sort of Kubernetes cluster. That Kubernetes cluster is probably balancing across those containers using something called a service mesh, which is essentially an application load balancer. It's opening your TLS connection, which means it now has copies of any secret sent through it.
Maybe you have some sort of Kafka message bus in your architecture.
All of the memory of those devices now has access to those shared secrets. Let's say you're deploying in Amazon across different zones and maybe even across different regions, so you're using their application load balancer. Again, now those secrets live in the memory of those devices.
Do you use content distribution? Do you use Cloudflare? Do you use Akamai? Do you use Amazon CloudFront? Those services now have copies of your secrets in their memory. And then let's say an organization who's using your service falls under Sarbanes-Oxley, or some other compliance regime that has a connection recording requirement. They're gonna be using forward proxies in their environment that are opening that TLS connection. And, again, your secrets are now all in their memory. These are all third parties. How do you track them all?
How do you ensure they're all protected? How do you ensure they're all securely erased?
These third parties can have insiders. These third parties can be exploited themselves. Most of you are probably familiar with Cloudbleed, a very famous vulnerability discovered in the late twenty-tens. The gist of it is that a well-crafted packet created a memory dump of the application load balancer, the proxy, and you essentially got a memory dump of a bunch of traffic that might be yours or might be someone else's. And so it's a good example of how it's just not really possible to secure shared secrets. By their very nature, they have to move. And the fact that they move creates this ever-expanding surface area that you have to protect, and that's just almost impossible. So what do you do about it?
Well, if we're arguing you can't secure shared secrets, then we're clearly saying stop using shared secrets. Let's move to an asymmetric secret. Right? Most of you are familiar with asymmetric cryptography. This is the technology that makes it possible.
There's some other tech, but this is the main one that makes it possible to not have to share a secret. But you can go a step even further than that: you can combine asymmetric cryptography with something called a trusted execution environment.
A trusted execution environment is a fancy term for what's essentially a crypto coprocessor, a coprocessor with some special properties. Number one, the memory of this coprocessor is isolated. It's not connected to your computer's memory.
And it has special operations. Think of it as a jail. You can create key pairs in this jail where only the public key can leave the jail; the private key literally cannot leave. There is no read operation over that private key.
So when you want to conduct signings, you bring the document, the piece of data, to the jail, you hand it through the bars, and you ask the key to sign that document for you. This is how you can guarantee that private key, that private material, does not move. If it does not move, the surface area has just shrunk to this singular device, to this singular thing. And from an attack surface perspective, we've just gone from stealing credentials at scale to having to steal the device physically.
Right? So, again, a drastically different attack surface. And this is what security is all about: how do I raise the cost to the adversary in an asymmetric way to dissuade them from this technique, from this device, from even me as a target?
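Here's a minimal sketch of that model using an ordinary software key from Python's cryptography package; in a real deployment the private key would be generated inside a TEE (a TPM or Secure Enclave) and be non-exportable. The point is that the verifier only ever needs the public key, so nothing secret has to move.

```python
# Sketch of the asymmetric model: only the public key leaves the device, and the
# verifier never needs the private key. In production the private key would live
# inside a TEE and be non-exportable; here it's a plain software key for illustration.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

private_key = ec.generate_private_key(ec.SECP256R1())   # stays on the device ("in the jail")
public_key = private_key.public_key()                   # the only thing shared at enrollment

challenge = b"nonce-from-the-server"
signature = private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))  # "hand it through the bars"

try:
    public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("authenticated: proof of possession, and no secret was transmitted")
except InvalidSignature:
    print("rejected")
```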
A little comment on the pervasiveness of this technology. These crypto coprocessors pretty much exist everywhere. If you use Microsoft Secure Boot, you're leveraging this technology. If you've ever bought a cup of coffee using Apple Pay or Google Pay, you're using this technology. If you're playing the new Battlefield 6, you're using this technology. And this technology can be used to guarantee keys don't move, and then you can build gadget constructions with these keys to get even stronger properties, like: the software that I'm running has not been modified and is original, OEM-guaranteed, so to speak.
Next slide.
So let's move on to myth number two.
Myth number two is that two devices make me more secure than one. And we wanna challenge this one pretty hard, because it's been drilled into our heads that if you have a second device, two devices are harder than one. And it could be true if the right network protocols were in place that could actually guarantee that these are the two devices I actually expect, that they're next to each other and paired in some interesting way, and that it's not possible to man-in-the-middle the connection.
That burden gets very, very difficult. And, of course, none of the protocols in play right now with multi-device authentication do anything like what I just said.
And this is evidenced, right? TOTP can be man-in-the-middled. Push notification can be bypassed, either through push bombing or by man-in-the-middling the primary account. Number match can also be bypassed. Right? Because, again, there's nothing in these security tokens that inherently speaks to the identity of the device they're on.
The identity of the device that the user is driving, right, the one that's gonna receive the data or the service and bring the risk to that service and data. And so second-device authentication gives us a feeling that we're doing something to actually increase security. But the reality is it's making it harder, from a systems and network perspective, to really analyze and understand what's going on under the hood.
I'm sure many of you have logged in and got, like, the screen pop. It's like, hey, are you really coming from New York? Are you really in this particular city in Virginia?
The reason for that pop-up is your provider has no idea if your second device is near you or not. It has no idea if you've been man-in-the-middled or not, and it's trying to stitch these things together. So what do we do? Well, the answer is don't give up on multiple factors, but focus on the primary device. So how do you do single-device multifactor authentication?
Because if you can do single-device multifactor authentication, number one, you start working on the device that someone's on. If you're working on the device that someone's on, it now becomes possible for you to comment on the device identity relative to the user identity, relative to the service and the data that someone's actually asking for access to. It makes it possible for you to actually detect man-in-the-middle. It makes it possible for you to detect and prevent session hijacking.
It's still possible to do multifactor and make sure it's the right user. And the way you do that is, obviously, through possession, through a key in that jail that I mentioned earlier, but you can also use things like biometrics or PINs. Right? So inherence factors or knowledge factors that become unlocks to that crypto coprocessor.
So remember earlier, I described the jail that has the key in it: you bring it the document and say, sign this. It turns out, most of the time, the way we use those crypto coprocessors is we create keys in the jail with a policy. So when you bring a document to that jail, you won't get the document signed right away. The jail's gonna prompt you, and it's gonna say, okay.
I'll sign this, but first I want you to give me a biometric. And so if you're on an Apple device, maybe you touch the fingerprint sensor. If you're on a mobile device or a Windows laptop, maybe you smile at the camera.
That turns into something called a stable feature vector that's then fed to the jail. The jail runs it through what you can think of as a secure hash.
Only if the result of the secure hash comes out to the same value that key was enrolled with, i.e., it's the same biometric that enrolled the key, will the jail actually allow the signing of the document. So you can introduce two-step, three-step, really multistep or multifactor authentication from this singular jail on this device.
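Here's a toy model of that enroll-then-match gate in Python. A real Secure Enclave or TPM enforces this inside the coprocessor and never exposes the private key or the enrolled template; this sketch just illustrates the logic described above: the key only signs if the presented feature vector hashes to the enrolled value.

```python
# Toy model of a policy-gated key "in the jail". Real TEEs enforce this in hardware
# and never expose the private key or the enrolled template; this only illustrates
# the enroll -> match -> sign flow described in the transcript.
import hashlib
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

class JailedKey:
    def __init__(self, enrolled_feature_vector: bytes):
        self._private_key = ec.generate_private_key(ec.SECP256R1())   # never leaves
        self._enrolled = hashlib.sha256(enrolled_feature_vector).digest()

    def public_key(self):
        return self._private_key.public_key()   # the only exportable part

    def sign(self, document: bytes, presented_feature_vector: bytes) -> bytes:
        # Policy check: only sign if the presented biometric matches enrollment.
        if hashlib.sha256(presented_feature_vector).digest() != self._enrolled:
            raise PermissionError("biometric did not match the enrolled template")
        return self._private_key.sign(document, ec.ECDSA(hashes.SHA256()))

key = JailedKey(enrolled_feature_vector=b"stable-feature-vector-of-enrolled-user")
sig = key.sign(b"login challenge", b"stable-feature-vector-of-enrolled-user")  # allowed
# key.sign(b"login challenge", b"someone-else")   # -> PermissionError: no signature
```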
Alright. Myth number three. We can train the problem away. Don't click the bad link.
So, does this smell like chloroform? Right? Is this a bad QR code?
The act of testing either of those situations is the act of detonating them. Also, humans are just bad at this. We're really, really bad at this. So we've got a chart up here right now, and it's just showing some very basic mathematics across three different organizations.
And these three different organizations have different click-through rates. According to some of our partners in this industry, the best organizations, best in class, only have a four percent click-through rate. So even at the best-trained organizations out there, four percent of the population consistently fails phishing training. Right?
Now we all know most of us aren't best in class. Most of us live in the other two: ten percent click-through or even twenty-five percent click-through. What these charts are showing is, for these three organization types, how many attempts does it take the adversary before they have a ninety percent certainty of detonating some sort of click-through link on a target device?
And the answer is kinda shocking. At the worst organization, it only takes eight attempts to get to a ninety percent certainty. At the best organization, it only takes fifty-six attempts to get to a ninety percent certainty. Fifty-six attempts is not a lot.
Right? Especially in the world of AI and agents, where I can automate and script not just all of the attempts, but the chameleon-like nature of sounding and looking and being exactly what the target or the victim expects.
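The math behind those numbers is just the complement of repeated misses. If the per-attempt click-through rate is $p$, the chance that at least one of $n$ attempts lands is $1-(1-p)^n$, so the attempts needed to reach ninety percent certainty are:

$$
1-(1-p)^n \ge 0.9 \quad\Longrightarrow\quad n \ge \frac{\ln 0.1}{\ln(1-p)} \approx
\begin{cases}
8 & \text{for } p = 0.25\\
22 & \text{for } p = 0.10\\
56 & \text{for } p = 0.04
\end{cases}
$$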
So, again, this is just not the way.
So let's remove the human from the link.
Rather than training the human to detect if something is bad, let's do the job we should have done in the first place, which is engineer the authentication protocol correctly: to detect whether it is actually being challenged by the right service or not, to detect whether the challenger is deviating from something that appears in the TLS layer, which can actually happen, and which we've seen in the wild. Let's make the protocol responsible for the problem. Let's not continue victim-blaming the humans for failing the test of, ultimately, is this a bad QR code, or does this smell like chloroform?
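As a rough sketch of what "make the protocol responsible" means, modeled loosely on how WebAuthn-style authenticators bind signatures to an origin (illustrative Python, not any specific product's protocol): the client signs over the origin it actually connected to, so a challenge relayed through a look-alike phishing site yields a signature the real service rejects.

```python
# Sketch of origin binding, loosely modeled on WebAuthn-style flows: the signature
# covers the origin the client actually saw, so a signature produced while talking
# to a look-alike site fails verification at the legitimate service.
import json
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

device_key = ec.generate_private_key(ec.SECP256R1())
enrolled_public_key = device_key.public_key()

def client_sign(challenge: bytes, observed_origin: str):
    """The client signs the challenge together with the origin it is actually talking to."""
    client_data = json.dumps({"challenge": challenge.hex(), "origin": observed_origin}).encode()
    return client_data, device_key.sign(client_data, ec.ECDSA(hashes.SHA256()))

def server_verify(client_data: bytes, signature: bytes, expected_origin: str, challenge: bytes) -> bool:
    parsed = json.loads(client_data)
    if parsed["origin"] != expected_origin or parsed["challenge"] != challenge.hex():
        return False                     # relayed via a phishing origin, or a stale challenge
    try:
        enrolled_public_key.verify(signature, client_data, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

challenge = b"\x01\x02\x03\x04"
data, sig = client_sign(challenge, "https://app.example.com")
print(server_verify(data, sig, "https://app.example.com", challenge))   # True
data, sig = client_sign(challenge, "https://app.examp1e.com")           # user lured to a look-alike
print(server_verify(data, sig, "https://app.example.com", challenge))   # False
```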
Alright.
Myth number four. I only allow managed devices to connect to my network, and managed means it's been hardened and it has all my security tools on it. Therefore, I don't have any device problems.
Well, we find that it's a myth. And the reason we find that it's a myth is, just think through this. Right?
So tell me about your executives. Do you really not have any BYOD exceptions for any of your executives?
Do you really not have outsourced PR? Do you really not have outsourced software development? Do you really not have outsourced IT? Do you really not have business partners that help you on certain large deals? In all of these cases, do they have access to any of your services or your data? Right?
So that's kinda number one. Number two is, even for the devices that are managed, how do you know they're doing what you think? How do you know your design intent is actually what you're getting?
You know, in the software engineering world, one of the funniest things that you learn pretty quick is that just because your code compiles doesn't mean it actually does what you say. And maybe an even harder lesson is that just because it passes your testing doesn't necessarily mean it does exactly what you think and nothing else.
Same thing shows up in IT. Same thing shows up in system management, and we see this all the time in our customer deployments. Their design intent is not always faithfully replicated across all devices that have access to their infrastructure.
And that creates problems. Right? If I don't have the device hardened, if the device is not CMMC compliant, if the device is not PCI DSS v4 compliant, and I'm then giving it access to data that falls under the protection or the controls of one of those regimes, I just blew my compliance. Best case, I now have to run an incident and resolve it; worst case, I jeopardize or imperil some of my revenue.
So what are we gonna do about this?
Well, let's think about an airport for a minute.
Where security matters, they ask two questions. Are you the right person, the ticketed passenger?
And are you safe enough to be allowed onto the airplane, with no guns, knives, or bombs?
And now let's try and apply that metaphor to identity and access.
Am I the right person on the right device, and is my device safe enough for the service or data it's actually asking for?
Let's join those questions. Let's make them the same question. Let's make authentication actually say: prove that you can use the key in that jail that has the high policy, the multifactor policy, on it. Prove that it's a key enrolled to one of the devices I expect this user to use. And prove that device is hardened to the level required for whatever it's actually asking for.
If it's asking for very sensitive things, let's make sure CrowdStrike is running. Let's make sure all of the baseline hardening on the device is on. Let's make sure Jamf or Intune is actually on with the policy that we expect. Let's make sure the ZTNA solution from Zscaler is present and active.
Let's do these things in real time relative to each authentication.
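A toy version of that joined check might look like the sketch below. The signal names and sensitivity tiers here are illustrative assumptions, not an actual policy schema from any product; the point is that the user, the device binding, and the device posture get evaluated together at every authentication.

```python
# Toy posture-aware access policy: user, device, and device posture are evaluated
# together at authentication time. Signal names and tiers are illustrative only.
from dataclasses import dataclass

@dataclass
class DevicePosture:
    edr_running: bool          # e.g. CrowdStrike agent healthy
    mdm_enrolled: bool         # e.g. Jamf / Intune with the expected policy
    ztna_active: bool          # e.g. Zscaler tunnel up
    disk_encrypted: bool
    firewall_enabled: bool

REQUIREMENTS = {
    "low":    ["disk_encrypted"],
    "medium": ["disk_encrypted", "firewall_enabled", "mdm_enrolled"],
    "high":   ["disk_encrypted", "firewall_enabled", "mdm_enrolled", "edr_running", "ztna_active"],
}

def authorize(user_verified: bool, device_enrolled_to_user: bool,
              posture: DevicePosture, sensitivity: str) -> bool:
    """Allow access only if the user, the device binding, and the posture all check out."""
    if not (user_verified and device_enrolled_to_user):
        return False
    missing = [signal for signal in REQUIREMENTS[sensitivity] if not getattr(posture, signal)]
    return not missing

laptop = DevicePosture(edr_running=True, mdm_enrolled=True, ztna_active=False,
                       disk_encrypted=True, firewall_enabled=True)
print(authorize(True, True, laptop, "medium"))   # True
print(authorize(True, True, laptop, "high"))     # False: the ZTNA tunnel is down
```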
Okay. Myth five.
Hey, once I've achieved authorization, I'm good. The devices that have authorizations into my services and data aren't really a major source of risk.
This one is, yeah, I think people follow the argument pretty quickly on this one. It's more that this one just gets forgotten. Right? A lot of services were built in kind of a lazy way, and they hand out authorizations that never die.
So I'm sure some of you still remember all the action we had to go through two to three months ago, and when I say we, I mean the industry, with the SalesLoft breach. Remember, there was this plug-in for Salesforce. And whether you were still currently using it or had even turned it off, it was still refreshing your access tokens, your authorization tokens, in the back end, even if you thought you had deleted your tenant. Right? So it was kind of a never-dying authorization into your infrastructure, and when the adversary compromised a third party, they were able to use that token to download any data that token had access to in that environment. That's a prime example of why long-lived authorizations are a problem.
Some of you have moved to short-lived authorizations, and your users basically thank you by complaining all the time about why they have to do multifactor authentication again every three hours, every eight hours, etcetera.
And so we think the way you handle this is a little bit different.
Next slide.
We think the idea is: let's move auth to be continuous.
It doesn't always have to be continuous and involve the person, but it should be continuous. And the authorizations should constantly be refreshed based on: is this the same user? Is this the same device?
Is the security posture of the device at the same level as when I granted the original authorization, or has it improved? Then and only then do I consider it kind of a no-op. But if something is degrading, then maybe I have to, worst case, withdraw the authorization, signal some of the security platforms to revoke downstream authorizations, lock them out of their tunnels, lock them out of their connections, maybe even log them out of their end user device.
Or maybe I don't need to be that drastic.
Maybe something has changed, but what I really wanna do is just trigger an authentication with the user, or go pulse some particular piece of software, whether it's a cloud service or endpoint software, and get a better reading on the situation.
In all of those cases, moving to continuous is what solves that problem.
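Here's a minimal sketch of what that continuous loop could look like (illustrative Python; the session fields, thresholds, and actions are assumptions, not any product's API): keep re-evaluating the same user, device, and posture signals behind every active session, and respond proportionally, from a no-op, to a step-up, to a full revocation.

```python
# Minimal sketch of continuous authorization: re-evaluate the same user / device /
# posture signals behind an active session and respond proportionally.
# Field names, actions, and intervals are illustrative assumptions.
import time

def reevaluate(session: dict) -> str:
    """Decide what to do with an active session based on its current signals."""
    if session["current_device_id"] != session["granted_device_id"]:
        return "revoke"      # the session is now presented from a device we never authorized
    if not session["posture_ok"]:
        return "revoke"      # e.g. the EDR agent was disabled after the original grant
    if session["risk_score"] > session["risk_at_grant"]:
        return "step_up"     # degraded but recoverable: ask for a fresh factor or pulse the agent
    return "noop"            # same user, same device, posture as good as or better than at grant

def revoke(session: dict) -> None:
    # Stub: kill tokens and tunnels, signal downstream systems to drop their authorizations.
    print(f"revoke {session['id']}")

def step_up(session: dict) -> None:
    # Stub: prompt the user or query the endpoint agent for a better reading.
    print(f"step-up {session['id']}")

def continuous_loop(sessions: list, cycles: int = 1, interval_seconds: float = 0) -> None:
    for _ in range(cycles):
        for s in sessions:
            action = reevaluate(s)
            if action == "revoke":
                revoke(s)
            elif action == "step_up":
                step_up(s)
        time.sleep(interval_seconds)

continuous_loop([
    {"id": "alice-laptop", "current_device_id": "d1", "granted_device_id": "d1",
     "posture_ok": True, "risk_score": 10, "risk_at_grant": 10},
    {"id": "bob-session", "current_device_id": "d9", "granted_device_id": "d2",
     "posture_ok": True, "risk_score": 10, "risk_at_grant": 10},
])
```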
So, future proofing. How do we move beyond today?
Thanks.
So, obviously, we think the answer to these myths is kind of a fundamental thing. Rather than chasing all of these security controls that you've been deploying for decades with yet another layer, what if we could do something that let us throw a couple of layers out? Not because layering is bad, but because if a credential doesn't move, I don't really need defenses around moving credentials.
If I'm using single-device multifactor, then I don't necessarily have the burden of all these peripheral devices.
If I'm actually verifying the security of the device and the person relative to their access, I have a much cleaner picture of what's going on. I also have a simpler record in my SIEM: one access record tells me about the user, the device, and the security of the device at the moment of access and at each re-auth. It's just a cleaner picture.
It's a simpler mental model. It's easier to work with. It has implications on incident response: your blast radius is now precise. When I have an incident, I know exactly what device I'm talking about. But let's move towards the future. Right?
All of you have some sort of project with nonhuman workloads. Right? Whether it's automation workloads in the cloud or on desktops, whether you're building AI agents or even flying drones.
In all of these scenarios, you have real identity security problems that you have to solve.
And it's very easy to actually solve them in a more principled way by guaranteeing credentials can't move, and by actually verifying the security of the device and the identity at the time of auth and continuously thereafter.
And so when we say future-proofing, what we're really saying is that by answering these questions, by attacking these myths fundamentally, you're actually setting yourself up so that things like AI agents aren't going to scale and spread credential sprawl all over your future organization.
Nothing to sprawl because your future credential doesn't move.
I think that's a really powerful idea. Right? So what I heard you say at the end was: once you sort of break through these myths, there's a better world, a better tomorrow beyond the presumed realities of today. Right? So if there are no passwords, there's nothing for your IT help desk to reset.
If you eliminate the biggest source of incidents, which is identity-based threats according to pretty much every report out there, you can actually reduce your risk of breaches. And, you know, some of our customers say things like, our insurance premiums have never gone up since we implemented Beyond Identity. Or, you know, we save an average of ten minutes per employee per day, and that's from a business unit within SAP that we work with. Right? So not only are there significant material benefits today, some of these concepts, a device-bound credential, continuous authentication, device security on the authenticating device, can extend to the problems of AI, where, you know, these are ephemeral identities.
These are identities that do things. Right? They're not called agentic for nothing. So it's a really powerful concept.
So, turns out, our company does something about all of these myths.
If you're curious about what this actually looks like in practice, I would encourage all of you to check out beyondidentity.com for more information. If you have any questions, I'm volunteering Jasson as tribute. Feel free to find him on LinkedIn or Twitter under Jasson Casey, Jasson with two s's. And, of course, you can reach out to us; we always have security experts on the line who would be more than willing to chat with you. Any parting thoughts, Jasson, before we let the audience get back to their days?
Addressing these myths simplifies your security architecture.
Anyone who's built platforms at scale understands that the knob that moves the needle the fastest is the simplification knob.
So if you can simplify your security architecture, you're going to reduce your help desk ticket rate. You're going to reduce your security incident rate. You're gonna give your workforce a simpler mental model to think about.
Like, the benefits are kinda three sixty.
Yeah. Yeah. Agreed. Alright. Well, thank you all for joining us for this webinar, and, we'll catch you at the next one. Bye.
TL;DR
Full Transcript
Welcome to another Beyond Identity webinar. We do many, many of these. So if you've been to more than one, you might know who I am. But in way of introduction, my name is Jing. I lead marketing here. And with me today, I have a very special guest, Jasson Casey. Who are you, Jasson?
Who am I?
Yeah. Who are you? Why don't you introduce yourself to the audience?
Sure. I'm I'm Jasson Casey. I'm the, CEO and cofounder here at Beyond Identity.
Indeed, he is. And today, we are going to talk to y'all about, the fact that we're spending billions on security and somehow still losing the phishing. So what's going on there? I'm going to add our slides to the stage. And without further ado, we're gonna kick things off.
All your assumptions are wrong. What's going on?
So, yeah, let's get this started. First of all, we at Beyond Identity are not spend spending billions. You, the audience, are spending hundreds of billions of dollars chasing security results. And we think that some of the assumptions baked into that chase are not only wrong, but are gonna lead to the wrong outcomes for us and frustration. So let's get it started.
So this slide isn't gonna really surprise anyone. Right? We have been in the world of increased security spend for, decades now, And yet outcomes aren't necessarily getting better, certainly not when viewed by a a dollar figure amount.
Next slide.
So what's actually going on under the hood? And this is a particular snapshot taken from Verizon's, database of incident response. But you could also do some background research with Mandiant or, CrowdStrike's threat intelligence report, and you'd see the same thing. And what they're saying is eighty percent or more of your security incidents are caused by the identity system.
So let's put that in perspective. You have a SOC or you have an MSSP partner. And eighty percent of the exhaust of that organization is essentially working failures in your identity system. Your identity system is how you, your employees, your partners, and your customers get access to services and data.
Now why is this the case? We have a thought, and that thought is, historically, identity is the, product of productivity. Identity is is about what gets you to work. Identity is not necessarily has not historically been a security product.
But what we're focused on here at Beyond Identity is we think identity is actually the future of security for a lot of the reasons that I'm gonna walk you through today, but also where the industry is going.
So the slide we have up right now is just kind of a quick state of the state of the world. So here are a couple of threat actors that you've probably heard and read about in the news. Some of these are state actors, Some of these are organized, crime, and some of them are don't know. It I don't know if I would really call them organized crime as opposed to organized, adolescents having fun, but but it's it's a bit more than that. There there's a striking theme amongst all of them. If you look through some of the tactics, techniques, and procedures that they're actually using, they're attacking the identity system.
They're attacking credentials of the identity system. They're attacking users of the identity system. And so this is just another way of making the same point of the previous slide, which is the techniques that are really being used today are against the identity system or at least the large majority of them.
There we go. So what we mean by this is the adversary is actually not breaking in. They're logging in, and they're logging in and walking in through the front door.
So why is this the case? This gets back to these assumptions. Right? So a lot of us, think about identity.
A security system. But this isn't really its journey. Identity's journey into corporate America was about easing the IT burden to make it easy for people to access services, to make it easier for us to understand roles and permissions of of of people and groups and what they actually need access to. It hasn't necessarily been on a focus of this is a first order security system. It has really been about how do we drive more productivity with my global and expanding workforce. We think that ultimately is where Where prob these problems started and where kind of these false assumptions kind of took root and helped it become so easy to actually exploit infrastructure.
So what are some of these assumptions? Let's kinda get to it. This is kinda what we think the world, looks like when we think about these identity systems, but we would argue it looks more like this.
Not exactly, what I thought. Right? So, move forward one more.
So let's start with the first myth. Let's get let's get concrete. And the first myth is that shared secrets can actually be secured.
So let's ground this a little bit. What do we mean by shared secrets? Well, it's easy to think about a password as a shared secret. It lives in your brain.
You're clearly going to share it during enrollment with a service. Hey. Here's my password I wanna use to log in another time. But there's many other secrets, that are, of the sharing variety that help power your organization.
For instance, oftentimes session cookies, access tokens, bearer tokens, these are shared secrets. Oftentimes, shared secrets are baked in as keys to APIs.
When you actually look at, how some of these, tokens work, like, timed one time passcodes, they're a they're a slightly more elaborate shared secret, but still, fundamentally, they're a shared secret. Prove that you have this thing that was seated in the similar way that has a deterministic walk.
And and we argue that these shared secrets can't actually be secured.
And the reason we argue they can't be secured is when you think about the nature of how a shared secret works, you have to share it to use it.
And in the world of computing, sharing is really a fancy way of saying reads and writes, copies and writes. Reading something out of my memory, reading something off of my desk, and writing it out the network socket, writing it to, the peer device that I'm actually communicating with. And every time that secret moves, it ends up in memory of some computer.
How do I track its location across all of those computers?
In a modern web service, let's say one that your organization is building, That web service is probably gonna be based on containers, maybe some Docker containers deployed in some sort of Kubernetes cluster. That Kubernetes cluster probably is balancing across those containers using something called a service mesh. That is an application load balancer. It's opening your TLS connection, which means it now has copies of any secret sent around.
Maybe you have some sort of Kafka message bus in your architecture.
All of the memory of those devices now have access to those shared secrets. Let's say you're deploying an Amazon in, across different zones and maybe even across different regions. So you're using their application load balancer. Again, now those secrets live in memory of those devices.
Do you use content distribution? Do you use Cloudflare? Do you use Akamai? Do you use Amazon CloudFront? Those services now have copies of your secrets in their memory. And then let's say an organization who's using your service falls under Sarbanes Oxley and has a recording require.
Or some other compliance regime that has a a connection recording requirement. They're gonna be using proxies, forward proxies in their environment that are opening that TLS connection. And, again, your secrets are now all in their memory. These are all third parties. How do you track them all?
How do you ensure they're all protected? How do you ensure they're all securely erased?
These third parties can have insiders. These third parties can be exploited themselves. Most of you are probably familiar with Cloudbleed, a very famous, vulnerability discovered in the late teens twenty teens. And the gist of it is a well crafted packet.
Created a memory dump of the the application load balancer or the proxy, and you essentially got a memory dump of a bunch of traffic that may be yours or maybe others. And so it's a good example of how it's just not really possible to secure shared secrets. By their very nature, they have to move. And the fact that they move creates this ever expanding surface area that you have to protect, and and and that's just almost impossible. So what do you do about it?
What do you do against, how do we're arguing you can't secure shared secrets, well, then we're clearly saying stop using shared secrets. Let's move to maybe an asymmetric secret. Right? Most of you are familiar with asymmetric cryptography. This is the technology that makes it possible.
There's some other text, but this is the main one that makes it possible to not have to share a secret. But you can go a step even further than that. You can use asymmetric cryptography, and you can use something called a trusted execution environment. A trusted execution environment.
This is a a a fancy word for essentially like a crypto coprocessor. It's a a coprocessor with some special properties. Number one is it's the memory of this coprocessor is isolated. It's not connected to your computer's memory.
It is, and has special operations. You think of it as a jail. You can create key pairs in this jail where only the public key can leave the jail, and the private key literally cannot leave the jail. There is no read operation over that private key.
So when you want to conduct signings, you bring the document or you bring the piece of data to the jail, you hand it through the bars, and you ask the key to sign that document for you. This is how you can guarantee that private key, that private material does not move. If it does not move, the surface area has just shrunk to this singular device, to this singular thing. And from an attack surface perspective, we've just gone from well, if I can't steal a credential at scale, now I have to steal it physically.
Right? So, again, drastically different attack surface. And, again, this is what security is all about is how do I how do I raise the cost of the adversary in an asymmetric way to dissuade them from this technique, from this device, from even me as a target?
A little comment on the pervasiveness of this technology. These crypto coprocessors are they they pretty much exist everywhere. If you use, Microsoft Secure Boot, you're actually leveraging this technology. If you ever bought a cup of coffee using Apple Pay Google Pay, you're using this technology. If you're playing the new Battlefield six, you're using this technology. And, this technology can be used to guarantee keys don't move, and then you can do, these gadget constructions with these keys to then do even stronger properties. Like, the software that I'm running has not been modified and is original OEM guaranteed, so to speak.
Next slide.
So let's move on to myth number two.
Myth number two is two devices make me more secure than one. And we wanna challenge this one pretty hard, because it's all been drilled into our head that if you have a second device, two devices are harder than one. And it could be true if the right network protocols were were in place that could actually guarantee that these are the two devices I actually expect, and they're next to each other and paired in some interesting way, and it's not possible to man in the middle of the connection.
And the burden gets very, very difficult. And, of course, all of the protocols in play right now with multi device authentication don't do anything like what I just said.
And this is evidenced, right, with TOTB can be man in the middle. Push notification can be bypassed. It can either through push bomb or man in the middle primary accounts. Like, number match can also be bypassed. Right? Because, again, there's nothing in these security tokens that inherently speak to the identity of the device they're on.
The identity of the device that the user is driving, right, that's gonna receive the data or the service. It's gonna bring the risk to the service and the data. And and so the second device authentication, it gives us a feeling, that we're doing something to actually increase security. But the reality is is it's actually making it, it's making it harder from a systems and a network perspective to really analyze and understand what's going on under the hood.
I'm sure many of you logged in and got, like, the screen pop. It's like, hey. Are are you really coming from New York? Are you really in this particular city in Virginia?
The reason for that pop up is your provider has no idea if your second device is near you or not. It has no idea if you've been man in the middle or not, and it's trying to stitch these things together. So what do we do? Well, the answer is don't give up on multiple, factors, but focus on the primary device. So how do you do single device multifactor authentication?
And because if you can do single device multifactor authentication, number one, you start working on the device that someone's on. If you're working on the device that someone's on, it now becomes possible for you to comment about the device identity relative to the user identity, relative to the service and the data that someone's actually asking for access to. It makes it possible for you actually to detect man in the middle. It makes it possible for you to detect and prevent session hijacking.
It's still possible to do multifactor and make sure it's the right user. And the way you do that, obviously, through possession, through a key in that jail that I mentioned earlier, but you can also then use things like biometrics or pins. Right? So inheritance factors or knowledge factors that become unlocks, to that crypto coprocessor.
So remember earlier, I described the jail that has the key in it, bring it the documents, say sign this. What turns out, most of the time, the way we use those crypto processors is we create keys in the jail with a policy. So when you bring a document to that jail, you won't get the document signed right away. The jail's gonna gonna prompt you, and it's gonna say, okay.
I'll sign this. But now I want you to give me a biometric. And so if you're on an Apple device, maybe you put your finger. If you're on a mobile device, maybe you smile at the camera or a Windows laptop, maybe you smile at the camera.
That turns into something called a stable feature vector that's then fed to the jail. The jail runs it through You think of it as, like, a a secure hash.
Only if the result of the secure hash comes up to the same value that that key was enrolled in, I e, it's the same biometric that enrolled the key, will the jail actually then allow the signing of the document? So you can introduce two step. You can introduce three step. You can actually introduce multistep, or multifactor authentication from this singular jail on this device.
Alright. Myth number three. We can train the problem away. Don't click the bad link.
So, does this smell like chloroform? Right? Is this a bad QR code?
The active testing either of those situations is the act of detonating either of those two situations. Also, humans are just bad at this. We're really, really bad at this. So we've got a chart up here right now, and this is just showing some very basic mathematics showing three different organizations.
And so these three different organizations have different click through rates. So according to some of our partners in this industry, the best organizations, best in class, only have a four percent click through rate. So the best trained organizations out there, four percent of their population constantly fails phishing train. Right?
Now we all know most of us aren't best in class. Most of us live in these other two. Ten percent click through or even twenty five percent click through. What these charts are showing us is based on these three organization types, How many attempts does it take by the adversary before they have a ninety percent certainty A of detonating some sort of click through link on a target device?
And the answer is kinda shocking. So at the worst organization, it only takes eight attempts to get a ninety percent certainty. At the best organization, it only takes fifty six attempts to get a ninety six percent certainty. Fifty six attempts is not a lot.
Right? Especially in the world of, AI and agents where I can automate and script not just all of the attempts, but the chameleon like nature of sounding and looking and being exactly what the target or the victim expects.
So, again, we this is just not the way.
So let's let's remove the human from the link.
Rather than training the human to detect if something is bad, let's do the job we should have done in the first place, which is engineer the protocol, the authentication protocol correctly to detect if it is actually being challenged by the right service or not. To detect if the challenger is deviating from something that appears in the t l TLS layer, which can actually happen, and we've seen in the wild. Let's make the protocol responsible for the problem. Let's not continue victim blaming the humans, for trying to fail, the test of ultimately, is this a bad QR code, or does this smell like chloroform?
Alright.
Myth number four. I only allow managed devices to connect to my network. And man devices managed means it's been hardened and it has all my security tools on it. Therefore, I don't have any device problems.
Well, we find that this myth is, well, we find that it's a myth. And, the reasons we find that it's a myth is, just just think through this. Right? Alright.
So let me tell me about your executives. Executives. Do you really not have any BYOD exceptions for any of your executives?
Do you really not have outsourced PR? Do you really not have outsourced software development? Do you really not have outsourced IT? Do you really not have business partners that help you on certain large deals? In all of these cases, do they have access to any of your services or your data? Right?
So that's kinda number one. Number two is even for the devices that are managed.
How do you know they're doing what you think? How do you know your design intent is actually what you're getting?
You know, in the software engineering world, one of the funniest things that you you learn pretty quick is just because your code compiles doesn't mean, it actually does what you say. And and maybe even a harder lesson is just because it passes your testing doesn't necessarily mean it does exactly what you think and nothing else.
Same thing shows up in IT. Same thing shows up in system management, and we see this all the time in our customer deployments. Their design intent is not always faithfully replicated across all devices that have access to their infrastructure.
And that creates problems. Right? If I don't have the device hardened, if, if the device is not CMMC compliant, if the device is not, PCI DSS rep four compliant, and I'm then giving it access to data that falls under the protection or the controls of one of those regimes, I just blew my compliance. And now I have to run an incident, and resolve it best case or worst case, I in I I jeopardize or imperil some of my revenue.
So what are we gonna do about this?
Well, let's think about an airport for a minute.
Where security matters, they ask the questions. Are you the right person?
Are you on this are you the right are you the ticketed passenger, and are you safe enough to allow into the airplane? Do you have no guns, knives, or bombs?
And now let's try and apply that metaphor to identity and access.
Am I the right person on the right device, and is my device safe enough for the service or data it's actually asking for?
Let's join those questions. Let's make them the same question. Let's make authentication actually say, prove that you can use the key in that jail that has the the high policy or the multifactor policy on it. Prove that it's the the key that's enrolled to the one of the devices I expect this user to use, and prove that device is hardened to the level of requirements for whatever it's actually asking for.
If it's asking for very sensitive things, let's make sure CrowdStrike is running. Let's make sure, all of the baseline hardening on the device is on. Let's make sure Jamf is on or Intune is actually on with the policy that we actually expect. Let's make sure that the the ZTNA solution is present and active from Zscaler.
Let's do these things in real time relative to each authentication.
Okay. Myth five.
Hey. The once I've once I've achieved authorization, I'm good. The the devices that have authorizations into, into my services and data don't aren't really a major source of risk.
This one is yeah. I I don't think people I think people follow the argument pretty quickly on this one. I think it's more of this one just getting forgotten. Right? So, a lot of services, were built in kind of a lazy way, and they hand out authorizations that never die.
So I'm sure some of you, still remember all the action we had to go through two to three months ago. When I say we, I mean, the industry with the SalesLoft breach. So remember, there was this plug in for Salesforce. And whether you whether you were still currently using it or even had turned it off, it was still refreshing your access token, your authorization tokens in the back end, even if you thought you had deleted your tenant. Right? So it was kind of like a, an ever die authorization still into your infrastructure, which as the adversary compromised a third party, they were able to then use that token to kinda download any data that that token had access to in that environment. So this is that that's a prime example of why long lived authorizations is a problem.
Some of you have moved to short lived authorizations, like, your users basically, thank you by complaining all the time of why do I have to do three factor authorization again every three hours, every eight hours, etcetera.
And so we we we think the way you handle this is a little bit different.
Next slide.
We think the idea is let's move off to be continuous.
It doesn't always have to be continuous and involve the person, but it should be continuous. And the authorizations could should constantly be refreshed based on, is this the same user? Is this the same device?
Or is the security posture on the device the same level as when I granted the original authorization, or has it increased? And then and only then do I consider it kind of a no op. But if something is degrading, then maybe I have to, worst case, withdraw the authorization, signal, some of the security, platforms, to revoke downstream authorizations, lock them out of their tunnels, lock them out of their, their connections, maybe even lock log them out of their, their end user device.
Or maybe I don't need to be that drastic.
Maybe something has changed, but what I really wanna do is I just wanna trigger an authorization with the user, or I wanna go pulse some particular piece of software, whether it's a cloud service or an endpoint software and and and get a You know, get a better reading on the situation.
In all of those cases, moving to continuous is what solves that problem.
So, future proofing. How do we move beyond today?
Thanks.
We we think the so, obviously, we think the answer in solving these myths is kind of a fundamental thing. Like, rather than chasing all of these security controls that you've been deploying for decades with yet another layer, What if we could do something that let us throw a couple layers out? Not because layering is bad, but just because if a credential doesn't move. I don't really need defenses around moving credentials.
If if I'm using single device multifactor, then I don't necessarily have the burden of all these peripheral devices.
If I'm actually verifying security of the device and the person relative to their access.
I have a much cleaner picture of what's going on. I also have a simpler record in my SIM. One access record tells me about the user, the device, and the security of the device at the moment of access in each re auth. Like, it's just a it's a it's a cleaner picture.
It's a simpler mental model. It's easier to work. It has implications on on incident response. Like, your blast radius is now precise. When I have an incident, I know exactly what device I'm talking But let's move towards the future. Right?
All of you have some sort of project with nonhuman workloads. Right? Whether it's automation workloads in the cloud or on desktops, whether you're building AI agents or even flying drones.
In all of these scenarios, you have real identity security problems that you have to solve.
And it's very easy to actually solve them in a more principled way by guaranteeing credentials can't move, by guaranteeing by actually verifying the security of the device and the identity at the time of off and continuously.
And so when we say future proofing, what we're really saying is by by answering these these questions or by attacking these myths fundamentally, you're actually setting yourself up to where these problems like AI agents aren't going to scale and spread credentials sprawl all over your future organization.
Nothing to sprawl because your future credential doesn't move.
I think that's a really powerful idea. Right? So what I heard you say at the end was well Once you sort of break through these myths, there's a better world. There's a better tomorrow beyond, the realities the presumed realities of today. Right? So if there are no passwords, there's nothing for your IT help desk to reset.
If you eliminate the biggest source of, incidents, which is identity which are identity based threats, according to pretty much every report out there, you can actually reduce your risk of breaches. And, you know, some of our customers say things like, our insurance premiums after we've implemented the on identity has never gone up, Or, you know, we save an average of ten minutes per employee per day, and that's with a a business unit within SAP that we work with. Right? So not only is there significant material benefits today, some of these concepts of a device bound credential, continuous authentication, you know, device security on the authenticating device can extend to the problems of AI, which, you know, these are ephemeral identities.
These are, identities that do things. Right? They're not agentic for for nothing. So it's really powerful concept.
So Turns out, Our company does something about all of these myths.
If you're curious about how this actually looks like in practice, I would encourage all of you to check out beyond identity dot com for more information. If you have any questions, I'm volunteering Jasson as tribute. You could feel free to, you know, find him on LinkedIn or Twitter under Jasson Casey, Jasson with two s's. And, of course, you can reach out to us, and we always have, security experts on the line who, would be more than willing to, chat with you. Any any parting thoughts, Jasson, before, before we let the audience get back to their days?
Addressing these myths simplifies your security architecture.
Anyone who's built platforms at scale understands that, like, the the knob that moves the needle the fastest is the simplification knob.
So if you could simplify your security architecture and that you're going to reduce your help desk ticket rate. You're going to reduce your security incident rate. You're gonna give your workforce a simpler mental model to think about.
Like, the benefits are kinda three sixty.
Yeah. Yeah. Agreed. Alright. Well, thank you all for joining us for this webinar, and, we'll catch you at the next one. Bye.