The New Blind Spot in AI-Native Orgs: Untrusted Devices + Agentic Access
TL;DR

Beyond Identity's Kasia and Sarah Cecchetti walk through three vulnerabilities at the core of AI-native organizations: session smuggling from unmanaged devices, leaked credentials from MCP and API key sprawl, and shadow AI with excessive agent permissions. The fix they propose is deterministic rather than probabilistic: cryptographically binding identity to hardware, verifying device posture on every request, and enforcing real-time policy over AI tool use.
Full Transcript
Hello, everybody. Welcome to another edition of Beyond Identity's webinars. Today we're gonna be talking about the new blind spot in AI-native orgs: untrusted devices and agentic access.
But before we get into the swing of things, let's introduce your speakers for today. So hello, I am Kasia. I am a Product Marketing Manager here at Beyond Identity. But we also have Sarah here. Would you like to introduce yourself?
Yeah. My name is Sarah Cecchetti. I'm the Director of Product Strategy at Beyond Identity.
Yes. And I added a little note, a little shout-out there: you're also the author of the OpenID white paper, Identity Management for Agentic AI. So I feel like you're the perfect person to talk about the threats in AI-native orgs. Really excited to see what's to come.
So before we get into the actual vulnerabilities, I wanna just talk about the impact of AI on security. At AI-native orgs, you're most likely using tools like Claude and Cursor heavily. Your developers are really emphasizing speed, efficiency, and agility in order to drive outcomes at the business. And we're not here to stop or slow any of that down, but we wanna talk about the impact it has had on the cybersecurity industry.
And so here are some statistics from reputable sources like Verizon and IBM on what they're seeing in the market this year. One finding is that Copilot increased the secrets incident rate by 40% in AI-assisted repositories. Another is that employees are now using AI routinely, and a large share of them are either using non-corporate emails as the identifiers for those accounts (about 72% of those employees) or using corporate emails without integrated authentication systems in place, most likely suggesting use outside corporate policy.
And for organizations with high levels of shadow AI, breaches added $670,000 to the average breach price tag compared to organizations with low levels of shadow AI or none at all. The things exposed in those breaches included PII and intellectual property.
And then lastly, 97% is the share of organizations that reported an AI-related breach but also lacked proper AI access controls. So we can see the impact showing up as new threat vectors that AI-native orgs really need to keep an eye on, so they can determine what security solutions to put in place to tackle them.
And so when we really boiled it down, we found three vulnerabilities at the core of AI-native organizations: first, unmanaged device access and session smuggling, especially on compromised endpoints; then, leaked credentials from MCP and API sprawl; and lastly, shadow AI and excessive agent permissions. We're gonna go through each of these in depth, covering how they work and what the fix is, not from a probabilistic point of view of mere detection ("is this maybe a risk to my organization?"), but using deterministic proof to identify whether something is a threat and blocking it from accessing your systems.
So, Sarah, would you like to take it away?
Yeah. So to go deeper on the first one that Kasia mentioned, what we're seeing from people coming to our company is a lot of unmanaged device access. And what you get from unmanaged device access is session smuggling. The way attackers do it is they get in through an AI interface or an unmanaged laptop.
The device doesn't have endpoint detection, it gets malware on it, and then that malware is used to steal the tokens from within the browser. This can be done with a malicious browser extension as well. And once attackers have those tokens, they can act as the user without triggering MFA. So it's a way to get to corporate secrets and confidential information without needing to go through the authentication step.
And the way we fix this for our customers at Beyond Identity is that we block access from unmanaged or insecure devices. We can see the device and its security posture, so we'll be able to see that malware getting installed and then prevent authentication on that device.
We also prevent session replay: we bind the identity to the device so that stolen session tokens can't be used.
And we do continuous validation: we reverify that device posture and that human constantly, so we're terminating compromised sessions in real time.
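To make the binding idea concrete, here is a minimal editorial sketch of proof-of-possession in Python. The enrollment flow and key handling are assumptions for illustration, not Beyond Identity's implementation; the point is simply that the signature comes from a key that never leaves the device, so a token lifted out of the browser fails verification on its own.

```python
# Minimal proof-of-possession sketch (illustrative, not a product API).
# In practice the device key would live in a TPM or secure enclave; here
# it is generated in memory just to show the verification logic.
import time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()   # enrolled once per device
enrolled_pubkey = device_key.public_key()   # stored server-side at enrollment

def sign_request(token: str, method: str, url: str) -> dict:
    """Client side: bind this specific request to the device key."""
    payload = f"{token}|{method}|{url}|{int(time.time())}".encode()
    return {"payload": payload, "signature": device_key.sign(payload)}

def verify_request(req: dict) -> bool:
    """Server side: a stolen bearer token alone can't produce this signature."""
    try:
        enrolled_pubkey.verify(req["signature"], req["payload"])
        return True
    except InvalidSignature:
        return False

req = sign_request("session-token-abc", "GET", "https://app.example.com/data")
assert verify_request(req)            # legitimate device passes
req["signature"] = b"\x00" * 64       # replay attempt without the device key
assert not verify_request(req)
```

A real deployment would also check the timestamp for freshness and keep the key pinned in hardware; the sketch only shows why binding defeats token replay.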
And the next one I wanna go over is leaked credentials from MCP and API sprawl. For those of you who are a little new to AI, MCP stands for Model Context Protocol, and it's used to connect your AI clients, which might be an IDE like Cursor or VS Code, or a CLI like Claude Code, to an external server that might access information in a database.
It might access Linear tickets, Canva designs, or Salesforce data. There are all sorts of things it can go get access to.
And if you have a malicious MCP server, it can get access to a lot of things.
But even if the MCP server isn't malicious, even if it's valid, often the way that MCP server is configured to connect is through the same API keys that you use to connect to APIs. This is quite dangerous, and it causes API key sprawl. We're seeing more and more people using AI within the organization, and more and more of them say, "Oh, I need the API key for Linear, I need the API key for Jira, and I'm gonna put it into the settings for my AI." Then they accidentally commit those settings somewhere public like GitHub.
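That commit mistake is mechanical enough to catch mechanically. As an editorial illustration, here is a minimal pre-commit sketch that scans staged files for API-key-shaped strings; the patterns are illustrative guesses, and production teams should prefer a dedicated scanner such as gitleaks.

```python
# Illustrative pre-commit guard: block commits containing key-shaped strings.
import re
import subprocess
import sys
from pathlib import Path

KEY_PATTERNS = [
    re.compile(r"lin_api_[A-Za-z0-9]{20,}"),        # Linear-style key (illustrative)
    re.compile(r"sk-[A-Za-z0-9]{20,}"),             # common "sk-" style API key
    re.compile(r'"[A-Z_]*API_KEY"\s*:\s*"[^"]+"'),  # keys hardcoded in JSON settings
]

def staged_files() -> list[str]:
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only"],
        capture_output=True, text=True, check=True,
    )
    return [f for f in out.stdout.splitlines() if f]

def main() -> int:
    leaks = []
    for path in staged_files():
        try:
            text = Path(path).read_text(encoding="utf-8", errors="ignore")
        except OSError:
            continue
        for pat in KEY_PATTERNS:
            if pat.search(text):
                leaks.append((path, pat.pattern))
    for path, pattern in leaks:
        print(f"possible secret in {path} (matched {pattern})", file=sys.stderr)
    return 1 if leaks else 0   # non-zero exit aborts the commit

if __name__ == "__main__":
    sys.exit(main())
```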
When those settings do get committed, they leak your company's API keys and give access to those secrets. That's really the exploit we're seeing in the wild. And the way we fix this is to bind the identity to the device: when we cryptographically bind access, those leaked keys are not useful.
And we enforce policy over tool use. When we know an API key has been compromised, we can take that MCP server down until we know we've rotated all of those keys and distributed all of the new ones. So we have real-time policy. And if there is, as I talked about at the top, a malicious MCP server that someone has inadvertently downloaded on their machine, or a perfectly good MCP server that a malicious person then sent an update to in order to make it malicious, we can block those in real time with our policy engine.
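Here is a rough sketch of the shape such a deterministic gate can take, assuming a hypothetical proxy that sees every MCP tool call before it reaches a server. The server names, quarantine list, and build-pinning scheme are all illustrative assumptions, not Beyond Identity's actual policy engine.

```python
# Illustrative policy gate for MCP tool calls (not a real product API).
from dataclasses import dataclass

@dataclass
class ToolCall:
    server: str        # MCP server the call targets, e.g. "linear-mcp"
    server_hash: str   # hash of the server build actually installed
    tool: str          # tool being invoked on that server

# Servers whose API keys are known-compromised stay blocked until
# every key has been rotated and redistributed.
QUARANTINED = {"linear-mcp"}

# Pin each approved server to a reviewed build, so a malicious update
# (same name, new code) no longer matches and gets blocked.
APPROVED_BUILDS = {
    "jira-mcp": "sha256:3f1a...",        # placeholder digests
    "salesforce-mcp": "sha256:9bc2...",
}

def evaluate(call: ToolCall) -> bool:
    """Return True to allow the call, False to block it, in real time."""
    if call.server in QUARANTINED:
        return False                      # keys still rotating: hard block
    expected = APPROVED_BUILDS.get(call.server)
    if expected is None or call.server_hash != expected:
        return False                      # unknown server or unreviewed update
    return True
```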
And then the last one we wanted to go over was shadow AI and excessive agent permissions.
We know that employees are bypassing the AI they're supposed to be using at work and using what we're calling shadow agents. They're paying with their own personal credit card, signing up with their own personal email, and granting these agents broad read and write access to corporate data. And this should be terrifying to anyone who works in security, because then, of course, if anyone gets access to that machine or that AI, either through a malicious MCP server or some other connection, they get access to everything that user has access to.
So it's very important that we detect and inventory all of the AI agents running in the environment and block any risky actions they try to take. Take elicitation with ChatGPT, for example: an MCP server can ask ChatGPT what sorts of things your user has been researching. Have they been researching legal things, or health things, or anything that might relate to the security of the code base?
We don't want that sort of action happening while MCP protocol interactions are going on. So we can block those actions and just say: we don't allow elicitation at all, and we don't want you knowing anything about what's in the user's memory.
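For readers who want to see the shape of such a control: in the MCP spec, a server requests input through an `elicitation/create` message, so a proxy brokering the JSON-RPC traffic can simply refuse to forward it. The sketch below assumes that proxy position; it is not Beyond Identity's engine.

```python
# Illustrative MCP message filter: refuse elicitation outright.
import json

BLOCKED_METHODS = {"elicitation/create"}   # method name from the MCP spec

def handle_server_message(raw: str) -> str:
    """Forward the message unchanged, or answer it with a policy denial."""
    msg = json.loads(raw)
    if msg.get("method") in BLOCKED_METHODS:
        return json.dumps({
            "jsonrpc": "2.0",
            "id": msg.get("id"),
            "error": {"code": -32600,
                      "message": "elicitation denied by organization policy"},
        })
    return raw   # everything else passes through to the client

denied = handle_server_message(
    '{"jsonrpc": "2.0", "id": 7, "method": "elicitation/create", '
    '"params": {"message": "What has the user been researching?"}}'
)
print(denied)   # the server gets an error instead of the user's history
```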
And those policies, of course, like all Beyond Identity policies, are enforced in real time. So we can shut things down very quickly as soon as we see malicious behavior.
So to sum up, the agentic AI world is quite different. The traditional attacks still exist, but they have been supercharged by AI: the ability for attackers to write malware and get into machines has been supercharged. But also, the security perimeter around AI is very different from the security perimeter that existed around APIs, and it's being accessed by far more people in your organization.
And so, traditionally, when we look at controls around identity, we have a probabilistic defense. Right? We're trusting static passwords and API keys, we're permitting untrusted devices with reusable credentials, and we're creating blind spots where AI operates.
And what we want to move to is from probabilistic to deterministic. So we want cryptographic proof of identity. We want real time validation of identity posture, and we want proof and certainty before the agent is allowed to take any actions in the environment.
So what we say is, when we put this kind of deterministic defense in place for our customers, they can verify. We tell them: verify, don't guess. The principle is that you verify user identity and device security posture continuously, not just at login but all the time. You have hardware-backed identity, real-time device posture, and integration with your whole security stack on every single access request, from humans, from AI, from bots. Everything in your organization is tied to hardware and monitored in real time.
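As a rough editorial sketch of what "verify continuously, not just at login" means mechanically: every request carries fresh posture signals, and any single failed check terminates access immediately. The signal names below are assumptions for illustration, not a documented schema.

```python
# Illustrative per-request check: identity proof plus live device posture.
REQUIRED_POSTURE = {
    "edr_running": True,          # endpoint detection alive (malware check)
    "disk_encrypted": True,
    "os_patched": True,
    "browser_extensions_clean": True,
}

def authorize(identity_proven: bool, posture: dict) -> bool:
    """Called on every access request, human or agent, not just at login."""
    if not identity_proven:       # hardware-backed identity proof failed
        return False
    return all(posture.get(k) == v for k, v in REQUIRED_POSTURE.items())

# A session on a device where EDR just got killed is cut off on its
# very next request, even though login originally succeeded.
posture = {"edr_running": False, "disk_encrypted": True,
           "os_patched": True, "browser_extensions_clean": True}
assert authorize(True, posture) is False
```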
So we would love it if you would take a look at a quick demo of the product. If you go to our website, beyondidentity.com, and click on the platform link, you can get a fifteen-minute prerecorded demo. It's very quick, and you'll see the product live in action. You can see those real-time policy checks and how we work day to day.
So, we also have for you today a special preview of a new product we are building that is specifically intended to secure AI and MCP servers.
And what it's intended to do is give you control over tool use in your environment. If you're using a CLI like Claude Code, that might mean something like bash commands: maybe you don't want the tool to be able to do remote code execution.
Maybe you don't want the tool to be able to access your Oracle database or your email. You as an administrator can control tool use across your organization, so that you know exactly what information is flowing back and forth, and you can control what goes back and forth.
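As a sketch of what administrator-controlled tool use could look like in config terms: the tool names below follow common agent conventions ("bash", "send_email") but are assumptions for illustration, not the product's schema.

```python
# Illustrative default-deny tool policy for an organization's AI clients.
TOOL_POLICY = {
    "allow": {"read_file", "grep", "list_directory", "search_docs"},
    "deny":  {"bash", "send_email", "oracle_query"},  # no shell, email, or DB
}

def tool_allowed(tool_name: str) -> bool:
    if tool_name in TOOL_POLICY["deny"]:
        return False
    # anything not explicitly allowed is blocked (default deny)
    return tool_name in TOOL_POLICY["allow"]

assert not tool_allowed("bash")          # remote code execution path blocked
assert not tool_allowed("new_unknown")   # unreviewed tools blocked by default
assert tool_allowed("read_file")
```

Which brings us to step two: you get visibility into every MCP tool call.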
Every tool call the AI makes, you can see in your dashboard, and you get insights into what is going on in your organization. You might have thousands of people interacting with AI in thousands of different ways, and you'll be able to see all of that: charts of what clients people are using and what MCP servers those clients are accessing, all in very nice visualizations. And that helps you write and enforce policies in real time, so you can look at what's going on in your organization and say, you know what?
I'm really not comfortable with that. I'm gonna turn that off for right now while I go talk to my employees and figure out why they're doing that. And then maybe I'll turn it back on, or maybe I'll turn it back on in a different way. So it lets you see what's going on in your organization, have control over it, and enforce policy around it in real time.
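The visibility layer can be as simple as one structured record per tool call, which is enough to drive the charts described above. The field names are illustrative assumptions, not a documented schema.

```python
# Illustrative structured log: one record per MCP tool call, ready to be
# aggregated by user, client, server, and policy decision.
import json
import sys
import time

def log_tool_call(user: str, client: str, server: str,
                  tool: str, allowed: bool) -> None:
    record = {
        "ts": time.time(),
        "user": user,
        "client": client,    # e.g. "cursor", "claude-code"
        "server": server,    # which MCP server was called
        "tool": tool,
        "decision": "allow" if allowed else "deny",
    }
    sys.stdout.write(json.dumps(record) + "\n")

log_tool_call("kasia@example.com", "claude-code", "linear-mcp",
              "create_issue", True)
```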
This tool is at a different site, beyondidentity.ai, and you can preview it. Here's what the interface looks like: you can see it describing the different tools it has discovered, and it's found one tool that's been denied by policy.
So this will give you a bunch of insight into how people are using AI in your organization, who the most common users are, and then a graph view of how it's being used across the information flow. We would love to preview this for you. We would love to give you a demo, and we may be able to get you into our early access program. We're trying to keep that small, but if you're interested in getting into it, let us know.
Awesome. Well, thanks, everybody, and thank you, Sarah, so much for diving deep with us. If there are any questions about beyondidentity.ai, or about our core platform and product for phishing-resistant authentication and secure access for your AI agents, visit beyondidentity.com with any questions or comments. We respond to everything. So excited to chat more about this. Thank you again.