
What 50 CISOs Told Us: The Top AI Risks They’re Budgeting for in 2026

Key Takeaways

  • AI is already production: AI tools now access source code, cloud infrastructure, and production systems, making AI security a present-day risk, not a future concern.
  • Data leakage tops fears, followed by Shadow AI, ungoverned agents, and over-privileged access.
  • Controls lag behind adoption: Access governance is still largely manual and role-based, with limited automation or policy-driven enforcement.
  • Budgets are shifting to AI: 88% of organizations plan to increase AI security spend for 2026, even though many lack dedicated AI security budgets today.
  • Visibility before control: CISOs prioritize monitoring, identity, and observability first, then policy enforcement, governance, and compliance automation.

Full Transcript

Welcome, everybody, to another webinar from Beyond Identity. We have juicy, juicy content and survey results for you guys. But before we get into that, just a quick round of introductions.

Hello. I'm Kasia. I'm one of the product marketing managers here at Beyond Identity, and we have Nikhil as well. Would you like to introduce yourself?

Sure. Hey, Kasia, and hey, everyone. Thank you all for joining. My name is Nikhil, and I lead the product team here at Beyond Identity.

Awesome. Well, today, we're going to be talking about what fifty CISOs told us, the top AI risks they're budgeting for in twenty twenty six. So there were fifty CISOs surveyed, twenty different questions, some of them open ended, multiple choice, rankings. So we really get into the minds of these CISOs.

Just covering some ground on the industries and the company sizes that these CISOs work for. So primarily, they're enterprise, so a thousand plus employees, but some of them are mid market as well, primarily in software, financial services, some in, like, business consulting as well. So just so you know the lay of the land. But let's get right into it with question one of what are the top AI tools that these organizations are using today. Primarily, ChatGPT is number one, not a huge shocker, but Microsoft Copilot is right behind. There's a lot of AI being used in various SaaS tools. So think like Jira or GitHub, the ones that come embedded natively within those tools.

And other ones that you've probably heard of as well, Gemini, Claude, internal AI tools, custom AI tools, round out the pack.

Then we also asked how these CISOs would describe their AI usage in their organization, like, on a day to day basis. Is it something that's widely adopted? Is it something that's precautionary and just on a case by case basis?

But a majority of it is widely used. So widely adopted at forty five percent, and in production use as well, touching things like code repositories at thirty three percent. So we can really see here, like, AI is no longer a choice. It is something that CISOs kind of have to let the organization use or else the business falls behind.

And then which environments does AI currently have access to at these organizations? Primarily knowledge bases. That's something that's very expected just because of how effective it is to refer to all of these knowledge bases that we have. But source code being at number two is definitely an eye opener.

That's one thing that obviously can pose a lot of risks at an organization. Cloud infrastructure, AI tools having access to that, and production as well. These are things that definitely surprised me, you know, how AI has access to these. But again, it goes to show how organizations really can fly and grow with AI being connected to these environments.

Things like customer PII, financial data, kind of follow behind those.

And then how would these CISOs rate their organization's current capacity to manage access and security? Most of them said that they have adequate controls in place, nothing extremely strong, but also not lagging behind. Following that, they believe that they have strong ones.

But you can see that a lot of CISOs just feel like they're in the middle of the pack.

Nikhil, I'm gonna punt this one to you.

Yeah. Absolutely. So, you know, when we dug deeper, right, our goal with this whole survey was to really understand, what does this mean? What does AI adoption in the business and in the organization really mean from an in-depth security perspective? And so, you know, Kasia, I think you've actually given us a really great overview on how AI is being utilized in the organization to kinda set a baseline.

Something that we learned is that all of the controls around how access is governed are pretty manual, where granting licenses and authorizing AI tools' access to various systems is either a manual or a role based approval.

Very little of those access approvals are automated or policy based, which is really where we want to see the industry move.
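To make that distinction concrete, here is a minimal sketch of what a policy-based (rather than manual or purely role-based) access approval could look like. Everything here is hypothetical and illustrative: the roles, data scopes, and function names are not any vendor's actual API.

```python
# Hypothetical sketch of a policy-driven AI tool access decision.
# All names are illustrative, not any particular product's API.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_role: str         # e.g., "engineer", "analyst"
    tool: str              # e.g., "copilot", "chatgpt"
    data_scope: str        # e.g., "source_code", "knowledge_base", "customer_pii"
    device_compliant: bool # posture signal from MDM/EDR

# Declarative policy: which data scopes each role may expose to AI tools.
POLICY = {
    "engineer": {"source_code", "knowledge_base"},
    "analyst": {"knowledge_base"},
}

def evaluate(request: AccessRequest) -> bool:
    """Return True if the request satisfies policy; no human approval step."""
    if not request.device_compliant:
        return False  # non-compliant devices never get AI tool access
    allowed_scopes = POLICY.get(request.user_role, set())
    return request.data_scope in allowed_scopes

# An engineer on a compliant device may expose source code to an AI tool;
# an analyst asking a tool to touch customer PII is denied automatically.
print(evaluate(AccessRequest("engineer", "copilot", "source_code", True)))  # True
print(evaluate(AccessRequest("analyst", "chatgpt", "customer_pii", True)))  # False
```

The point of the sketch is that the decision is evaluated from declared rules on every request, instead of a ticket queue or a static role grant.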

Sounds good. Yeah.

Then for the next question, we also asked what the CISOs' level of concern is for AI enhanced attack techniques.

So this is a little bit different than, like, just generalized AI threats or AI risks. These are attackers using AI to make other types of attacks, things like phishing, social engineering, and deepfake impersonations, even easier, faster, more accessible, and cheaper for them.

So we wanted to see if this is also a concern within CISOs' minds, and it looks like there are high concerns primarily with AI related phishing and social engineering. We know from KnowBe4 data that even with a lot of security awareness and training, employees still click about four percent of the time. And so with AI making it easier, simpler, and faster to deploy these types of attacks, it means that they're more widespread, they're more convincing, and unfortunately, employees still click four percent of the time. So this is something that's still top of mind for CISOs.

Next up, we also asked how confident CISOs are that their organization can detect and respond to an AI related security incident today. And right now, the majority are just somewhat confident, followed by those who are uncertain.

About fifteen percent of them are not confident whatsoever. So you can see here that, you know, again, their hands are tied. Their business needs to grow with AI, and so they need to deploy it, but they don't have that certainty with security solutions today that they can actually combat the new risks and threats that unfold.

Yeah.

Go ahead, Nikhil.

So that really just, you know, jumps right into what is the biggest risk that's top of mind for every security leader. And what's gonna come through the next couple slides is that data leakage and shadow AI top the threat list, in terms of, you know, when you have AI in your organization, what is business critical information, and is that being leaked? And really, when we dig deeper into examples of what these fears are, there are things like employees leaking data through shadow AI because they're using unapproved AI tools. There are developers using, you know, MCP servers and pushing proprietary information where you don't have visibility or controls. And so this data leakage is something that is top of mind from a CISO perspective.

And if we go to the next slide.

Yeah. I mean, we kinda touched on it. And so when we tie that back to budgets and where security leaders want to spend their investments, it's really figuring out, you know, how do we address that data leakage problem, how do we address that shadow AI and that lack of visibility, and look to implement tools and controls to solve that problem.

Yep.

And so we were also very curious to see if any CISOs have experienced any AI related incidents to date.

A majority of them indicated no incidents or unknown, but about twenty two percent have had near misses, at least one incident, or multiple incidents.

So one in five organizations, you know, that's starting to be a pretty significant number.

And so this is when it starts to get into, you know, how does this shape CISO budgets looking at twenty twenty six? And so we asked a couple of different questions here, not only regarding their AI budgets, but also their total budgets.

First, starting out on how they look to change their AI security spending in twenty twenty six. The top answer is actually a moderate increase. So this is a ten to twenty five percent increase over last year.

The next one being a slight increase, the next one being a significant increase. So in that case, totaling that up, about eighty eight percent of organizations are increasing their AI security spending for twenty twenty six. Not a shock knowing that AI is being connected to code repositories and production systems.

But, you know, there are still some that are keeping budgets the same or have no budget.

And so now, asking whether CISOs expect their total security spending to change compared to last year, again, about seventy seven percent are increasing it. So eighty eight percent are increasing AI spending, and seventy seven percent are increasing their total budgets. Number one comes in at a ten to twenty five percent increase, with a slight, ten percent increase as the next runner-up.

Although, when we take a look at the snapshot of whether they're spending on AI security today, about thirty percent have no dedicated AI security budget, which is a critical gap, especially if seventy five percent have AI accessing source code. But, again, they're looking to change that with their decisions in twenty twenty six.

Alright. Nikhil, wanna take this one away?

Yeah. Absolutely. And so, you know, when we asked, right, what was the driver behind any existing AI security investments, the top two items that we found are, first off, risk reduction. Right? Finding those vulnerable user and device populations and figuring out how we can manage and reduce risk and implement controls for those user and device populations.

And then, of course, you know, figuring out how to enable secure adoption of AI.

I mean, it's inevitable. Right? And I think, Kasia, you referenced it earlier in the presentation how fifty plus percent of organizations have deployed AI in their organization, to, like, seventy percent or above, I think, was the stat. And so, you know, when we looked at the drivers behind the AI security investment, managing risk is really the top driver.

And when we get to the next slide around, you know, how we should talk about this, right, we go back to that data protection, right, and that data exfiltration risk. Anything that's PII, anything that's mission or business critical, what's top of mind is making sure that that information doesn't get leaked. Right? And some of the stories we've heard when we've talked to, you know, leaders in the field are pretty varied. Right? It could be IP information. It could be code.

It could be PII that, you know, will open the business up to regulatory risk or fines and things like that. And so that's why there's a really big data security component to why this is such a high priority behind the investment.

And so the final five questions that we asked were more open ended. These are just some quotes and high level takeaways when we look at patterns in the CISO responses.

So the first one was, you know, what emerging AI security risks are you monitoring that aren't widely discussed today? You know, we brought up some examples with, like, data leakage, prompt injection, and shadow AI. But is there something that we're missing or not considering that these CISOs are thinking about? And so they're also concerned about agent to agent exploitation and vibe coding.

They aren't just looking to secure AI models. They're looking to secure AI behavior, decision authority, and identity, at agents' machine speed. So these are just some quotes, like: we're missing the collapse to zero of zero day exploit and breach breakout times. Traditional approaches don't work.

We need to rethink approaches at the speed of AI.

Vibe coding is the greatest threat. At the moment, developers are prompting their way into unnecessarily complex code, and because they do not actually write it themselves, they do not actually understand what they might be shipping.

And then also: MCP agents lack a clear sense of identity. MCP agents assume the role of a user but would not have a birth certificate or any immutable characteristics.

Next up, we asked the CISOs to describe a scenario where AI access could realistically lead to a major security incident. What are they actually thinking about in terms of risky paths that AI could take at their organization? So some top scenarios include AI agents with excessive permissions accessing sensitive customer data, malicious MCP servers paired with preapproved automations, and also unauthorized transactions by overprivileged AI systems. One of the quotes: an AI agent with excessive permissions could inadvertently or even maliciously access sensitive customer data, modify critical configurations, or execute unauthorized transactions. And so it's really hard to get that visibility, but also get the proof of, like, who that came from, from which device, from which agent, from which request. It's really hard to backtrack into where did this come from and what changes were made.

Next up, we asked, like, just in your ideal vision, let's think, you know, one to three years from now, what does safe, secure AI actually look like? What would help CISOs sleep better at night? And so number one, it's tackling monitoring and visibility. It's hard to put any controls in place when you don't even know what's going on, when you don't have that visibility. So things like real time activity tracking, comprehensive logging, and continuous observability were things that were mentioned.

Then once they get that visibility, they wanna enforce policies. So automated guardrails, governance frameworks, and policy driven access decisions. And then lastly, tackling audit and compliance is something that's gonna be pretty big.

Having those audit trails, compliance automation, accountability mechanisms so they can answer who, what, where, when, how.

This was my favorite question to ask. If CISOs got a million dollars out of the sky today, what would they spend it on tomorrow? And it was very interesting because although fifty five percent of them said they wanted to spend more on security tools, forty percent said that they wanted to just increase their staffing and talent.

A lot of CISOs feel like they just don't have the expertise, the time, and the knowledge to understand the new AI threat vectors. So expanding their team would really help them have the resources to focus on these kinds of initiatives.

In terms of what kinds of security tools were mentioned, a lot of the time, it was AI governance and visibility that come first. Quotes like, I need to see what AI is doing before I can secure it, were mentioned.

And then lastly, the final question was, like, if you could give one piece of advice to another CISO planning security budgets for twenty twenty six, what would you recommend to them? And the critical takeaway was to avoid tool sprawl and instead focus on understanding your specific risks, establishing governance, and building foundational controls before looking at different kinds of solutions. First, understand what is the biggest risk to your organization, your industry, and your employees today. Prioritize and focus, you know, the eighty twenty rule, where a focused set of controls will cover a majority of the attacks, and then start with governance.

And so this concludes all of the survey results. Some major takeaways include that AI is a production reality. AI has access to code repositories and production systems. So it is live and taking action today.

But there's definitely a confidence gap. Only fourteen percent of CISOs are ready for the new wave of attacks that can come through AI. Data leakage is the number one thing keeping CISOs awake, the fact that AI has access to all of these confidential and proprietary systems.

And although eighty eight percent are looking to increase spend, about thirty percent of them aren't spending on AI security today.

So those are the major takeaways. Now we do have some recommendations on what this means for you and your organization. First up, discussing how you communicate these risks to your executive team to influence budgets moving forward. And Nikhil has a little bit more on that.

Yeah. Absolutely. So I think, you know, obviously, Kasia, you touched on it earlier, right, it starts with understanding what's in your environment and categorizing the risk.

But just to give, like, a very explicit example: for a lot of the customers that we act as trusted advisers for, how should you message to your board or to your peers why an investment in AI security is important? A really good example that I like to give is how the user and device population that you can argue has the most access, and is the early adopter of AI tools, is usually your software engineers and your developers.

And so when you're talking about, you know, let's say, buying an AI security solution, you actually want to share information in the form of: look, the engineering team has embraced AI coding tools.

Right? Some percentage of them are using it daily.

And while it's great for productivity, right, it's helped us ship features so much faster.

Frankly, the blind spot is we have no clue what MCP servers and what tools they're using.

And, actually, go in, right, talk to that user and device population, and ask them. Right? Ask them how they're using the tools. Ask them about things that they might have uncovered as risky. Now you have real data points that you can use to quantify that business risk and make a case to, you know, adopt the tooling and find that budget.

So, really, you know, the takeaway is to talk about asking for budget as enabling, not restricting.

Awesome. So switching gears, you know, and just to talk a little bit about Beyond Identity and our approach.

You know, obviously, we have identity in our name. We take an identity centric approach to securing agentic use cases and AI in your organization.

And what that looks like, what that means, is when you tie a user, a device, and an agent to a hardware backed credential, what we're able to do is answer: you know, the user created an agent. That agent runs on a specific device.

That agent invoked a specific tool.

It's governed by, you know, a specific policy that you have written, and you have signed visibility and auditability over everything that happened. And so, really, when you look at how Beyond Identity approaches the secure adoption of AI in your organization, we take this identity centric approach where you tie the identity back to the device and the hardware backed credential. And so if you hit the next slide, you know, I'm happy to actually give just a quick demo of what that looks like.
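Before the demo, a minimal illustrative sketch of that chain: one way a signed audit record could tie a user, device, agent, and tool invocation together so the "who, what, where" of a tool call is verifiable. This is an assumption-laden toy, not Beyond Identity's actual implementation; Python's hmac stands in for a hardware-backed key, which in practice would live in a TPM or secure enclave and never leave it.

```python
# Toy sketch of a signed audit event linking user -> device -> agent -> tool.
# DEVICE_KEY is a stand-in for a hardware-backed key (TPM / secure enclave);
# in a real system the key never leaves the hardware.
import hmac, hashlib, json

DEVICE_KEY = b"stand-in-for-hardware-backed-key"

def signed_audit_event(user: str, device: str, agent: str, tool: str) -> dict:
    """Produce one audit record whose integrity can be checked later."""
    event = {"user": user, "device": device, "agent": agent, "tool": tool}
    payload = json.dumps(event, sort_keys=True).encode()
    event["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return event

def verify(event: dict) -> bool:
    """Recompute the signature over the event body and compare."""
    body = {k: v for k, v in event.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(event["signature"], expected)

evt = signed_audit_event("kasia", "laptop-042", "code-review-agent", "bash")
print(verify(evt))  # True: this tool call is attributable and tamper-evident
```

The design point is that each record is bound to a specific credential, so an auditor can answer which user, device, and agent performed which action, and detect any after-the-fact edits.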

And so what I actually have here is our Beyond Identity Identity Suite product, which is the actual tool that our security team internally uses to monitor AI usage in our organization.

And so just to quickly start talking about, you know, what the approach is and how it works: we effectively give users an enrollment method to come in and basically register with the Beyond Identity agent shield. And, subsequently, what that looks like is, when these users are leveraging agentic tools, specifically local agentic tools like Claude Code or Gemini CLI, what we're actually able to do is see every prompt that this user or this developer is running, you know, as part of their work. And the type of visibility that gets us is we can actually see, you know, what models are being used across the organization.

We can see which specific AI tools are being leveraged. So in this case, you know, we're a pretty Anthropic Claude heavy shop. You can see there's a lot of Claude CLI and Agent SDK usage.

We can then go a step further, and we can actually understand what tools these local agents are actually calling. So you can see that there's a whole host of Firecrawl MCP tooling that's being used to crawl the Internet.

We also cut that information in a whole host of different ways so that you can see, kind of over time, you know, what tools this user called. And how that becomes really interesting and really valuable is when you can actually write policy on this tool calling or this MCP calling. So the example that I wanna give is, you know, if I were to ask Claude, hey, tell me what files are in this folder.

It's actually gonna use the Bash tool and go ahead and, you know, give me a list of all the files in the current directory that I'm in.

Let's say now that this device fell out of security compliance. For example, maybe my firewall is turned off, or there is some indication that this device is now risky.

Under those scenarios, you might not want, you know, an agent powered by an LLM to be able to ask questions of this device. Right? Maybe there's malware on this machine that's going to try to call Claude to go in and do some additional reconnaissance.

What's really great is that when you have this proxy solution, you can go in, set granular access policy, and be able to block any LLM from running these types of commands. So if I try to go in and run that same query again, you'll notice that this tool access was denied.
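To show the shape of that proxy-style control, here is a hedged sketch of a check that denies an agent's tool call when the device has fallen out of compliance, mirroring the firewall-off scenario in the demo. The function names, posture fields, and tool list are hypothetical, not Beyond Identity's actual API.

```python
# Hypothetical sketch of the proxy control described above: intercept each
# agent tool call and deny it when the device is out of compliance.

def device_is_compliant(device_id: str) -> bool:
    # Stand-in posture check; a real proxy would query live MDM/EDR signals
    # such as firewall state or disk encryption.
    posture = {"laptop-042": {"firewall_on": False}}  # firewall was turned off
    return posture.get(device_id, {}).get("firewall_on", False)

def authorize_tool_call(device_id: str, tool: str) -> str:
    """Grant or deny a single agent tool invocation at the proxy."""
    HIGH_RISK_TOOLS = {"bash", "file_read"}  # commands that touch the host
    if tool in HIGH_RISK_TOOLS and not device_is_compliant(device_id):
        return "denied: device out of compliance"
    return "allowed"

# The demo scenario: firewall off, so the agent's Bash tool call is blocked.
print(authorize_tool_call("laptop-042", "bash"))  # denied: device out of compliance
```

The same check could key off any risk signal, and because it sits in the request path, the denial happens before the LLM-driven command ever reaches the device.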

And so this is just a quick taste of how, you know, our AI IdentityShield product works.

So, yeah, Kasia, I'll pass it back to you to talk about how folks can learn more, if, you know, folks wanna learn more about how to adopt AI securely in their organization.

Yeah. Absolutely. The next step is super easy.

We have a site, beyondidentity.ai, that talks a little bit more about what Nikhil was sharing on all the use cases. We actually just recently launched early access for our AI security suite. So when you go to beyondidentity.ai, you can learn about it, but also sign up for how to get access. It's a really quick, thirty second form.

We'll let you know what the next steps are, but we're really excited to launch this and tackle a lot of the issues that were brought up by the fifty CISOs that we interviewed. But thank you so much to everyone who was able to join. I hope you learned a little bit, not only about the market of AI security, but also, you know, what solutions are in place and what we're designing towards. So thank you so much.

See you next time.

Thank you.

Key Takeaways

  • AI is already production: AI tools now access source code, cloud infrastructure, and production systems, making AI security a present-day risk, not a future concern.
  • Data leakage tops fears, followed by Shadow AI, ungoverned agents, and over-privileged access.
  • Controls lag behind adoption: Access governance is still largely manual and role-based, with limited automation or policy-driven enforcement.
  • Budgets are shifting to AI: 88% of organizations plan to increase AI security spend for 2026, even though many lack dedicated AI security budgets today.
  • Visibility before control: CISOs prioritize monitoring, identity, and observability first, then policy enforcement, governance, and compliance automation.

Full Transcript

Welcome everybody to another webinar from Beyond Identity. We have juicy, juicy content and survey results for you guys. But before we get into that, just like a quick round of introductions.

Hello. I'm Kasia. I'm one of the product marketing managers here at Beyond Identity, and we have Nikhil as well. Would you like to introduce yourself?

Sure. Hey, Kasia, and hey, everyone. Thank you all for joining. My name is Nikhil, and I lead the product team here at Beyond Identity.

Awesome. Well, today, we're going to be talking about what fifty CSOs told us, the top AI risks they're budgeting for in twenty twenty six. So there's fifty CSOs surveyed, twenty different questions, some of them open ended, multiple choice, rankings. So we really get in the minds of these CISOs.

Just just covering some ground on the industries and the company sizes that these CISOs work for. So primarily, they're enterprise, so a thousand plus employees, but some of them are mid market as well, primarily in software, financial services, some in, like, business consulting as well. So just so you know, the lay of the land. But let's get right into it with question one of what are the top AI tools that, these organizations are using today? Primarily, ChatGPT is number one, not a huge shocker, but Microsoft Copilot really right like, right behind. There's a lot of AI being used in various SaaS tools. So think like Jira or GitHub, the ones that come embedded natively within those tools.

And things other ones that you've probably heard of as well, Gemini, Claude, internal AI tools, custom AI tools, the pack as well.

Then we also asked how these systems would describe their AI usage in their organization, like, on a day to day basis. Is it something that's widely adopted? Is it something that's precautionary and just on a case by case basis?

But a majority of it is wide wide widely used. So widely adopted at forty five percent in production use as well. So touching things like code repositories for thirty three percent. So we can really see here, like, AI is no longer a choice. It is something that CSOs kind of have to, like, let the organization use or else the business falls, falls behind.

And then which environments does AI currently have access to at these organizations? Primarily knowledge bases. That's something that's very expected just because of how effective referring to all of these knowledge bases that we have. But Source Code being at number two is definitely an eye opener.

That's one thing that obviously can pose a lot of risks at an organization. Cloud infrastructure having access, AI tools having access to that, production as well. These are something that definitely surprised me on, you know, how AI has access to these. But again, it goes to shows how organizations really can fly and grow with AI being connected to these, environments.

Things like customer PII, financial data, kind of follow behind those.

And then how would these CISOs rate their organization's current capacity to manage access and security. Most of them said that they have added adequate controls in place, nothing, extremely strong, but also not lagging behind. Following that, they believe that they have strong ones.

But you can see that a lot of CISOs just feel like they're in the middle of the pack.

Nikhil, I'm gonna punt this one to you.

Yeah. Absolutely. So, you know, when we dug deeper, right, our goal with this whole survey was to really understand, what does this mean? You know, what what does the AI adoption in the in the business and in the organization really mean from a in-depth security perspective. And so, you know, Kasia, I think you've actually given us a really great overview on, you know, how AI is being utilized, in the organization to kinda set a baseline.

Something that we learned is that all of the controls around how access is being governed is pretty manual, where giving licenses and, authorizing, AI tools access to various tools, are either a manual or role based approval.

Very little of that, access approvals is automated or policy based, which is really where we want to see the industry move towards.

Sounds good. Yeah.

Then for the next question, we also ask what their the CSO's level of concern is for AI enhanced attack techniques.

So this is a little bit different than, like, just generalized AI threats or AI risks. But this is these are attackers using AI to make other types of attacks, things like phishing, social engineering, deep fake impersonations even easier, faster, more accessible, and cheaper for them.

So we wanted to see if this is also a concern within CISOs minds, and it looks like there are high concerns primarily with AI related phishing and social engineering. We know from, know before data that even with a lot of security awareness and training, employees still click about forty per four four percent of the time. And so with AI making it easier, simpler, faster to deploy these types of attacks, it means that they're more widespread, they're more convincing, and unfortunately, employees still click four percent of the time. So this is something that's still top of mind for CSOs.

Next step, we also ask how confident are CSOs that their organization can detect and respond to an AI related security incident today. And so right now, it's, again, just most majority are somewhat confident. The majority are uncertain.

About fifteen percent of them are not confident whatsoever. So you can see here that, you know, again, the their hands are tied. Their business needs to grow with AI, and so they need to deploy it, but they don't have that certainty with security solutions today that it can actually combat the new risks and threats that they unfold.

Yeah.

Go ahead, Nikhil.

So so that really just, you know, jumps right into what is the biggest risk that's top of mind for every security leader. And, what's gonna come through the next couple slides is that data leakage and ShadowAI top the threat list in terms of, you know, when you have AI in your organization, you know, what is business critical information, and is that being leaked? And and, really, when we dig deeper into, you know, examples of what these fears are, There are things like employee leaking data through ShadowAI because, they're using unapproved, you know, unapproved AI tools. There are developers using, you know, MCP servers and and pushing information or proprietary information where you don't have visibility or controls. And so this data leakage and and is is is is something that is top of mind, from a CSO perspective.

And if we go to the next slide.

Yeah. I mean, we we kinda touched on it. And so when we tie that back to budgets and where security leaders want to spend their investments, it's really figuring out, you know, how do we address that data leakage problem, how do we address that shadow AI and that lack of visibility, and and, look to implement tools and controls to solve that problem.

Yep.

And so also was very curious to see if any CSOs have experienced any AI related incidents to date.

A majority of them indicated no incidents or unknown, but about twenty two percent have had near misses, at least one incident, multiple incidents.

So one in five organizations, you know, that's starting to be a pretty significant number.

And so this is when it starts to get into, you know, how does this shape CISO budgets looking at twenty twenty six? And so we asked a couple of different questions here, not only regard regarding AI's their AI budgets, but also their total budgets.

First, starting out on how they look to change their AI security spending in twenty twenty six. Most the the top one is actually moderate increase. So this is a ten to twenty five percent increase over last year.

The next one being slight increase, the next one being significant increase. So then in that case, totaling that up about eighty eight percent of organizations are increasing their security spending for twenty twenty six. Not a shock for knowing that they're being connected to code repositories, production systems.

But, you know, there's still some that are keeping budgets the same and no budget.

And so now here asking, do see this expect their total security spending to change compared to last year? And so, again, about seventy seven percent are increasing it. So eight percent increasing AI spending, seventy seven percent increasing their total budgets. Number one comes at ten to twenty five percent, and then also, you know, slight ten percent increase for the next runner-up.

Although, when we take a look at the snapshot today of if they're spending on AI security today, about thirty percent have no dedicated AI security budget, which is a critical gap, especially if seventy five percent have AI accessing source code. But, again, they're looking to change that with, their decisions in twenty twenty six.

Alright. Nikhil, wanna take this one away?

Yeah. Absolutely. And so, you know, when we asked, right, like, what was the driver behind, you know, any existing AI security investments, You know, the the top two items that we found is, first off, risk reduction. Right? Finding those vulnerable user and device populations and figuring out how can we manage and reduce risk and implement controls for those user device populations.

And then, of course, you know, figuring out how to enable secure adoption of AI.

I mean, it's inevitable. Right? And I think, Kasia, you referenced it earlier in the presentation how, you know, fifty percent plus organizations are are have deployed AI in their organization to, like, seventy percent or above, I think, was the stat. And so, you know, when we looked at the drivers behind the the AI security investment, talking about managing risk is really the the top driver.

And when we get to the next slide around, you know, how we should talk about this, right, we go back to that data protection, right, and that that data exfiltration risk. Anything that's PII, anything that's mission or business critical, what's top of mind is making sure that that information doesn't get leaked. Right? And some of the stories we've heard when we've talked to, you know, leaders in the field is, pretty pretty varied. Right? It could be IP information. It could be code.

It could be PII information that, you know, will open the the business up to, you know, regulatory risk or fines and things like that. And so that's why there's a really big data security component to, you know, why this is such a high priority behind the investment.

And so with these final five questions that we asked, they were more open ended. And so these are just some quotes and high level takeaways when we look at patterns in the CECL responses.

So the first one was, you know, what emerging AI security risks are you monitoring that, you know, aren't widely discussed today? You know, we brought up some examples with, like, data leakage, prompt injection, shadow AI. But is there something that we're missing or not considering that these CISOs are, you know, thinking about? And so they're also, concerned about agent to agent exploitation and vibe coding.

They aren't just looking to secure a AI models. They're looking to secure AI behavior, decision authority, identity at agents machine speed. So, these are just some quotes, like, are we're missing a collapse to zero to zero day exploits and breaches to break out. Traditional approaches don't work.

We need to rethink approaches at the speed of AI.

Vibe coding is the greatest threat. At the moment, developers are prompting their way into a necessarily complex code that because they do not actually write it themselves, they do not actually understand what they might be shipping.

And then also MCP agents lack a clear clear sense of identity. MCP agents assume the role of a user would not have a birth certificate or any immutable characteristics.

Next up, we asked to the CSOs to describe a scenario where AI access could realistically lead to a major security incident. What are they actually thinking about in terms of risky paths that AI could take at their organization? So some top scenarios include AI agents with excessive permissions accessing sensitive customer data, malicious MCP servers paired with preapproved automations, and then also unauthorized transactions with overprivileged AI systems. One of the quotes included an AI agent with excessive permissions could inadvertently or even maliciously access sensitive customer data, modify critical configurations, or execute unauthorized transactions. And so it's really hard to get that visibility, but also get the proof of, like, who that came from, from which device, from which agent, from which request. It's really hard to backtrack into where did this come from and what changes were made.

Next up, we asked, like, just in your ideal, vision, let's think, you know, one to two, three years from now, what does SafeCare AI actually look like? What would help CISOs sleep better at night? And so number one, it's tackling monitoring and visibility. It's hard to place any controls in in place when you don't even know what's going on, when you don't have that visibility. So things like real time tracking, activity tracking, comprehensive logging, continuous observability were things that were mentioned.

Then once they get that visibility, they wanna enforce policies. So automated guard guardrails, government's framework frameworks, and policy driven access decisions. And then lastly, tackling audit and compliance is something that's gonna be pretty, pretty big.

Having those audit trails, compliance automation, accountability mechanisms so they can answer who, what, where, when, how.

This was my favorite question to ask. If CSOs got a million dollars out of the sky today, what would they spend it on tomorrow? And it was very interesting because although fifty five percent of them decided they wanted to spend more on security tools, a lot they forty percent said that they wanted to just increase their staffing and talent.

A lot of CSOs feel like they just don't have the expertise and the time and the knowledge to understand the new AI threat vectors. So expanding their team would really help them have the resources to focus on these kinds of initiatives.

In terms of what kind of security tools that were mentioned, a lot of the times, it was AI government governance and visibility come first. Quotes like, I need to see what AI is doing before I can secure it were mentioned.

And then lastly, the final question was, like, if you could give one piece of advice to another CISO planning security budgets for twenty twenty six, what would you what recommend to them? And the critical takeaway was to avoid tools for all and instead focusing on understanding your specific risks, establishing governance, and building foundational controls before looking at different kind of solutions. First, understand what is the biggest risk to your organization, your industry, your employees today. Prioritize and focus, you know, the eighty twenty rule of, you know, eighty percent will cover a majority of the attacks, and then start with governments.

And so this, concludes all of the survey results. Some major takeaways include that AI is a production reality. AI has access to code repositories, production systems. So it is alive and taken action today.

But there's definitely a confidence gap. Only fourteen percent of CSOs are ready for the new wave of attacks that can come through AI. Data leakage is the number one thing that are keeping CSOs awake. The fact that it has access to all of these confidential and proprietary systems.

Although they are looking to increase spend, eighty eight percent are looking to increase spend, but about thirty percent of them aren't spending on AI security today.

So those are the major takeaways. Now we do have some recommendations on what this means for you and your organization. First up, discussing how do you communicate these risks to your executive team to influence budgets moving forward? And Nikhil has a little bit more on that.

Yeah. Absolutely. So I think, you know, there's obviously, Kasia, you touched on it earlier, right, is is starting with understanding what's in your environment, and categorizing the risk.

But just to give, like, a very kind of explicit example on how, you know, some of our, you know, some some of the for a lot of the customers that we act as trusted advisers for, you know, how should you message to your board or to your peers why an investment in AI security is is is important? And so, you know, a really good example that I like to demonstrate is how, you know, the the user and device population that you can argue has the most access and is the early adopter of, AI tools is usually your software engineers and your developers.

And so when you're talking about, you know, let's say, buying, a AI security solution, you actually want to share information in the form of, look. You know, the engineering team has embraced AI coding tools.

Right? Some percentage of them are using it daily.

And while it's great for productivity, right, it's helped us ship features so much faster.

Frankly, the blind spot is we have no clue what MCP servers and what tools that they're using.

And, actually, go in, right, talk to that user and device population, and ask them. Right? Ask them how they're using the tools. Ask them things that they might have, uncovered as risky. Now you have real data points that you can use to quantify that business risk and make a case to to, you know, adopt the tooling and and and find that budget.

So, really, you know, the the takeaway is that talk about getting and asking for budget as enabling, not restricting is is what I would take away.

Awesome. So switching gears, you know, and just to talk a little bit about Beyond Identity and our approach.

You know, obviously, we have identity in our name. You know, we take an identity centric approach to securing agentic use cases and and and AI in your organization.

And what that looks like and what that means is when you tie a user device and an agent to a hardware backed credential, what we're able to do is we are able to answer you know, the user, created an agent. That agent runs on a specific device.

That agent invoked a specific tool.

It's governed by, you know, a specific policy that you have written, and you have signed visibility and auditability over everything that happened. And so, really, when you look at how Beyond Identity approaches the adoption of AI in your organization securely, we take it from this kind of, identity centric approach where you have to tie the identity back to the device and the hardware back credential. And so if you hit the next slide, you know, I'm happy to actually give just a quick demo of what that looks like.

And so, what I actually have here is our Beyond Identity, Identity Suite product, which is the actual tool that our security team internally uses to monitor, AI usage in our organization.

And so just to quickly start talking about, you know, what the approach is and how it works, we effectively give users, an enrollment method to come in and and basically register with the Beyond Identity agent shield. And, subsequently, what that looks like is when these users are leveraging, agentic tools, specifically local agentic tools like Cloud Code or Gemini CLI, What we're actually able to do is we're we're able to actually see every kind of prompt that this user or this developer is is doing, you know, as part of their work. And the type of visibility that that gets us is we can actually see, you know, what models are being used across the organization.

We can see which specific AI tools are being leveraged. So in this case, you know, we're a pretty anthropic cloud heavy shop. You can see there's a lot of cloud CLI and and, agent SDK usage.

We can then go in a step further, and we can actually understand what tools is this agent are these local agents actually calling? So you can see that there's a whole host of firewall, MCP tooling that's being used to crawl the Internet.

We also cut that information, in a whole host of different way ways so that you can see kind of an over time, you know, what tools this user did. And how that becomes really interesting and and how that can be really valuable is when you can actually write policy on, this tool calling or this MCP calling. So the example that I wanna give is, you know, if I were to ask Claude, hey. Tell me what files are in this folder.

It's actually gonna use the Bash tool, and go ahead and, you know, give me a list of all the the files in my, in the current directory that I'm in.

Let's say now that this device fell out of security compliance. For example, maybe my firewall is turned off, or there is some indication that this device is now risky.

Under those scenarios, you might not want, you know, an an an agent powered by an LLM to be able to ask questions of this device. Right? Maybe there's malware on this machine. It's going to try to call Claude to go in and do some additional reconnaissance.

What's really great is that when you have this proxy solution, you can go in, set granular access policy, and be able to block, any LLM from running these type of commands. So if I try to go in and run that same query again, you'll notice that this tool access was denied.

And so this is just a quick taste of how, you know, our AI IdentityShield product works.

So, yeah, Kasia, I'll pass it back to you to talk about, you know, how folks can learn more if this is if if if if, you know, folks wanna learn more about how to to adopt AI securely in their organization.

Yeah. Absolutely. It's pretty easy. The next step is super easy.

We have a site beyond identity dot a I that talks a little bit more about what Nikhil was sharing on all the use cases. We actually just recently launched early access for our I AI security suite. So when you go to beyond identity dot a I, you can learn about it, but also sign up for how to get access. So it's really quick, thirty second form.

We'll let you know what the next steps are, but we're really excited to launch this and tackle a lot of the issues that were brought up by the fifty CSOs that we interviewed. But thank you so much to everyone who was able to join. I hope you learned a little bit, not only about the market of AI security, but also, you know, what solutions are in place and what we're designing towards. So thank you so much.

See you next time.

Thank you.

Key Takeaways

  • AI is already production: AI tools now access source code, cloud infrastructure, and production systems, making AI security a present-day risk, not a future concern.
  • Data leakage tops fears, followed by Shadow AI, ungoverned agents, and over-privileged access.
  • Controls lag behind adoption: Access governance is still largely manual and role-based, with limited automation or policy-driven enforcement.
  • Budgets are shifting to AI: 88% of organizations plan to increase AI security spend for 2026, even though many lack dedicated AI security budgets today.
  • Visibility before control: CISOs prioritize monitoring, identity, and observability first, then policy enforcement, governance, and compliance automation.

Full Transcript

Welcome everybody to another webinar from Beyond Identity. We have juicy, juicy content and survey results for you guys. But before we get into that, just like a quick round of introductions.

Hello. I'm Kasia. I'm one of the product marketing managers here at Beyond Identity, and we have Nikhil as well. Would you like to introduce yourself?

Sure. Hey, Kasia, and hey, everyone. Thank you all for joining. My name is Nikhil, and I lead the product team here at Beyond Identity.

Awesome. Well, today, we're going to be talking about what fifty CSOs told us, the top AI risks they're budgeting for in twenty twenty six. So there's fifty CSOs surveyed, twenty different questions, some of them open ended, multiple choice, rankings. So we really get in the minds of these CISOs.

Just just covering some ground on the industries and the company sizes that these CISOs work for. So primarily, they're enterprise, so a thousand plus employees, but some of them are mid market as well, primarily in software, financial services, some in, like, business consulting as well. So just so you know, the lay of the land. But let's get right into it with question one of what are the top AI tools that, these organizations are using today? Primarily, ChatGPT is number one, not a huge shocker, but Microsoft Copilot really right like, right behind. There's a lot of AI being used in various SaaS tools. So think like Jira or GitHub, the ones that come embedded natively within those tools.

And things other ones that you've probably heard of as well, Gemini, Claude, internal AI tools, custom AI tools, the pack as well.

Then we also asked how these systems would describe their AI usage in their organization, like, on a day to day basis. Is it something that's widely adopted? Is it something that's precautionary and just on a case by case basis?

But a majority of it is wide wide widely used. So widely adopted at forty five percent in production use as well. So touching things like code repositories for thirty three percent. So we can really see here, like, AI is no longer a choice. It is something that CSOs kind of have to, like, let the organization use or else the business falls, falls behind.

And then which environments does AI currently have access to at these organizations? Primarily knowledge bases. That's something that's very expected just because of how effective referring to all of these knowledge bases that we have. But Source Code being at number two is definitely an eye opener.

That's one thing that obviously can pose a lot of risks at an organization. Cloud infrastructure having access, AI tools having access to that, production as well. These are something that definitely surprised me on, you know, how AI has access to these. But again, it goes to shows how organizations really can fly and grow with AI being connected to these, environments.

Things like customer PII, financial data, kind of follow behind those.

And then how would these CISOs rate their organization's current capacity to manage access and security. Most of them said that they have added adequate controls in place, nothing, extremely strong, but also not lagging behind. Following that, they believe that they have strong ones.

But you can see that a lot of CISOs just feel like they're in the middle of the pack.

Nikhil, I'm gonna punt this one to you.

Yeah. Absolutely. So, you know, when we dug deeper, right, our goal with this whole survey was to really understand, what does this mean? You know, what what does the AI adoption in the in the business and in the organization really mean from a in-depth security perspective. And so, you know, Kasia, I think you've actually given us a really great overview on, you know, how AI is being utilized, in the organization to kinda set a baseline.

Something that we learned is that all of the controls around how access is being governed is pretty manual, where giving licenses and, authorizing, AI tools access to various tools, are either a manual or role based approval.

Very little of that, access approvals is automated or policy based, which is really where we want to see the industry move towards.

Sounds good. Yeah.

Then for the next question, we also ask what their the CSO's level of concern is for AI enhanced attack techniques.

So this is a little bit different than, like, just generalized AI threats or AI risks. But this is these are attackers using AI to make other types of attacks, things like phishing, social engineering, deep fake impersonations even easier, faster, more accessible, and cheaper for them.

So we wanted to see if this is also a concern within CISOs minds, and it looks like there are high concerns primarily with AI related phishing and social engineering. We know from, know before data that even with a lot of security awareness and training, employees still click about forty per four four percent of the time. And so with AI making it easier, simpler, faster to deploy these types of attacks, it means that they're more widespread, they're more convincing, and unfortunately, employees still click four percent of the time. So this is something that's still top of mind for CSOs.

Next step, we also ask how confident are CSOs that their organization can detect and respond to an AI related security incident today. And so right now, it's, again, just most majority are somewhat confident. The majority are uncertain.

About fifteen percent of them are not confident whatsoever. So you can see here that, you know, again, the their hands are tied. Their business needs to grow with AI, and so they need to deploy it, but they don't have that certainty with security solutions today that it can actually combat the new risks and threats that they unfold.

Yeah.

Go ahead, Nikhil.

So so that really just, you know, jumps right into what is the biggest risk that's top of mind for every security leader. And, what's gonna come through the next couple slides is that data leakage and ShadowAI top the threat list in terms of, you know, when you have AI in your organization, you know, what is business critical information, and is that being leaked? And and, really, when we dig deeper into, you know, examples of what these fears are, There are things like employee leaking data through ShadowAI because, they're using unapproved, you know, unapproved AI tools. There are developers using, you know, MCP servers and and pushing information or proprietary information where you don't have visibility or controls. And so this data leakage and and is is is is something that is top of mind, from a CSO perspective.

And if we go to the next slide.

Yeah. I mean, we we kinda touched on it. And so when we tie that back to budgets and where security leaders want to spend their investments, it's really figuring out, you know, how do we address that data leakage problem, how do we address that shadow AI and that lack of visibility, and and, look to implement tools and controls to solve that problem.

Yep.

And so also was very curious to see if any CSOs have experienced any AI related incidents to date.

A majority of them indicated no incidents or unknown, but about twenty two percent have had near misses, at least one incident, multiple incidents.

So one in five organizations, you know, that's starting to be a pretty significant number.

And so this is when it starts to get into, you know, how does this shape CISO budgets looking at twenty twenty six? And so we asked a couple of different questions here, not only regard regarding AI's their AI budgets, but also their total budgets.

First, starting out on how they look to change their AI security spending in twenty twenty six. Most the the top one is actually moderate increase. So this is a ten to twenty five percent increase over last year.

The next one being slight increase, the next one being significant increase. So then in that case, totaling that up about eighty eight percent of organizations are increasing their security spending for twenty twenty six. Not a shock for knowing that they're being connected to code repositories, production systems.

But, you know, there's still some that are keeping budgets the same and no budget.

And so now here asking, do see this expect their total security spending to change compared to last year? And so, again, about seventy seven percent are increasing it. So eight percent increasing AI spending, seventy seven percent increasing their total budgets. Number one comes at ten to twenty five percent, and then also, you know, slight ten percent increase for the next runner-up.

Although, when we take a look at the snapshot today of if they're spending on AI security today, about thirty percent have no dedicated AI security budget, which is a critical gap, especially if seventy five percent have AI accessing source code. But, again, they're looking to change that with, their decisions in twenty twenty six.

Alright. Nikhil, wanna take this one away?

Yeah. Absolutely. And so, you know, when we asked, right, like, what was the driver behind, you know, any existing AI security investments, You know, the the top two items that we found is, first off, risk reduction. Right? Finding those vulnerable user and device populations and figuring out how can we manage and reduce risk and implement controls for those user device populations.

And then, of course, you know, figuring out how to enable secure adoption of AI.

I mean, it's inevitable. Right? And I think, Kasia, you referenced it earlier in the presentation how fifty percent plus of organizations have deployed AI, or maybe seventy percent or above, I think, was the stat. And so when we looked at the drivers behind the AI security investment, managing risk is really the top driver.

And when we get to the next slide around how we should talk about this, we go back to that data protection, right, and that data exfiltration risk. Anything that's PII, anything that's mission- or business-critical: what's top of mind is making sure that information doesn't get leaked. And some of the stories we've heard when we've talked to leaders in the field are pretty varied. It could be IP. It could be code.

It could be PII that will open the business up to regulatory risk or fines and things like that. And so that's why there's a really big data security component to why this is such a high priority behind the investment.

And so the final five questions we asked were more open ended. These are just some quotes and high-level takeaways when we look at patterns in the CISO responses.

So the first one was: what emerging AI security risks are you monitoring that aren't widely discussed today? We brought up some examples like data leakage, prompt injection, and shadow AI. But is there something we're missing or not considering that these CISOs are thinking about? It turns out they're also concerned about agent-to-agent exploitation and vibe coding.

They aren't just looking to secure AI models. They're looking to secure AI behavior, decision authority, and identity for agents at machine speed. So here are some quotes. One: we're missing the collapse of time to zero-day exploits and breach-to-breakout times. Traditional approaches don't work.

We need to rethink approaches at the speed of AI.

Another: vibe coding is the greatest threat. At the moment, developers are prompting their way into unnecessarily complex code, and because they do not actually write it themselves, they do not actually understand what they might be shipping.

And then also: MCP agents lack a clear sense of identity. MCP agents assuming the role of a user do not have a birth certificate or any immutable characteristics.

Next up, we asked the CISOs to describe a scenario where AI access could realistically lead to a major security incident: what are they actually thinking about in terms of risky paths AI could take at their organization? Some top scenarios include AI agents with excessive permissions accessing sensitive customer data, malicious MCP servers paired with preapproved automations, and unauthorized transactions by overprivileged AI systems. One quote: an AI agent with excessive permissions could inadvertently or even maliciously access sensitive customer data, modify critical configurations, or execute unauthorized transactions. And it's really hard to get that visibility, but also get the proof of who that came from, from which device, from which agent, from which request. It's really hard to backtrack into where this came from and what changes were made.

Next up, we asked: in your ideal vision, say one to three years from now, what does safe AI actually look like? What would help CISOs sleep better at night? Number one is tackling monitoring and visibility. It's hard to put any controls in place when you don't even know what's going on, when you don't have that visibility. So things like real-time activity tracking, comprehensive logging, and continuous observability were mentioned.

Then once they get that visibility, they want to enforce policies. So automated guardrails, governance frameworks, and policy-driven access decisions. And then lastly, tackling audit and compliance is something that's going to be pretty big.

Having those audit trails, compliance automation, and accountability mechanisms so they can answer who, what, where, when, and how.
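To make that who, what, where, when, how idea concrete, here's a minimal sketch of the kind of structured accountability record the respondents are describing. It's purely illustrative: every field name and value is hypothetical, not taken from any specific product.

```python
import json
from datetime import datetime, timezone

# Hypothetical accountability record; all field names here are illustrative.
audit_event = {
    "who":   {"user": "alice@example.com", "agent": "claude-code"},  # identity behind the action
    "what":  {"tool": "Bash", "action": "list_directory"},           # the operation performed
    "where": {"device": "laptop-1138"},                              # the machine it ran on
    "when":  datetime.now(timezone.utc).isoformat(),                 # timestamp for the audit trail
    "how":   {"policy": "dev-default-v3", "decision": "allow"},      # which policy governed it
}

print(json.dumps(audit_event, indent=2))
```

The point is less the schema and more that every agent action can be answered along all five dimensions after the fact.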

This was my favorite question to ask: if CISOs got a million dollars out of the sky today, what would they spend it on tomorrow? And it was very interesting, because although fifty five percent of them said they wanted to spend more on security tools, forty percent said they wanted to just increase their staffing and talent.

A lot of CISOs feel like they just don't have the expertise, the time, and the knowledge to understand the new AI threat vectors. So expanding their team would really help them have the resources to focus on these kinds of initiatives.

In terms of what kinds of security tools were mentioned, a lot of the time it was AI governance and visibility that come first. Quotes like, I need to see what AI is doing before I can secure it, were mentioned.

And then lastly, the final question was: if you could give one piece of advice to another CISO planning security budgets for twenty twenty six, what would you recommend to them? The critical takeaway was to avoid a tools-first approach and instead focus on understanding your specific risks, establishing governance, and building foundational controls before looking at different kinds of solutions. First, understand what the biggest risk is to your organization, your industry, and your employees today. Prioritize and focus, you know, the eighty-twenty rule, where covering your top risks covers the majority of the attacks, and then start with governance.

And so this concludes all of the survey results. Some major takeaways: AI is a production reality. AI has access to code repositories and production systems, so it is live and taking action today.

But there's definitely a confidence gap. Only fourteen percent of CISOs are ready for the new wave of attacks that can come through AI. Data leakage is the number one thing keeping CISOs awake: the fact that AI has access to all of these confidential and proprietary systems.

And although eighty eight percent are looking to increase spend, about thirty percent of them aren't spending on AI security today.

So those are the major takeaways. Now we do have some recommendations on what this means for you and your organization. First up: how do you communicate these risks to your executive team to influence budgets moving forward? Nikhil has a little bit more on that.

Yeah. Absolutely. So obviously, Kasia, you touched on it earlier, right: it starts with understanding what's in your environment and categorizing the risk.

But just to give a very explicit example: for a lot of the customers we act as trusted advisers for, how should you message to your board or to your peers why an investment in AI security is important? A really good example I like to use is that the user and device population that arguably has the most access, and is the earliest adopter of AI tools, is usually your software engineers and developers.

And so when you're talking about, say, buying an AI security solution, you actually want to share information in the form of: look, the engineering team has embraced AI coding tools.

Right? Some percentage of them are using it daily.

And while it's great for productivity, right, it's helped us ship features so much faster.

Frankly, the blind spot is we have no clue what MCP servers and tools they're using.

And actually go in, talk to that user and device population, and ask them. Right? Ask them how they're using the tools. Ask them about things they might have uncovered as risky. Now you have real data points that you can use to quantify that business risk and make a case to adopt the tooling and find that budget.

So, really, the takeaway is to frame asking for budget as enabling, not restricting.

Awesome. So switching gears to talk a little bit about Beyond Identity and our approach.

You know, obviously, we have identity in our name, and we take an identity-centric approach to securing agentic use cases and AI in your organization.

And what that looks like, and what that means, is that when you tie a user, a device, and an agent to a hardware-backed credential, we're able to answer: this user created an agent. That agent runs on a specific device.

That agent invoked a specific tool.

It's governed by a specific policy that you have written, and you have signed visibility and auditability over everything that happened. So really, when you look at how Beyond Identity approaches adopting AI securely in your organization, we take this identity-centric approach where you tie the identity back to the device and the hardware-backed credential. And if you hit the next slide, I'm happy to give a quick demo of what that looks like.
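As a rough sketch of what that "signed visibility" could look like, here's a toy example that binds an audit event to a device credential. Assumptions are loud here: the HMAC key is only a stand-in for a hardware-backed key, which in a real deployment would live in a TPM or secure enclave and never be exportable, and all the names are made up for illustration.

```python
import hashlib
import hmac
import json
import time

# Stand-in for a hardware-backed credential; a real key would never leave
# the TPM or secure enclave, so the signing would happen inside that hardware.
DEVICE_KEY = b"demo-device-key"

def sign_event(event: dict) -> dict:
    """Bind the user -> device -> agent -> tool chain to the device credential."""
    payload = json.dumps(event, sort_keys=True).encode()
    event["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return event

signed = sign_event({
    "user":   "alice@example.com",  # who created the agent
    "device": "laptop-1138",        # which device it runs on
    "agent":  "claude-code",        # which agent acted
    "tool":   "Bash",               # which tool it invoked
    "policy": "dev-default-v3",     # which policy governed the call
    "ts":     time.time(),
})
print(signed["signature"])
```

An auditor holding the key can recompute the signature over the event body and verify that nothing was altered after the fact, which is what makes the who, which device, which agent chain provable rather than just logged.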

So what I actually have here is our Beyond Identity Identity Suite product, which is the actual tool that our security team internally uses to monitor AI usage in our organization.

Just to quickly talk about what the approach is and how it works: we effectively give users an enrollment method to come in and register with the Beyond Identity agent shield. Subsequently, when these users are leveraging agentic tools, specifically local agentic tools like Claude Code or Gemini CLI, we're able to see every prompt that this user or developer is running as part of their work. And the type of visibility that gets us is that we can actually see what models are being used across the organization.

We can see which specific AI tools are being leveraged. So in this case, we're a pretty Anthropic Claude heavy shop. You can see there's a lot of Claude CLI and Agent SDK usage.

We can then go a step further and actually understand what tools these local agents are calling. So you can see there's a whole host of Firecrawl MCP tooling being used to crawl the Internet.

We also cut that information in a whole host of different ways, so you can see over time what tools this user called. And where that becomes really interesting, and really valuable, is when you can actually write policy on this tool calling or MCP calling. So the example I want to give is: if I were to ask Claude, hey, tell me what files are in this folder.

It's actually gonna use the Bash tool and go ahead and give me a list of all the files in the current directory I'm in.

Let's say now that this device fell out of security compliance. For example, maybe my firewall is turned off, or there is some indication that this device is now risky.

Under those scenarios, you might not want an agent powered by an LLM to be able to ask questions of this device. Right? Maybe there's malware on this machine that's going to try to call Claude to go in and do some additional reconnaissance.

What's really great is that when you have this proxy solution, you can go in, set granular access policy, and block any LLM from running these types of commands. So if I try to go in and run that same query again, you'll notice that the tool access was denied.
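The demo doesn't show Beyond Identity's actual policy syntax, so here's a hedged sketch of the proxy-side decision it describes: when the device's posture degrades, shell-capable tools get denied. Everything here, the names, the posture fields, the tool list, is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class DevicePosture:
    firewall_enabled: bool
    disk_encrypted: bool

# Hypothetical rule: tools that can execute shell commands are blocked
# whenever the device falls out of security compliance.
SHELL_TOOLS = {"Bash", "Shell", "Terminal"}

def authorize_tool_call(tool: str, posture: DevicePosture) -> str:
    """Proxy-side check run on every agent tool call."""
    compliant = posture.firewall_enabled and posture.disk_encrypted
    if not compliant and tool in SHELL_TOOLS:
        return "deny"  # the agent sees its tool access denied
    return "allow"

# Firewall turned off, as in the demo: the Bash call is denied at the proxy.
print(authorize_tool_call("Bash", DevicePosture(firewall_enabled=False, disk_encrypted=True)))
```

The design point is that the decision happens at the proxy, between the agent and the tool, so it applies uniformly no matter which LLM or local agent issued the call.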

And so this is just a quick taste of how our AI IdentityShield product works.

So, yeah, Kasia, I'll pass it back to you to talk about how folks can learn more about adopting AI securely in their organization.

Yeah, absolutely. The next step is super easy.

We have a site, beyondidentity.ai, that talks a little bit more about what Nikhil was sharing and all the use cases. We actually just recently launched early access for our AI security suite. So when you go to beyondidentity.ai, you can learn about it, but also sign up to get access. It's a really quick, thirty-second form.

We'll let you know what the next steps are, but we're really excited to launch this and tackle a lot of the issues that were brought up by the fifty CISOs we interviewed. Thank you so much to everyone who was able to join. I hope you learned a little bit, not only about the market of AI security, but also about what solutions are in place and what we're designing toward. So thank you so much.

See you next time.

Thank you.