Thought Leadership

Will the “A” in RSA Be Replaced by “AI”?

Written By
Published On
May 1, 2023

In this episode, Beyond Identity's CTO, Jasson Casey, shares insights and key takeaways from the RSA Conference 2023.

Transcription

Joshua

Hello, and welcome to "Cybersecurity Hot Takes." This is your producer, Joshua, once again, filling in for the amazing and incomparable, Reece Guida. May she return soon. And with me on the left side of my screen, I have... You don't know who's on the left? 

Just pick. 

Jasson

We have no idea. I'm going to go with H.B. He's on the left side of your screen. 

Joshua

He is on the left side of my screen. 

H.B.

Well, in that case, I'm H.B., and I do product strategy here at Beyond Identity. 

Joshua

Yes, you do. And on the right side of my screen, I have... 

Jasson

I guess that leaves me. I'm Jasson Casey. I'm the CTO here. 

Joshua

Exactly. Welcome, everyone. And for today's "Cybersecurity Hot Take," will the A in RSA be replaced with AI? Jasson Casey was recently at the RSA conference, as was much of our audience. And there were a lot of hot topics and a lot of kind of the same conversations going on. 

And so, he's going to fill us in on what some of those were, and we are going to discuss them. Jasson, how was RSA? 

Jasson

So, good show. Attendance was probably at the same level as 2020, before COVID, or the zombie apocalypse, shut the world down. So, that was interesting. You know, as is my experience from before, RSA is generally...it's a very mixed show, right? 

You spend a lot of time with partner companies, not just prospects and customers. But I think it was good all around. Thematically, the stuff that was on the floor, I mean, clearly AI's big theme, right? It's, you know, hastily stenciled in paint and toothpaste on everyone's signage. Other big themes that seem to be consistent, attack surface or attack surface mapping or attack surface management. 

It seems like I only saw a few deception companies. A while back, it seemed like everyone had a deception thing. MDR, ITDR, XDR. Basically, if you had an R, it was a big deal, and lots of people were putting R's at the end. Honestly, I didn't see a lot of new, novel things in my mind. 

The only thing that I can say that really stands out as interesting and novel, and I don't know if it's fair because I kind of ran into them before RSA, was kind of an enterprise secure browser, which was really, really interesting. There's a couple people in the space, but the company I ran into was called Talon Security. 

But yeah, overall, it was a good show. It was worth going to, at least for me and our group. 

Joshua

That's awesome. Wish I could have gone, but maybe next year. There's always next year, you know? So, when it comes to, you know, kind of seeing so many of these people talk about the same things, which ones do you think were like either... could you see being the most impactful? 

Do you think any of them really had any staying power to, like, change this landscape as we know it, or...? 

Jasson

Well, that's just it. Like, I think that a lot of the categories are categories that existed for a while, right? XDR... 

Joshua

Sure. 

Jasson

MDR are natural evolutions, I think, of just EDR and kind of more modern antivirus. So, anyone coming into that space, you know, how in the world are you going to compete against the big players that are kind of already there? Right? You know, whether it's the recent adults like CrowdStrike and SentinelOne, or the graybeards like Palo and whatnot. 

That's the tricky thing, right? Like, how do you go in as anything other than a feature that they may just kind of buy? I guess I forgot the theme on zero trust. Clearly, a lot of people were talking about zero trust. 

So, the reaction that I got from a lot of customers on zero trust really falls into one of two buckets. One set of customers and prospects is burnt out on the word because, much like AI this year, it's a hastily appended phrase that generates organic search metrics for marketing. Maybe that's too cynical. 

But certainly, a large group of the market is kind of burned out on hearing you're zero trust because they've heard everyone say they're zero trust. 

Joshua

Sure. 

Jasson

But I did have a lot of really interesting discussions with a bunch of CISOs that really just boiled down to, at the end of the day, if you really want to make progress on prevention, right, not just detection and response, you kind of have to break down these siloed walls between identity, device trust, and device identity. And you know, traditionally, we have these silos, right, for identity, for endpoint security, endpoint trust, endpoint configuration management, and, you know, they only kind of join, or the information between those systems only comes together, at the detection and response phase, generally through SIM and SIEM analytics, maybe some other ways as well. 

But if you start to organize your thinking, and if you can somehow break down the walls from this vertical organization to a horizontal organization, right? If I join device and user identity, if I join device trust, really all three of those things and make those access context or access concerns, and I want to make decisions about access using that information, then I'm kind of moving the needle out of detection response and into more prevention. 

And almost all of my conversations really kind of centered around that. Now, granted, that might be a self-selection bias as well given, you know, that's kind of what we do. But that was certainly some of the more interesting conversations I had. 
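The joined-up access decision Jasson describes, user identity, device identity, and device trust evaluated together at access time rather than correlated later in a SIEM, can be sketched roughly as follows. This is a minimal illustration, not any vendor's actual policy engine; the field names, thresholds, and sensitivity tiers are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AccessContext:
    user_authenticated: bool      # strong user authentication succeeded
    device_identity_bound: bool   # device credential is bound to this user
    device_trust_score: float     # 0.0-1.0 posture score from endpoint signals
    resource_sensitivity: str     # "low", "medium", or "high" (illustrative tiers)

def allow_access(ctx: AccessContext) -> bool:
    """Join user identity, device identity, and device trust into one
    access-time decision instead of correlating them after the fact."""
    if not (ctx.user_authenticated and ctx.device_identity_bound):
        return False
    # More sensitive resources demand a healthier device.
    required = {"low": 0.3, "medium": 0.6, "high": 0.9}[ctx.resource_sensitivity]
    return ctx.device_trust_score >= required

print(allow_access(AccessContext(True, True, 0.7, "medium")))  # True
print(allow_access(AccessContext(True, True, 0.7, "high")))    # False
```

The point of the sketch is structural: the prevention happens because all three signals gate the access decision up front, rather than meeting for the first time in post-hoc analytics.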

H.B.

Were a lot of people more comfortable with their zero-trust strategies and what they were specifically looking for? Because I know that user and device trust hasn't typically been sort of central to the conversation in these kind of, like, conference settings. 

Jasson

I think, you know, obviously, everything's a life cycle. Customers are no different, right? So, different enterprises are at different phases of their own kind of maturity and growth. There are people that are far into the curve. They know exactly what they want. 

They've been working with zero-trust concepts before zero trust was a marketing term. And I think a good example there would be... Well, actually, I don't know if I can really name names on some of these things. But there were a couple CISOs that we spent time with where they knew exactly what they wanted. In their words, they called it device trust and device identity, and linking that to user identity. 

And they had been asking for this for a while from their CASB providers. But their CASB providers couldn't wrap their heads really around what the customer was asking for. And it's fundamentally different than kind of a CASB approach anyway. And so, very, very articulate, knew exactly kind of the crux of the problem. 

Then you've got people mid-journey where they know it's important, they know they need to get involved, but they don't really understand or they don't have their own strategy in place. Like, what's the first step for them in a zero-trust journey? And then I'd say there's the third group, which is, you know, maybe just earlier in the education side. 

H.B.

So, a lot of it was anchored on people who previously had been hoping for this to come from CASBs, from cloud access security brokers? 

Jasson

I mean, so yeah. One dinner, I spent quite a bit of time with a CISO on that particular topic. And... 

H.B.

I was just surprised because I haven't heard CASBs talked about in a favorable tone in quite a few years now. So, I was kind of curious. 

Jasson

Well, in his mind, it had to do...I think it had to do with footprint, right? So, where does it make sense? Which is honestly kind of how we got here as well, right? Where does it make sense to place certain, you know, network functions or system functions? And if I already have footprint established, right, in the endpoint and in the cloud, clearly it's best if I can establish those functions in that existing footprint, right? 

Who wants to manage a bazillion agents? Who wants to manage a bazillion different services? So, I think that's kind of what brought him to that point. He had a good relationship with this CASB vendor. He saw them as kind of leading the area, and he was just kind of asking them kind of, why not? 

H.B.

So, does that mean that like he's still seeing himself as being a CASB user just via, like, a SASE or SSE, and now he's just understanding that the SASE platforms and the SSE platforms need external support from an identity solution to be able to serve user and device trust? 

Jasson

So, you know, our conversation went all over. But I would say, in his mind, at the end of the day, his focus is less on checking the box for all the acronymed products and more on, can he definitively say that for all access, he actually understands the risk of the device, the identity of the device, the identity of the person relative to what they're trying to do? 

You know, that solution, everything doesn't have to be micro-segmentation for that problem, right? Everything doesn't have to be SASE for that problem. So, I think he was much more kind of results-driven, and the tools necessary for various parts of his workforce would be what was brought to bear. 

It wasn't kind of like he had his Bingo card where he was trying to check off the six steps of SASE as prescribed by product marketing. 

H.B.

So, most importantly, how did he want us to shoehorn GPT into the equation? 

Jasson

You know, we didn't actually... That night, we didn't talk GPT at all, we didn't talk AI at all. I mean, I know I certainly feel burnt out a little bit on the topic from a marketing and tech bro pump perspective. 

You know, it's an interesting technology and it certainly has its applied uses. But ChatGPT and large language models don't solve all problems. They solve a couple of problems. Anyway, it wasn't a large topic of discussion. 

H.B.

It reminds me a little bit about quantum computer hype over time, that people just imagine that a quantum computer is also magically a classical computer. 

Jasson

Oh, shoot. You know what? I totally forgot there was a standout at RSA. And it's a good thing I don't remember their name because it's not...what I have to say is not good. 

Joshua

Oh, no. 

Jasson

They were advertising themselves as post-quantum resilient. It's like, oh, cool, post-quantum resilient software. You know, it's a real thing, right? NIST is working on certifying these post-quantum or quantum-resistant algorithms. Maybe it's like a library company, right? So, you know, we clearly need to be up to date and be algorithm-agile and know all these players. Let me go talk to them. 

And so, they started it off as like, "Well, our approach is really kind of based on symmetric encryption and whatnot because, like, with good keys and good pseudo-random number generators and whatnot, you can...with things like a strong AES, like, you actually can have quantum-resilient functions." And in the back of my mind, this kind of made sense, right? 

Like, one-time pads are perfectly secure and whatnot. But then the obvious question becomes, well, the whole thing with a symmetric algorithm is you need...you know, you need the same key on both sides. So, it kind of raises the key distribution problem, right? Like, how do I generate a key and pass it? Or, how do I synchronize on something securely that then generates the same keys, right? 

The problems are probably reducible to each other. Again, like, I'm not actually a cryptographer. But you know, I know a few, therefore, I'm dangerous. It's like the Holiday Inn Express defense. But he started going on about how that's their secret sauce, and that's really where they cut the mustard, and it's like, "Oh, cool. You know, I've got a really long flight on Friday. Can you point me to a couple of the academic papers that this is based on? I'd love to read them on my flight home." 

And that is where I realized I was kind of talking to essentially a bullshit company. And so, to the best of my knowledge, and they're doing some sort of seed... They have some sort of consensus algorithm that achieves synchronization on a seed value that they use to kind of seed symmetric generators that then generate the same key on both sides. 
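The scheme Jasson pieces together, both sides deriving identical symmetric keys from a synchronized seed, is easy to sketch, and the sketch makes his objection concrete: the cryptography after the seed is trivial, so all of the security rests on how the seed got to both sides. The seed value and counter scheme below are purely illustrative.

```python
import hmac
import hashlib

def derive_key(shared_seed: bytes, counter: int) -> bytes:
    """Both sides derive the same 256-bit key from a shared seed and a
    counter. Deriving matching keys is the easy part; getting the seed
    to both sides securely is exactly the key-distribution problem."""
    return hmac.new(shared_seed, counter.to_bytes(8, "big"), hashlib.sha256).digest()

# Hypothetical seed -- in practice, establishing this is the hard part.
seed = b"established-out-of-band-somehow"
alice_key = derive_key(seed, 1)
bob_key = derive_key(seed, 1)
print(alice_key == bob_key)  # True
```

Anyone who can observe or influence the seed exchange gets every key the generators will ever produce, which is why the academic provenance of that exchange is the only question that matters.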

H.B.

SPAKE and SPEKE algorithms are well established in the space. So, like, I think your question to ask for some sort of academic provenance seems reasonable. 

Jasson

And so, honestly, what sent up my Spidey sense is when I asked him for the papers, he's like, "Well, it's a proprietary algorithm. We don't actually talk about how we do that." And it is like, "Oh, I get it. That's the kind of company you are." 

H.B.

Even with open SPAKE and SPEKE algorithms, like, I remember when I was at Aruba, there was a whole bunch of controversy around Dragonfly. You know, you get, like, within 50 yards of a NIST person while working on any of this stuff, and all of a sudden, you're a government agent. But... 

Jasson

I mean, here's the thing, right? And we're all taught this in, like, the first class we ever take in this world, is at least related to security, is, like, keeping a thing secret, keeping the how, the algorithm secret is not a defense. This is an area that does involve actual mathematics and a level of rigor far beyond what I can do. But like, it's a real discipline. 

It's a real specialty. People spend decades of their life just getting to the point of being able to do the work, and they even make mistakes, right? So, like, the reason we trust the algorithms that underlie cryptography is not just because the how has been published, but because it's been reviewed, and in some cases, it's actually been proven in, like, mechanical theorem-proving ways. 

In other circumstances, it's just stood the test of time, right? And so, short of being able to answer, do one-way functions exist, right, or, is P equal to NP, like it's really, really hard to know that you have a secure system or you have a system that truly is resistant to a Turing machine in a reasonable amount of time, or to a quantum Turing machine. 

And the idea that a company is selling a product based on kind of hiding the algorithm is comical. 

H.B.

Yeah, I can imagine. But yeah, I think that goes back to that whole idea that when people start talking quantum or AI, or back in the day before LLM, NLP, and natural language processing, like, there's just a lot of washing that goes on where people are just applying it to everything and anything and just making shit up. 

The interesting thing on the AI side though, that you brought up, was the types of problems that it solves similar to sort of how quantum solves like, you know, problems that are in a quantum domain, and within a narrow set, there's an appropriateness. This whole AIOps situation, you were saying that some of the work from, like, folks, like, Palantir seemed interesting, but that it was isolated to certain use cases, could you sort of explain that? 

Jasson

Yeah. So, I think it was Nelson who shared a video from Palantir. And they were showcasing kind of their applied use of LLMs as well as some other techniques in kind of a warfighter, kind of DOD military sort of mode. And you know, without getting into all the specifics, at a high level, you know, there's an incredible amount of information being surfaced to both kind of the soldier, right, in the field, as well as the combatant commander who's not in the field. 

And so, what they're doing, and at least what they were illustrating, is how do they use LLMs and other techniques to basically process all of this information, summarize it, and produce kind of actionable readouts in a way that makes sense and is much, much more efficient than having kind of a ton of human analysts do the same amount of work. 

And number one, it was fascinating to watch. Like, it honestly reminded me of kind of the old multiplayer MUDs. I forget what MUD even stands for now. But when we used to do the text-based MUDs back in the '90s, it was like, "Turn left. Do you see a light? Do you see a door? Open the door." 

Because it was a free text interface to essentially interact with your data. It was kind of like, you know, your AI assistant who's synthesizing all the data and giving you summaries, and then you're saying, "Give me a course of action, or give me a couple courses of action and give me the analysis on those actions, and then I'll choose one." And you know, a couple things immediately popped up. Number one, like, LLMs are not smart. 

LLMs are models that are trained on data sets. So, they clearly inherit the error of the data. They clearly inherit the bias of the data, right? So, that was kind of what I was getting at in terms of their...you really kind of have to understand what data the model was trained on to begin with. 

But it also reminded me a little bit of kind of my days when I was at a company called SecurityScorecard. And we had a similar problem. We were producing an incredible amount of data by...essentially assets connected to the internet, both passive and active. And we had to try and make sense of that to understand, you know, what's going on inside of an organization, what likely belongs to an organization. 

And inherently, you're going to have error in this data, right? Like, these aren't perfect models because there's no discrete or deterministic system you're modeling off of. But at the end of the day, if I'm producing 1,000 data points that suggest a course of action and I have 10% error, that means 100 data points are wrong, right? And 900 data points are still suggesting that course of action, right? 

Like, it's still worth the course of action. Or put in another way, if I have a team of analysts that can't handle the deluge of data that's being dumped on them, and they really want to focus on what's going to be the most impactful, or what's going to represent the most amount of risk for them to dig into, a probabilistic solution that comes with error is still going to make their outcomes better. 
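The arithmetic behind this argument is worth making explicit: with a known error rate, you can compute how much of the signal survives, and an overwhelming majority pointing the same way still justifies the action. A trivial sketch (the numbers are the ones from the conversation, nothing more):

```python
def supporting_points(total: int, error_rate: float) -> int:
    """Data points still supporting a course of action, given a known,
    quantified error rate in the underlying model."""
    return round(total * (1 - error_rate))

# 1,000 data points with a 10% error rate: 100 are wrong, 900 still agree.
print(supporting_points(1000, 0.10))  # 900
```

The design point is not the multiplication; it is that the error rate is measured and tracked, so the residual signal can be trusted rather than guessed at.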

Now, clearly, you have to understand that error and track that error. But even in the case that I just outlined, right, that team of analysts is able to chew through much more information that's pertinent to their company, that represents an outsized risk impact, than them kind of being able to make the decisions on their own. 

So, there's absolute applicability. But again, these things, they're not...they're models. We shouldn't treat them any differently than we treat a curve that we fitted to some data points in class. It's a more sophisticated version of that. They're not intelligent, but they do help us synthesize a vast amount of data quickly under a quantified amount of error. 

H.B.

So, there are uses in, like, the AIOps space and everything that people are looking at, but it's not the panacea that people are making it out to be at the moment. 

Jasson

Well, it was kind of like zero trust two years ago, right? Like, if 90% of the booths in RSA have AI written on them, what is the meaning of writing AI on the booth? Right? That doesn't necessarily mean that some of them aren't true, and that doesn't mean that some of them don't have applicability. 

But yeah, RSA is kind of a noisy event, right? 

Joshua

Sure. 

Jasson

Like, it literally felt like there was a carnival barker a couple of rows away from us at one point. There was this one booth where someone's got a microphone and they're trying to give this talk, and 90 degrees adjacent from the woman giving the talk was some sort of game, like Simon, right? Touch the colors as they light up with all sorts of... 

Joshua

Oh, I saw that. It looks very fun. 

Jasson

And he was playing the game really aggressively while literally four feet away from him this woman was trying to give this talk, and it's just madness. It's a madhouse. And this is what I mean by, like, the noise floor is high. 

H.B.

Nice, nice, I'm going to be forever imprinted with that image. It's like when you used to see people playing Dance Dance Revolution with a little bit too much zeal at an arcade. 

Joshua

It was exactly like that. I saw videos of that game, and you are exactly on the money on that. 

Jasson

Maybe it was just, "You know what, guys? This is Thursday of RSA. I don't care anymore. I'm going to do whatever." Maybe that's the thing. 

Joshua

So, was your takeaway that we need to wait a couple of years similar to the zero trust situation, and just see how things pan out, and in a couple of years, we'll have better AIOps the same way that now we're able to see a clearer picture around zero trust authentication for...? 

Jasson

I wouldn't say people need to wait a couple years, right? Stuff's happening now. But you know, when you're talking to vendors, like, you're going to find them based on marketing terms, they're going to try and get you to find them based on marketing terms. Once you're actually engaged in a conversation, just try and cut through it quickly, right? So, when a company's trying to pitch AI, right, independent of the technology, what business problem are they actually helping you solve, right? 

And if it's a human problem, like a human analyst-style problem, what is the quantifiable impact? What is the quantifiable error measure? How did they actually come up with that? What is their model training? Where is their model training coming from? Are they using a system where they even own their own intellectual property, or is it all based on either open source creative common style rights, or some other company's rights that maybe they don't even understand? 

Right? The bar to create an interesting demo is really, really low. The bar to build a product that effectively uses a new technology to solve a business problem is not. 

Joshua

Sure. And that's also good advice the other way around for vendors. Like, you know, make sure that once you're engaged in those conversations with prospects, that you are also, you know, cutting through your own kind of marketing big words, and actually telling them how we can help and how we can solve the issue. 

Jasson

Yeah. And this is one of those things where I would just be careful, right? Your mileage may vary. Like, I understand I'm somewhere in the distribution of how humans behave, but I'm probably not in the middle. When people call on me, I usually give them just a few minutes, where their job is to very quickly plant a flag on the map of where they live in my ecosystem. 

And if they can't do that, I immediately just shut them off and move on about my day, right? And once I know where they live, then the assessment is...then it becomes two things. Are they solving a problem I actually care about right now? And if the answer is no, I shut them off. If the answer is no, but I have time and there's something interesting or something compelling, then maybe I'll let them go a little bit more. 

But assuming they're solving something I actually care about, then I need an actual conversation about what they're doing, not the marketecture, right? And it doesn't have to be an engineer from the company, but it needs to be a technically trained person who understands, you know, what's what. 

Joshua

Yeah. Awesome. Thank you so much, Jasson, for giving us your rundown of your experience at RSA and all the conversations that were going on. Fascinating. You know, again, this was the second one since the world shut down. And so, it's kind of back into the swing of things. 

And it's bigger than ever apparently. Thank you so much, everybody, for listening to this episode of "Cybersecurity Hot Takes." Be sure to like, subscribe, share this episode everywhere. And what was Reece's phrase that she said? I'm trying... 

Jasson

I don't think you can repeat it. 

Joshua

Good riddance. 

Jasson

Yeah, good riddance, indeed. ♪ [music] ♪

Get started with Device360 today
Weekly newsletter
No spam. Just the latest releases and tips, interesting articles, and exclusive interviews in your inbox every week.

Will the “A” in RSA Be Replaced by “AI”?

Download

In this episode, Beyond Identity's CTO, Jasson Casey, shares insights and key takeaways from the RSA Conference 2023.

Transcription

Joshua

Hello, and welcome to "Cybersecurity Hot Takes." This is your producer, Joshua, once again, filling in for the amazing and incomparable, Reece Guida. May she return soon. And with me on the left side of my screen, I have... You don't know who's on the left? 

Just pick. 

Jasson

We have no idea. I'm going to go with H.B. He's on the left side of your screen. 

Joshua

He is on the left side of my screen. 

H.B.

Well, in that case, I'm H.B., and I do product strategy here at Beyond Identity. 

Joshua

Yes, you do. And on the right side of my screen, I have... 

Jasson

I guess that leaves me. I'm Jasson Casey. I'm the CTO here. 

Joshua

Exactly. Welcome, everyone. And for today's "Cybersecurity Hot Take," will the A in RSA be replaced with AI? Jasson Casey was recently at the RSA conference, as many of our audience was also. And there were a lot of hot topics and a lot of kind of the same conversations going on. 

And so, he's going to fill us in on what some of those were, and we are going to discuss them. Jasson, how was RSA? 

Jasson

So, good show. The level of people there was probably the same level as 2020 before COVID, or the zombie apocalypse shut the world down. So, that was interesting. You know, as is my experience from before, RSA is generally...it's a very mixed show, right? 

You spend a lot of time with partner companies, not just prospects and customers. But I think it was good all around. Thematically, the stuff that was on the floor, I mean, clearly AI's big theme, right? It's, you know, hastily stenciled in paint and toothpaste on everyone's signage. Other big themes that seem to be consistent, attack surface or attack surface mapping or attack surface management. 

It seems like I only saw a few deception companies. A while back, it seemed like everyone had a deception thing, MDR, ITDR, XDR. Basically, if you had a R, it was a big deal and lots of people were putting R's at the end. Honestly, I didn't see a lot of new novel things in my mind. 

The only thing that I can say that really stands out as interesting and novel, and I don't know if it's fair because I kind of ran into them before RSA, was kind of an enterprise secure browser which was really, really interesting. There's a couple people in the space, but the company I ran into was called the Talon Security. 

But yeah, overall, it was a good show. It was worth going to, at least for me and our group. 

Joshua

That's awesome. Wish I could have gone, but maybe next year. There's always next year, you know? So, when it comes to, you know, kind of seeing so many of these people talk about the same things, which ones do you think were like either... could you see being the most impactful? 

Do you think any of them really had any staying power to, like, change this landscape as we know it, or...? 

Jasson

Well, that's just it. Like, I think that a lot of the categories are categories that existed for a while, right? XDR... 

Joshua

Sure. 

Jasson

MDR are natural evolutions, I think, of just EDR and kind of more modern antivirus. So, anyone coming into that space, you know, how in the world are you going to compete against the big players that are kind of already there? Right? You know, whether it's the recent adults like CrowdStrike and SentinelOne, or the graybeards like Palo and whatnot. 

That's the tricky thing, right? Like, how do you go in as anything other than a feature that they may just kind of buy? I guess I forgot the theme on zero trust. Clearly, a lot of people were talking about zero trust. 

So, the reaction that I got from a lot of customers on zero trust really falls into one of two buckets. One set of customers' prospects is burnt out on the word because much like AI of this year, it's a hastily appended phrase that generates organic search metrics for marketing. Maybe that's too cynical. 

But certainly, a large group of the markets kind of burned out and saying you're zero trust because they've heard everyone say they're zero trust. 

Jonathan

Sure. 

Jasson

But I did have a lot of really interesting discussions with a bunch of CISOs that really just boiled down to, at the end of the day, if you really want to make progress on prevention, right, not just detection and response, you kind of have to break down these siloed walls between identity, device trust, and device identity. And you know, traditionally, we have these silos, right, for identity, for endpoint security, endpoint trust, endpoint configuration management, and, you know, they only kind of join, or the information between those systems only comes together at the detection response phase, generally through SM and SEM analytics, maybe some other ways as well. 

But if you start to organize your thinking, and if you can somehow break down the walls from this vertical organization to a horizontal organization, right? If I join device and user identity, if I join device trust, really all three of those things and make those access context or access concerns, and I want to make decisions about access using that information, then I'm kind of moving the needle out of detection response and into more prevention. 

And almost all of my conversations really kind of centered around that. Now, granted, that might be a self-selection bias as well given, you know, that's kind of what we do. But that was certainly some of the more interesting conversations I had. 

H.B.

Were a lot of people more comfortable with their zero-trust strategies and what they were specifically looking for? Because I know that user and device trust hasn't typically been sort of central to the conversation in these kind of, like, conference settings. 

Jasson

I think, you know, obviously, everything's a life cycle. Customers are no different, right? So, different enterprises are at different phases of their own kind of maturity and growth. There are people that are far into the curve. They know exactly what they want. 

They've been working with zero-trust concepts before zero trust was a marketing term. And I think a good example there would be... Well, actually, I don't know if I can really name names on some of these things. But there were a couple CISOs that we spent time with where they knew exactly what they wanted. In their words, they called it device trust and device identity, and linking that to user identity. 

And they had been asking for this for a while from their CASB providers. But their CASB providers couldn't wrap their heads really around what the customer was asking for. And it's fundamentally different than kind of a CASB approach anyway. And so, very, very articulate, knew exactly kind of the crux of the problem. 

Then you've got people mid-journey where they know it's important, they know they need to get involved, but they don't really understand or they don't have their own strategy in place. Like, what's the first step for them in a zero-trust journey? And then I'd say there's the third group, which is, you know, maybe just earlier in the education side. 

H.B.

So, a lot of it was anchored on people who previously had been hoping for this to come from CASBs, from cloud access security brokers? 

Jasson

I mean, so yeah. One dinner, I spent quite a bit of time with a CISO on that particular topic. And... 

H.B.

I was just surprised because I haven't heard CASBs talked about in a favorable tone in quite a few years now. So, I was kind of curious. 

Jasson

Well, in his mind, it had to do...I think it had to do with footprint, right? So, where does it make sense? Which honestly kind of how we got here as well, right? Where does it make sense to place certain, you know, network functions or system functions? And if I already have footprint established, right, in the point in cloud, clearly it's best if I can establish those functions in that existing footprint, right? 

Who wants to manage a bazillion agents? Who wants to manage a bazillion different services? So, I think that's kind of what brought him to that point. He had a good relationship with this CASB vendor. He saw them as kind of leading the area, and he was just kind of asking them kind of, why not? 

H.B.

So, does that mean that like he's still seeing himself as being a CASB user just via, like, a SASE or SSE, and now he's just understanding that the SASE platforms and the SSE platforms need external support from an identity solution to be able to serve user and device trust? 

Jasson

So, you know, our conversation went all over. But I would say, in his mind, at the end of the day, his focus is less on checking the box for all the acronymed products and more on, can he definitively say that for all access, he actually understands the risk of the device, the identity of the device, the identity of the person relative to what they're trying to do? 

You know, for that solution, not everything has to be micro-segmentation, right? Not everything has to be SASE for that problem. So, I think he was much more kind of results-driven, and the tools necessary for various parts of his workforce would be what was brought to bear. 

It wasn't kind of like he had his Bingo card where he was trying to check off the six steps of SASE as prescribed by product marketing. 

H.B.

So, most importantly, how did he want us to shoehorn GPT into the equation? 

Jasson

You know, we didn't actually... That night, we didn't talk GPT at all, we didn't talk AI at all. I mean, I know I certainly feel burnt out a little bit on the topic from a marketing and tech bro pump perspective. 

You know, it's an interesting technology and it certainly has its applied uses. But ChatGPT and large language models don't solve all problems. They solve a couple of problems. Anyway, it wasn't a large topic of discussion. 

H.B.

It reminds me a little bit about quantum computer hype over time, that people just imagine that a quantum computer is also magically a classical computer. 

Jasson

Oh, shoot. You know what? I totally forgot there was a standout at RSA. And it's a good thing I don't remember their name because it's not...what I have to say is not good. 

Joshua

Oh, no. 

Jasson

They were advertising themselves as post-quantum resilient. It's like, oh, cool, post-quantum resilient software. You know, it's a real thing, right? NIST is working on certifying these post-quantum or quantum-resistant algorithms. Maybe it's like a library company, right? So, you know, we clearly need to be up to date and be algorithm-agile and know all these players. Let me go talk to them. 

And so, they started it off as like, "Well, our approach is really kind of based on symmetric encryption and whatnot because, like, with good keys and good pseudo-random number generators and whatnot, you can...with things like a strong AES, like, you actually can have quantum-resilient functions." And in the back of my mind, this kind of made sense, right? 

Like, one-time pads are perfectly secure and whatnot. But then the obvious question then becomes, well, the whole thing with a symmetric algorithm is you need...you know, you need the same key on both sides. So, it kind of begs the key distribution problem, right? Like, how do I generate a key and pass it? Or, how do I synchronize on something securely that then generates the same keys, right? 

The problems are probably reducible to each other. Again, like, I'm not actually a cryptographer. But you know, I know a few, therefore, I'm dangerous. It's like the Holiday Inn Express defense. But he started going on about how that's their secret sauce, and that's really where they cut the mustard, and it's like, "Oh, cool. You know, I've got a really long flight on Friday. Can you point me to a couple of the academic papers that this is based on? I'd love to read it on my flight home." 

And that is where I realized I was kind of talking to essentially a bullshit company. And so, to the best of my knowledge, they're doing some sort of seed... They have some sort of consensus algorithm that achieves synchronization on a seed value, which they use to kind of seed symmetric generators that then generate the same key on both sides. 
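The scheme Jasson describes boils down to deterministic key derivation from a shared seed. A minimal sketch of that idea, purely illustrative and not the vendor's actual algorithm (the function and labels here are made up), shows why the whole design reduces to the key-distribution problem: if both sides hold the same seed, deriving matching keys is trivial, and agreeing on the seed securely is the unsolved part.

```python
# Illustrative only: two parties that somehow share a secret seed can each
# derive identical symmetric keys deterministically. The hard part -- agreeing
# on that seed securely in the first place -- is the key-distribution problem.
import hmac
import hashlib

def derive_key(shared_seed: bytes, context: bytes, counter: int) -> bytes:
    # HMAC-SHA256 used as a simple key-derivation function; both sides compute
    # the same 32-byte key from the same seed, context label, and counter.
    msg = context + counter.to_bytes(8, "big")
    return hmac.new(shared_seed, msg, hashlib.sha256).digest()

seed = b"agreed-out-of-band-somehow"  # how this is exchanged is the real question
alice_key = derive_key(seed, b"session", 1)
bob_key = derive_key(seed, b"session", 1)
assert alice_key == bob_key  # same inputs, same key on both sides
```

Nothing here is quantum-resistant by magic; the symmetric primitive may hold up, but the seed exchange still needs a vetted key-agreement protocol.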

H.B.

SPAKE and SPEKE algorithms are well established in the space. So, like, I think your question to ask for some sort of academic provenance seems reasonable. 

Jasson

And so, honestly, what sent up my Spidey sense is when I asked him for the papers, he's like, "Well, it's a proprietary algorithm. We don't actually talk about how we do that." And it is like, "Oh, I get it. That's the kind of company you are." 

H.B.

Even with open SPAKE and SPEKE algorithms, like, I remember when I was at Aruba, there was a whole bunch of controversy around Dragonfly. You know, you get, like, within 50 yards of a NIST person while working on any of this stuff, and all of a sudden, you're a government agent. But... 

Jasson

I mean, here's the thing, right? And we're all taught this in, like, the first class we ever take in this world, at least as it relates to security: keeping a thing secret, keeping the how, the algorithm, secret is not a defense. This is an area that does involve actual mathematics and a level of rigor far beyond what I can do. But like, it's a real discipline. 

It's a real specialty. People spend decades of their life just getting to the point of being able to do the work, and they even make mistakes, right? So, like, the reason we trust the algorithms that we do, the ones underlying cryptography, is not just because the how has been published, but because it's been reviewed, and in some cases, it's actually been proven in, like, mechanical theorem-proving kind of ways. 

In other circumstances, it's just stood the test of time, right? And so, short of being able to answer, do one-way functions exist, right, or, is P equal to NP, like it's really, really hard to know that you have a secure system or you have a system that truly is resistant to a Turing machine in a reasonable amount of time, or to a quantum Turing machine. 

And the idea that a company is selling a product based on kind of hiding the algorithm is comical. 

H.B.

Yeah, I can imagine. But yeah, I think that goes back to that whole idea that when people start talking quantum or AI, or, back in the day before LLMs, NLP and natural language processing, like, there's just a lot of washing that goes on where people are just applying it to everything and anything and just making shit up. 

The interesting thing on the AI side, though, that you brought up was the types of problems that it solves, similar to sort of how quantum solves, you know, problems that are in a quantum domain, and within a narrow set, there's an appropriateness. This whole AIOps situation, you were saying that some of the work from, like, folks like Palantir seemed interesting, but that it was isolated to certain use cases. Could you sort of explain that? 

Jasson

Yeah. So, I think it was Nelson who shared a video from Palantir. And they were showcasing kind of their applied use of LLMs as well as some other techniques in kind of a warfighter, DOD military sort of mode. And you know, without getting into all the specifics at a high level, you know, there's an incredible amount of information being surfaced to both kind of the soldier, right, in the field, as well as the combatant commander who's not in the field. 

And so, what they're doing, and at least what they were illustrating, is how do they use LLMs and other techniques to basically process all of this information, summarize it, and produce kind of actionable readouts in a way that makes sense and is much, much more efficient than having kind of a ton of human analysts do the same amount of work. 

And number one, it was fascinating to watch. Like, it honestly reminded me of kind of the old multiplayer MUDs. I forget what MUD even stands for now. But when we used to do the text-based MUDs back in the '90s, it was like, "Turn left. Do you see a light? Do you see a door? Open the door?" 

Because it was a free text interface to essentially interact with your data. It was kind of like, you know, your AI assistant who's synthesizing all the data and giving you summaries, and then you're saying, "Give me a course of action or give me a couple courses of action and give me the analysis on those actions, and then I'll choose one." And you know, a couple things immediately popped up. Number one, like, LLMs are not smart. 

LLMs are models that are trained on data sets. So, they clearly inherit the error of the data. They clearly inherit the bias of the data, right? So, that was kind of what I was getting at in terms of their...you really kind of have to understand how the data was trained to begin with. 

But it also reminded me a little bit of kind of my days when I was at a company called SecurityScorecard. And we had a similar problem. We were producing an incredible amount of data by essentially observing assets connected to the internet, both passive and active. And we had to try and make sense of that to understand, you know, what's going on inside of an organization, what likely belongs to an organization. 

And inherently, you're going to have error in this data, right? Like, these aren't perfect models because there's no discrete or deterministic system you're modeling off of. But at the end of the day, if I'm producing 1,000 data points that suggest a course of action and I have 10% error, that means 100 data points are wrong, right? And 900 data points are still suggesting that course of action, right? 

Like, it's still worth the course of action. Or, put another way, if I have a team of analysts that can't handle the deluge of data that's being dumped on them, and they really want to focus on what's going to be the most impactful, or what's going to represent the most amount of risk for them to dig into, a probabilistic solution that comes with error is still going to make their outcomes better. 

Now, clearly, you have to understand that error and track that error. But even in the case that I just outlined, right, those five analysts are able to chew through much more information that's pertinent to their company, that represents an outsized risk impact than them kind of being able to make the decisions on their own. 
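The triage arithmetic behind this argument is simple enough to write down. A quick sketch using the numbers from the example (1,000 data points, 10% error):

```python
# A probabilistic signal with a known error rate can still point clearly
# at a course of action -- the arithmetic from the example above.
total_points = 1_000
error_rate = 0.10

wrong = int(total_points * error_rate)   # 100 data points are wrong
supporting = total_points - wrong        # 900 still suggest the action
print(wrong, supporting)                 # 100 900
```

The point is not that the error disappears, but that a tracked, quantified error rate still leaves an overwhelming majority of the signal intact.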

So, there's absolute applicability. But again, these things, they're not...they're models. We shouldn't treat them any differently than we treat a curve that we fitted to some data points in class. It's a more sophisticated version of that. They're not intelligent, but they do help us synthesize a vast amount of data quickly under a quantified amount of error. 

H.B.

So, there are uses in, like, the AIOps space and everything that people are looking at, but it's not the panacea that people are making it out to be at the moment. 

Jasson

Well, it was kind of like zero trust two years ago, right? Like, if 90% of the booths in RSA have AI written on them, what is the meaning of writing AI on the booth? Right? That doesn't necessarily mean that some of them aren't true, and that doesn't mean that some of them don't have applicability. 

But yeah, RSA is kind of a noisy event, right? 

Joshua

Sure. 

Jasson

Like, there literally felt like there was a carnival barker a couple of rows away from us at one point. There was this one booth where someone's got a microphone and they're trying to give this talk, and 90 degrees adjacent from the woman giving the talk was some sort of game, like, Simon, right? Touch the colors as they light up with all sorts of... 

Joshua

Oh, I saw that. It looks very fun. 

Jasson

And he was playing the game really aggressively while literally four feet away from him this woman was trying to give this talk, and it's just madness. It's a madhouse. And this is what I mean by, like, the noise floor is high. 

H.B.

Nice, nice. I'm going to be forever imprinted with that image. It's like when you used to see people playing Dance Dance Revolution with a little bit too much zeal at an arcade. 

Joshua

It was exactly like that. I saw videos of that game, and you are exactly on the money on that. 

Jasson

Maybe it was just, "You know what, guys? This is Thursday of RSA. I don't care anymore. I'm going to do whatever." Maybe that's the thing. 

Joshua

So, was your takeaway that we need to wait a couple of years similar to the zero trust situation, and just see how things pan out, and in a couple of years, we'll have better AIOps the same way that now we're able to see a clearer picture around zero trust authentication for...? 

Jasson

I wouldn't say people need to wait a couple years, right? Stuff's happening now. But you know, when you're talking to vendors, like, you're going to find them based on marketing terms, they're going to try and get you to find them based on marketing terms. Once you're actually engaged in a conversation, just try and cut through it quickly, right? So, when a company's trying to pitch AI, right, independent of the technology, what business problem are they actually helping you solve, right? 

And if it's a human problem, like a human analyst-style problem, what is the quantifiable impact? What is the quantifiable error measure? How did they actually come up with that? What was their model trained on? Where is their training data coming from? Are they using a system where they even own their own intellectual property, or is it all based on either open source, Creative Commons-style rights, or some other company's rights that maybe they don't even understand? 

Right? The bar to create an interesting demo is really, really low. The bar to build a product that effectively uses a new technology to solve a business problem is not. 

Joshua

Sure. And that's also good advice the other way around for vendors. Like, you know, make sure that once you're engaged in those conversations with prospects, that you are also, you know, cutting through your own kind of marketing big words, and actually telling them how we can help and how we can solve the issue. 

Jasson

Yeah. And this is one of those things where I would just be careful, right? Your mileage may vary. Like, I understand I'm somewhere in the distribution of how humans behave, but I'm probably not in the middle. When people call on me, I usually give them just a few minutes, where their job is to very quickly plant a flag in the map of where they live in my ecosystem. 

And if they can't do that, I immediately just shut them off and move on about my day, right? And once I know where they live, then the assessment is...then it becomes two things. Are they solving a problem I actually care about right now? And if the answer is no, I shut them off. If the answer is no, but I have time and there's something interesting or something compelling, then maybe I'll let them go a little bit more. 

But assuming they're solving something I actually care about, then I need an actual conversation about what they're doing, not the marketecture, right? And it doesn't have to be an engineer from the company, but it needs to be a technically trained person who understands, you know, what's what. 

Joshua

Yeah. Awesome. Thank you so much, Jasson, for giving us your rundown of your experience at RSA and all the conversations that were going on. Fascinating. You know, again, this was the second one, since the world shut down. And so, it's kind of back into the swing of things. 

And it's bigger than ever apparently. Thank you so much, everybody, for listening to this episode of "Cybersecurity Hot Takes." Be sure to like, subscribe, share this episode everywhere. And what was Reece's phrase that she said? I'm trying... 

Jasson

I don't think you can repeat it. 

Joshua

Good riddance. 

Jasson

Yeah, good riddance, indeed. ♪ [music] ♪

Will the “A” in RSA Be Replaced by “AI”?

Phishing resistance in security solutions has become a necessity. Learn the differences between the solutions and what you need to be phishing resistant.

In this episode, Beyond Identity's CTO, Jasson Casey, shares insights and key takeaways from the RSA Conference 2023.

Transcription

Joshua

Hello, and welcome to "Cybersecurity Hot Takes." This is your producer, Joshua, once again, filling in for the amazing and incomparable, Reece Guida. May she return soon. And with me on the left side of my screen, I have... You don't know who's on the left? 

Just pick. 

Jasson

We have no idea. I'm going to go with H.B. He's on the left side of your screen. 

Joshua

He is on the left side of my screen. 

H.B.

Well, in that case, I'm H.B., and I do product strategy here at Beyond Identity. 

Joshua

Yes, you do. And on the right side of my screen, I have... 

Jasson

I guess that leaves me. I'm Jasson Casey. I'm the CTO here. 

Joshua

Exactly. Welcome, everyone. And for today's "Cybersecurity Hot Take," will the A in RSA be replaced with AI? Jasson Casey was recently at the RSA conference, as many of our audience was also. And there were a lot of hot topics and a lot of kind of the same conversations going on. 

And so, he's going to fill us in on what some of those were, and we are going to discuss them. Jasson, how was RSA? 

Jasson

So, good show. The level of people there was probably the same level as 2020 before COVID, or the zombie apocalypse shut the world down. So, that was interesting. You know, as is my experience from before, RSA is generally...it's a very mixed show, right? 

You spend a lot of time with partner companies, not just prospects and customers. But I think it was good all around. Thematically, the stuff that was on the floor, I mean, clearly AI's big theme, right? It's, you know, hastily stenciled in paint and toothpaste on everyone's signage. Other big themes that seem to be consistent, attack surface or attack surface mapping or attack surface management. 

It seems like I only saw a few deception companies. A while back, it seemed like everyone had a deception thing, MDR, ITDR, XDR. Basically, if you had a R, it was a big deal and lots of people were putting R's at the end. Honestly, I didn't see a lot of new novel things in my mind. 

The only thing that I can say that really stands out as interesting and novel, and I don't know if it's fair because I kind of ran into them before RSA, was kind of an enterprise secure browser which was really, really interesting. There's a couple people in the space, but the company I ran into was called the Talon Security. 

But yeah, overall, it was a good show. It was worth going to, at least for me and our group. 

Joshua

That's awesome. Wish I could have gone, but maybe next year. There's always next year, you know? So, when it comes to, you know, kind of seeing so many of these people talk about the same things, which ones do you think were like either... could you see being the most impactful? 

Do you think any of them really had any staying power to, like, change this landscape as we know it, or...? 

Jasson

Well, that's just it. Like, I think that a lot of the categories are categories that existed for a while, right? XDR... 

Joshua

Sure. 

Jasson

MDR are natural evolutions, I think, of just EDR and kind of more modern antivirus. So, anyone coming into that space, you know, how in the world are you going to compete against the big players that are kind of already there? Right? You know, whether it's the recent adults like CrowdStrike and SentinelOne, or the graybeards like Palo and whatnot. 

That's the tricky thing, right? Like, how do you go in as anything other than a feature that they may just kind of buy? I guess I forgot the theme on zero trust. Clearly, a lot of people were talking about zero trust. 

So, the reaction that I got from a lot of customers on zero trust really falls into one of two buckets. One set of customers' prospects is burnt out on the word because much like AI of this year, it's a hastily appended phrase that generates organic search metrics for marketing. Maybe that's too cynical. 

But certainly, a large group of the markets kind of burned out and saying you're zero trust because they've heard everyone say they're zero trust. 

Jonathan

Sure. 

Jasson

But I did have a lot of really interesting discussions with a bunch of CISOs that really just boiled down to, at the end of the day, if you really want to make progress on prevention, right, not just detection and response, you kind of have to break down these siloed walls between identity, device trust, and device identity. And you know, traditionally, we have these silos, right, for identity, for endpoint security, endpoint trust, endpoint configuration management, and, you know, they only kind of join, or the information between those systems only comes together at the detection response phase, generally through SM and SEM analytics, maybe some other ways as well. 

But if you start to organize your thinking, and if you can somehow break down the walls from this vertical organization to a horizontal organization, right? If I join device and user identity, if I join device trust, really all three of those things and make those access context or access concerns, and I want to make decisions about access using that information, then I'm kind of moving the needle out of detection response and into more prevention. 

And almost all of my conversations really kind of centered around that. Now, granted, that might be a self-selection bias as well given, you know, that's kind of what we do. But that was certainly some of the more interesting conversations I had. 

H.B.

Were a lot of people more comfortable with their zero-trust strategies and what they were specifically looking for? Because I know that user and device trust hasn't typically been sort of central to the conversation in these kind of, like, conference settings. 

Jasson

I think, you know, obviously, everything's a life cycle. Customers are no different, right? So, different enterprises are at different phases of their own kind of maturity and growth. There are people that are far into the curve. They know exactly what they want. 

They've been working with zero-trust concepts before zero trust was a marketing term. And I think a good example there would be... Well, actually, I don't know if I can really name names on some of these things. But there were a couple CISOs that we spent time with where they knew exactly what they wanted. In their words, they called it device trust and device identity, and linking that to user identity. 

And they had been asking for this for a while from their CASB providers. But their CASB providers couldn't wrap their heads really around what the customer was asking for. And it's fundamentally different than kind of a CASB approach anyway. And so, very, very articulate, knew exactly kind of the crux of the problem. 

Then you've got people mid-journey where they know it's important, they know they need to get involved, but they don't really understand or they don't have their own strategy in place. Like, what's the first step for them in a zero-trust journey? And then I'd say there's the third group, which is, you know, maybe just earlier in the education side. 

H.B.

So, a lot of it was anchored on people who previously had been hoping for this to come from CASBs, from cloud access security brokers? 

Jasson

I mean, so yeah. One dinner, I spent quite a bit of time with a CISO on that particular topic. And... 

H.B.

I was just surprised because I haven't heard CASBs talked about in a favorable tone in quite a few years now. So, I was kind of curious. 

Jasson

Well, in his mind, it had to do...I think it had to do with footprint, right? So, where does it make sense? Which honestly kind of how we got here as well, right? Where does it make sense to place certain, you know, network functions or system functions? And if I already have footprint established, right, in the point in cloud, clearly it's best if I can establish those functions in that existing footprint, right? 

Who wants to manage a bazillion agents? Who wants to manage a bazillion different services? So, I think that's kind of what brought him to that point. He had a good relationship with this CASB vendor. He saw them as kind of leading the area, and he was just kind of asking them kind of, why not? 

H.B.

So, does that mean that like he's still seeing himself as being a CASB user just via, like, a SASE or SSE, and now he's just understanding that the SASE platforms and the SSE platforms need external support from an identity solution to be able to serve user and device trust? 

Jasson

So, you know, our conversation went all over. But I would say, in his mind, at the end of the day, his focus is less on checking the box for all the acronymed products and more on, can he definitively say that for all access, he actually understands the risk of the device, the identity of the device, the identity of the person relative to what they're trying to do? 

You know, that solution, everything doesn't have to be micro-segmentation for that problem, right? Everything doesn't have to be SASE for that problem. So, I think he was much more kind of results-driven, and the tools necessary for various parts of his workforce would be what was brought to bear. 

It wasn't kind of like he had his Bingo card where he was trying to check off the six steps of SASE as prescribed by product marketing. 

H.B.

So, most importantly, how did he want us to shoehorn GPT into the equation? 

Jasson

You know, we didn't actually... That night, we didn't talk GPT at all, we didn't talk AI at all. I mean, I know I certainly feel burnt out a little bit on the topic from a marketing and tech bro pump perspective. 

You know, it's an interesting technology and it certainly has its applied uses. But ChatGPT and large language models don't solve all problems. They solve a couple of problems. Anyway, it wasn't a large topic of discussion. 

H.B.

It reminds me a little bit about quantum computer hype over time, that people just imagine that a quantum computer is also magically a classical computer. 

Jasson

Oh, shoot. You know what? I totally forgot there was a standout at RSA. And it's a good thing I don't remember their name because it's not...what I have to say is not good. 

Joshua

Oh, no. 

Jasson

They were advertising themselves as post-quantum resilient. It's like, oh, cool, post-quantum resilient software. You know, it's a real thing, right? NIST is working on certifying these post-quantum or quantum-resistant algorithms. Maybe it's like a library company, right? So, you know, we clearly need to be up to date and be algorithm-agile and know all these players. Let me go talk to them. 

And so, they started it off as like, "Well, our approach is really kind of based on symmetric encryption and whatnot because like with good key and good pseudo-random number generators and whatnot, you can...with things like a strong AES, like, you actually can have quantum resilient functions." And in the back of my mind, this kind of made sense, right? 

Like, one-time pads are perfectly secure and whatnot. But then the obvious question then becomes, well, the whole thing with a symmetric algorithm is you need...you know, you need the same key on both sides. So, it kind of begs the key distribution problem, right? Like, how do I generate a key and pass it? Or, how do I synchronize on something securely that then generates the same keys, right? 

The problems are probably reducible to each other. Again, like, I'm not actually a cryptographer. But you know, I know a few, therefore, I'm dangerous. It's like the holiday and defense. But he started going on about how well that's their secret sauce, and that's really where they cut their mustard, and it's like, "Oh, cool. You know, I've got a really long flight on Friday. Can you point me to a couple of the academic papers that this is based on? I'd love to read it on my flight home." 

And that is where I realized I was kind of talking to essentially a bullshit company. And so, to the best of my knowledge, and they're doing some sort of seed... They have some sort of consensus algorithm that achieves synchronization on a seed value that they use to kind of seed symmetric generators that then generate the same key on both sides. 

[H.B.] Spake and speak algorithms are well established in the space. So, like, I think your question to ask for some sort of academic provenance seems reasonable. 

Jasson

And so, honestly, what sent up my Spidey sense is when I asked him for the papers, he's like, "Well, it's a proprietary algorithm. We don't actually talk about how we do that." And it is like, "Oh, I get it. That's the kind of company you are." 

H.B.

Even with open spake and speak algorithms, like, I remember when I was at Aruba, there was a whole bunch of controversy around Dragonfly. You know, you get, like, within 50 yards of a NIST person while working on any of this stuff, and all of a sudden, you're a government agent. But... 

Jasson

I mean, here's the thing, right? And we're all taught this in, like, the first class we ever take in this world, is at least related to security, is, like, keeping a thing secret, keeping the how, the algorithm secret is not a defense. This is an area that does involve actual mathematics and a level of rigor far beyond what I can do. But like, it's a real discipline. 

It's a real specialty. People spend decades of their life just getting to the point of being able to do the work, and they even make mistakes, right? So, like, the reason we trust the algorithms that we do that underlying cryptography is not just because the how has been published, but because it's been reviewed, and in some cases, it's actually been proven in, like, mechanical kind of improving ways. 

In other circumstances, it's just stood the test of time, right? And so, short of being able to answer, do one-way functions exist, right, or, is P equal to NP, like it's really, really hard to know that you have a secure system or you have a system that truly is resistant to a Turing machine in a reasonable amount of time, or to a quantum Turing machine. 

And the idea that a company is selling a product based on kind of hiding the algorithm is comical. 

H.B.

Yeah, I can imagine. But yeah, I think that goes back to that whole idea that when people start talking quantum or AI, or back in the day before LLM, NLP, and natural language processing, like, there's just a lot of washing that goes on where people are just applying it to everything and anything and just making shit up. 

The interesting thing on the AI side though, that you brought up, was the types of problems that it solves similar to sort of how quantum solves like, you know, problems that are in a quantum domain, and within a narrow set, there's an appropriateness. This whole AIOps situation, you were saying that some of the work from, like, folks, like, Palantir seemed interesting, but that it was isolated to certain use cases, could you sort of explain that? 

Jasson

Yeah. So, I think it was Nelson shared a video from Palantir. And they were showcasing kind of their applied use of LLMs as well as some other techniques in kind of a warfighter kind of DOD military sort of mode. And you know, without getting into all the specifics at a high level, you know, there's an incredible amount of information being surfaced to both kind of the soldier, right, in the field, as well as the combatant commander who's not in the field. 

And so, what they're doing, and at least what they were illustrating, is how do they use LLMs and other techniques to basically process all of this information, summarize it, and produce kind of actionable readouts in a way that makes sense and is much, much more efficient than having kind of a ton of human analysts do the same amount of work. 

And number one, it was fascinating to watch. Like, it honestly reminded me of kind of the old multiplayer MUDs. I forget what MUD even stands for now. But when we used to do the text-based MUDs back in the '90s, it was like, "Turn left. Do you see a light? Do you see a door? Open the door." 

Because it was a free text interface to essentially interact with your data. It was kind of like, you know, your AI assistant who's synthesizing all the data and giving you summaries, and then you're saying, "Give me a course of action, or give me a couple courses of action and give me the analysis on those actions, and then I'll choose one." And you know, a couple things immediately popped up. Number one, like, LLMs are not smart. 

LLMs are models that are trained on data sets. So, they clearly inherit the error of the data. They clearly inherit the bias of the data, right? So, that was kind of what I was getting at in terms of their... you really kind of have to understand what data the model was trained on to begin with. 

But it also reminded me a little bit of kind of my days when I was at a company called SecurityScorecard. And we had a similar problem. We were producing an incredible amount of data by essentially observing assets connected to the internet, both passively and actively. And we had to try and make sense of that to understand, you know, what's going on inside of an organization, what likely belongs to an organization. 

And inherently, you're going to have error in this data, right? Like, these aren't perfect models because there's no discrete or deterministic system you're modeling off of. But at the end of the day, if I'm producing 1,000 data points that suggest a course of action and I have 10% error, that means 100 data points are wrong, right? And 900 data points are still suggesting that course of action, right? 

Like, it's still worth the course of action. Or put another way, if I have a team of analysts that can't handle the deluge of data that's being dumped on them, and they really want to focus on what's going to be the most impactful, or what's going to represent the most amount of risk for them to dig into, a probabilistic solution that comes with error is still going to make their outcomes better. 

Now, clearly, you have to understand that error and track that error. But even in the case that I just outlined, right, those analysts are able to chew through much more information that's pertinent to their company, that represents an outsized risk impact, than they would be able to on their own. 

So, there's absolute applicability. But again, these things, they're not...they're models. We shouldn't treat them any differently than we treat a curve that we fitted to some data points in class. It's a more sophisticated version of that. They're not intelligent, but they do help us synthesize a vast amount of data quickly under a quantified amount of error. 
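The back-of-the-envelope math in that argument can be sketched out directly. This is a minimal illustration of the point above, not anything from SecurityScorecard or Palantir; the function name and numbers are just for demonstration.

```python
# Sketch of the triage argument: a probabilistic signal source with a
# known, tracked error rate still narrows the field for analysts,
# because the expected-correct majority keeps pointing the same way.

def split_signals(total_signals: int, error_rate: float) -> tuple[int, int]:
    """Split a batch of signals into (expected correct, expected wrong)."""
    wrong = round(total_signals * error_rate)
    correct = total_signals - wrong
    return correct, wrong

correct, wrong = split_signals(1_000, 0.10)
print(f"{correct} signals still support the course of action; {wrong} are noise.")
# With 1,000 data points and 10% error, ~100 are wrong and ~900 still
# suggest the same course of action -- the recommendation holds.
```

The design point is exactly the one made above: the error has to be quantified and tracked, but it doesn't have to be zero for the output to beat an overwhelmed human team.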

H.B.

So, there are uses in, like, the AIOps space and everything that people are looking at, but it's not the panacea that people are making it out to be at the moment. 

Jasson

Well, it was kind of like zero trust two years ago, right? Like, if 90% of the booths in RSA have AI written on them, what is the meaning of writing AI on the booth? Right? That doesn't necessarily mean that some of them aren't true, and that doesn't mean that some of them don't have applicability. 

But yeah, RSA is kind of a noisy event, right? 

Joshua

Sure. 

Jasson

It literally felt like there was a carnival barker a couple of rows away from us at one point. There was this one booth where someone's got a microphone and they're trying to give this talk, and 90 degrees adjacent from the woman giving the talk was some sort of game, like Simon, right? Touch the colors as they light up, with all sorts of... 

Joshua

Oh, I saw that. It looks very fun. 

Jasson

And he was playing the game really aggressively while literally four feet away from him this woman was trying to give this talk, and it's just madness. It's a madhouse. And this is what I mean by, like, the noise floor is high. 

H.B.

Nice, nice, I'm going to be forever imprinted with that image. It's like when you used to see people playing Dance Dance Revolution with a little bit too much zeal at an arcade. 

Joshua

It was exactly like that. I saw videos of that game, and you are exactly on the money on that. 

Jasson

Maybe it was just, "You know what, guys? This is Thursday of RSA. I don't care anymore. I'm going to do whatever." Maybe that's the thing. 

Joshua

So, was your takeaway that we need to wait a couple of years similar to the zero trust situation, and just see how things pan out, and in a couple of years, we'll have better AIOps the same way that now we're able to see a clearer picture around zero trust authentication for...? 

Jasson

I wouldn't say people need to wait a couple years, right? Stuff's happening now. But you know, when you're talking to vendors, you're going to find them based on marketing terms; they're going to try and get you to find them based on marketing terms. Once you're actually engaged in a conversation, just try and cut through it quickly, right? So, when a company's trying to pitch AI, right, independent of the technology, what business problem are they actually helping you solve, right? 

And if it's a human problem, like a human analyst-style problem, what is the quantifiable impact? What is the quantifiable error measure? How did they actually come up with that? What is their model trained on? Where is their training data coming from? Are they using a system where they even own their own intellectual property, or is it all based on either open-source, Creative Commons-style rights, or some other company's rights that maybe they don't even understand? 

Right? The bar to create an interesting demo is really, really low. The bar to build a product that effectively uses a new technology to solve a business problem is not. 

Joshua

Sure. And that's also good advice the other way around, for vendors. Like, you know, make sure that once you're engaged in those conversations with prospects, you are also, you know, cutting through your own kind of marketing buzzwords, and actually telling them how you can help and how you can solve the issue. 

Jasson

Yeah. And this is one of those things where I would just be careful, right? Your mileage may vary. Like, I understand I'm somewhere in the distribution of how humans behave, but I'm probably not in the middle. When people call on me, I usually give them just a few minutes, where their job is to very quickly plant a flag in the map of where they live in my ecosystem. 

And if they can't do that, I immediately just shut them off and move on about my day, right? And once I know where they live, then the assessment is...then it becomes two things. Are they solving a problem I actually care about right now? And if the answer is no, I shut them off. If the answer is no, but I have time and there's something interesting or something compelling, then maybe I'll let them go a little bit more. 

But assuming they're solving something I actually care about, then I need an actual conversation about what they're doing, not the marketecture, right? And it doesn't have to be an engineer from the company, but it needs to be a technically trained person who understands, you know, what's what. 

Joshua

Yeah. Awesome. Thank you so much, Jasson, for giving us your rundown of your experience at RSA and all the conversations that were going on. Fascinating. You know, again, this was the second one since the world shut down. And so, it's kind of back into the swing of things. 

And it's bigger than ever, apparently. Thank you so much, everybody, for listening to this episode of "Cybersecurity Hot Takes." Be sure to like, subscribe, and share this episode everywhere. And what was Reece's phrase that she said? I'm trying... 

Jasson

I don't think you can repeat it. 

Joshua

Good riddance. 

Jasson

Yeah, good riddance, indeed. ♪ [music] ♪

