Security Analogies Are Misleading
Informal security chat with Beyond Identity's CTO Jasson Casey, Head of Global Sales Engineering HB, and our host, Marketing Empress Reece Guida, on whether security analogies are overly simplistic and can lead to misunderstandings.
Hello. And welcome to another episode of Cybersecurity Hot Takes. It is me, the Marketing Empress, your host, Reece Guida. Today, I am joined by...
I'm Jasson Casey. I'm the CTO here at Beyond Identity.
I'm HB. I run Global Sales Engineering for Beyond Identity.
Excellent. We are here today to discuss a blog that Phil Venables, the CISO of Google Cloud, published back in June, called "Are Security Analogies Counterproductive?" His thesis is that security analogies can be good, but a lot of the time they're overly simplistic and can lead to misunderstanding. So, what's your reaction to this blog, guys?
I want to know what happened to cause him to write the blog, because it feels... As you mentioned off camera, it feels personal, it feels situational. Was it a talk that annoyed him? That's a lot of energy, to put a blog out based on a talk that annoyed him.
Yeah, it's at least a thousand words, by the way. I think I'm going to comment and ask him what the heck happened.
Or did someone do something, like, take physical action based on, kind of, a simple model of the world that resulted in, like, I don't know, an outage or an incident of some sort? I'm just curious what's the story behind the story? It's not a strange blog. It's just...I wouldn't have expected that.
Yeah, yeah. What do you think, HB?
I spend a lot of time on Twitter these days trying to keep up with the world and follow various topics and threads that are interesting. And some of the stuff that Twitter has been promoting the most is these simplistic mental-model threads, you know, where people come up and write, "I learned this," or, "X person did Y," and look at all the things that were important to their approach and progress.
Yeah, those threads have become really popular. And I remember when journalists first started getting really upset about the BuzzFeed listicle kind of approach to news. And then Axios got funded, and Axios basically makes it into a full-fledged business model for professional journalism to simply be bulletized.
And I think that just comes down to the fact that, in a very complex and busy world, not everyone has time or resources to become an expert on every topic. Analogistic thinking is just a really useful technique, and we need things refined and filtered for us.
The challenge is that a lot of the people who purport to be experts and try to come up with these mental models aren't the elite of the world, right? It's not like you're getting the Bill Gates or Warren Buffett or Jeff Bezos of the world.
Unfortunately, tons of other people do it for them, and when you look at the nuance of it, it can often get overly simplistic. But I think analogistic thinking has a lot of value as well. When I was a kid, I learned most of what I learned about physics from listening to cassette tapes of Richard Feynman.
And almost everything that he related about the physical world and the laws of physics began with very simple observations and comparisons: talk about how things jiggle, and how jiggling results in things like fire.
Wow, my worldview just changed in an instant.
Look, all of that stuff is really great, but you just have to have a great person doing the analogies, right? There's a huge difference in quality.
Yeah. Well, I think analogies are inherently fun and kind of playful. And I think Jasson was getting at that earlier: Phil Venables seems to think that we shouldn't treat security like a toy, like something we can play with, or else we might get hurt by the consequences. And given the circumstances, yeah, I see where he's coming from. But there's a talent shortage in cybersecurity, and a lot of young people are going to have to fill those roles.
And I think that analogies are a really friendly way to get people on board with complicated ideas. And if they're hooked by that idea, they'll explore it more and learn it in literal terms, not just through analogistic thinking.
The flip side to that is you're not always working or speaking with people that are in your domain or wish to be in your domain, but there's some, sort of, interdependency. You need them, they need you, blah, blah, blah. And demanding things from each other isn't really a long-term viable relationship. So explaining some sort of context usually requires you to meet that person where they are and what their current mental model is, which invariably is going to invoke the analogy.
But coming back to something Husnain was saying, I didn't really think about it that way. But I do wonder if there's maybe an explosion of analogies being used by a certain class of folks that's more of the shortcut-thinking mentality, right? Like the Twitter threads that you were describing, right?
Like, "Hey, here's my 10-tweet tweetstorm on how to be excellent at X." And my reaction is, "Never heard of you. I've never heard of your companies. I'm not exactly sure what your track record is." And we looked at one this morning, and the premise was, "For every dollar you raise from a VC, if you don't plan...you need to make sure that you can return at least ten. Otherwise, it's not even worth doing."
And I'm thinking, that makes no sense whatsoever. From a VC's perspective, sure, they're looking for a 10x return. But from a founder's perspective, when you raise money, it comes with terms: it comes with preferences, it comes with multiples. Whether you're actually able to bring a return to employees and founders is highly dependent on all of those variables, and also on the time it takes you to get to some sort of liquidation or exit.
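To make that point about terms concrete, here is a hedged sketch of how a liquidation preference changes who gets what at exit. The dollar figures and the simple 1x non-participating preference are illustrative assumptions, not terms discussed in the episode.

```python
def investor_proceeds(exit_value, invested, ownership, pref_multiple=1.0):
    """Non-participating liquidation preference: the investor takes the
    greater of the preference (multiple * invested) or their pro-rata
    share, capped at the total exit value."""
    preference = pref_multiple * invested
    pro_rata = ownership * exit_value
    return min(exit_value, max(preference, pro_rata))

def common_proceeds(exit_value, invested, ownership, pref_multiple=1.0):
    """What is left for founders and employees after the preference."""
    return exit_value - investor_proceeds(exit_value, invested, ownership,
                                          pref_multiple)
```

With a hypothetical $10M invested for 20% at a 1x preference, a $30M exit pays the investor their full $10M (not their $6M pro-rata share), leaving $20M for common stock; only above a $50M exit does the pro-rata share exceed the preference. That sensitivity to terms is exactly why "return at least 10x or it's not worth doing" is too simple a model.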
And honestly, you probably could have made a good explanation of that in your 10 tweets. But the thing was really just an assumption at the beginning that if you don't do one to 10, it's not viable for anyone. And then it was a bunch of follow-on statements about something else.
So, I could believe that maybe this thread of what-are-the-shortcuts, how-do-I-take-shortcuts-to-knowledge-and-experience may have been the instigator of the post, too, because I certainly get frustrated by that sort of thing.
Well, you know, gentlemen, in life, there are no shortcuts. Now that I've spat that wisdom, I want to conclude this episode by asking what are some security analogies that stand out in your mind. I'll go ahead and give one. We talk about how there is no perimeter anymore and identity is the new perimeter. And we always compare it to a castle-and-a-moat. And I just think that's hilarious and silly because it's a very antiquated building that we're using to invoke modern interwoven technologies.
What do you guys think are some analogies that pop into your head when talking about security?
That analogy is pretty popular. And actually, I think it's transcended being an analogy and turned into a real thing, right? Because when you look at security architectures of the '90s and the 2000s, they were designed around this concept that there's a perimeter, there are egress points, and there are well-defined DMZs where you can essentially police things like lateral movement.
And actually, I don't know if a city is really that bad of an analogy when you're thinking about what is the thing you have to protect because in the city there aren't small egress points. Everything is highly connected. And in a computer system, ultimately, you're really just talking about data and data movement over connections.
And there's this thing in computer science called data flow, which is really useful when you're designing processors, building distributed algorithms, building compilers and whatnot, and it's also really useful as an analysis tool. If you can map out how data flows, you can understand that every point where the data moves, and every point where the data resides, is an opportunity for someone to access or read data that they shouldn't.
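That mapping exercise can be sketched in a few lines. This is a toy model with made-up node names, assuming data flow in one direction per edge; real data-flow analysis tracks much richer state.

```python
# Toy data-flow map for a hypothetical service: keys are places data
# resides, values are where that data moves next. All names are made up.
flows = {
    "browser": ["api_gateway"],
    "api_gateway": ["app_server"],
    "app_server": ["database", "audit_log"],
    "database": [],
    "audit_log": [],
}

def exposure_points(flows):
    """Every node (data at rest) and every edge (data in transit) is an
    opportunity for someone to access data they shouldn't."""
    at_rest = set(flows)
    in_transit = {(src, dst) for src, dsts in flows.items() for dst in dsts}
    return at_rest, in_transit
```

Even this tiny map yields five at-rest locations and four transit hops to protect, which is the point of the city analogy: the attack surface is the whole graph, not a single gate.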
So, I guess I'm kind of giving you a roundabout way of saying that maybe the city analogy isn't necessarily so bad at describing the problem. The falsity is people not quite realizing how many orifices cities have.
I think that's a great one. So when you look at castle-and-moat, yeah, you can argue whatever you want about castle-and-moat, but firewall concepts emerged in the early-'90s time frame, with NATs and all those kinds of things. The natural transition from that was that people started talking about zero trust.
And in many ways, zero trust is a mechanism for having, kind of, analogistic thinking, that there's this idea of implicit complete trust and then zero trust. And is it truly zero trust? No, that's an extremely simplified mental model, and we don't actually ever get to a zero trust kind of model.
And because of this, people can misrepresent it, right? The mental model being so vague allows people to take a conventional castle-and-moat architecture, like a VPN, and pretend a VPN is suddenly a zero-trust product. I think that's where there's a lot of risk in overusing these things. But to your point about new people coming in, the spectrum of people in cybersecurity runs from people like Jasson and me, who have spent 25 years thinking about network security, to someone who just completed a 400-hour boot camp that promises a certified-ethical-hacker certificate or some sort of security-professional credential.
From 400 hours to 25 years of experience, you're just going to have a huge, huge gulf in terms of ability to understand things. I think appreciating where a heuristic is useful and where analogistic thinking is useful and not over-indexing on it for purposes of decisions and actions is super important. It's a tool for orienting yourself, right?
Like, if you look at Observe-Orient-Decide-Act, the Boyd Loop stuff, this is just a mechanism for people who are severely disoriented to be quickly helped in orientation by people who understand the topic better. But the issue of deciding and acting, like that decision stage, should probably be a little bit more intentional.
And that's probably where Phil is running into his heartburn and seeming anger on this topic, so.
Okay. Mic drop, HB. That's a way to end the episode. So, listeners out there, as always, like and subscribe. But please, with your analogies, use them, have fun with them, but be careful when you decide and act accordingly.
We will see you for our next episode. Who knows what we're going to talk about. Hopefully, it'll be fun and entertaining. Thanks for listening. See you next time.