Device Integrity Checks
Informal security chat with Beyond Identity's CTO Jasson Casey, Founding Engineer Nelson Melo, VP of Global Sales Engineering Husnain Bajwa, and our host Marketing Empress Reece Guida on how end-user decisions with authentication can negatively impact your security architecture.
Hello, everybody. Welcome to "Cybersecurity Hot Takes." I'm your host, Reece Guida.
And I'm Jasson Casey, the CTO and not a Muppet.
I'm Husnain Bajwa, usually known as HB, and I run our global sales engineering.
And I'm Nelson and I'm the founding engineer.
So, guys, we were just having a chat about how end-user decisions with authentication can negatively impact your security architecture. I think Jasson has pretty strong feelings about this, so I'll let him elaborate.
So the lead-in was the recent story on Krebs. A government worker...actually not a government worker, I think it was a contractor, a federal government contractor, went to buy a PIV card reader off Amazon, and lo and behold there was malware in the tool he bought to securely authenticate into his work environment.
For those of you who don't know, PIV and CAC cards are an authentication standard that the government requires. It uses asymmetric crypto and keeps private keys in a very specific piece of hardware, so it introduces, essentially, a token, something that you have. There's usually a PIN code and a password in that setup. But what's interesting, and I'm kind of tying it back to Reece's comment from the other day, is this concept of usability, or UX, in security, and what role UX plays in security architecture and security solutions.
And the central premise that we've been talking about, actually, for a couple of years, is that it has a critical role. What we mean by that is if you do not consider the usability of your security architecture and your security solution for your end users, regardless of tools, your users are going to create your next vulnerabilities. And this is a perfect example, right? Here's a security solution or architecture that enables choices by the end user. And choices by the end user aren't necessarily bad, but if the wrong choice by the end user ends up in some sort of degraded security state, it's not a good architecture, right? That's the high-level litmus test. And that's also why we think passwords are bad, right? Because so much choice is left up to the end user, and what's easy and usable for them is not secure.
But there are many other facets to that discussion as well. So just pulling it back, this was spurred by the PIV/CAC card reader with malware incident that we read about. An end user shouldn't be able to choose so poorly that they compromise an architecture. And I do think that is a failure to consider usability, and the full life cycle, when you're designing solutions and architectures.
I think it's a common problem too because it shows up in all sorts of areas where we get into incremental improvements. A little thing here, a little thing there, and we've essentially created a solution around a whole bunch of point tools that perhaps don't fit together as well as we had originally anticipated or hoped. And this is why it's really important to also have a holistic assessment of your solution and security architecture. When you start off with something weak, like usernames and passwords, and you bolt on a 2FA or MFA solution, and over time it evolves from TOTP, to SMS, to other kinds of approaches, like email magic links, whatever it might be, there's a tendency to look at the changes at each tool level incrementally and only tackle those.
Similarly, there's this biometric supply chain problem. People start off thinking, you know, it'd be great to use Windows Hello infrastructure, it'd be great to use PIV/CAC card readers, all sorts of other USB accessories, where people might need to buy something off of Amazon.
It's hard to choose which one makes sense and it feels like a small part of the problem. And so, I think, people are going to have to start really thinking about this in terms of, like, a holistic solution perspective.
Do you guys have a sense if this person was just going off script and picking something because they didn't bother to read what the agency or whoever had specified for them to buy or is it just they don't specify and they just say, "Go get a PIV reader."
Well, I mean, these things are based on common protocols, but the problem is...so the details of their specific environment, I think, we are a little bit removed from, right? We just have access to the few articles that have been written about it. But I think the real thought here is whether you're thinking about the full life cycle of access. If you're really thinking about the choices that your end users can make, you're making sure that those choices are all secure choices. And if you're going to let your users bring their own hardware, right, that's clearly happening. It's happening with BYOD, it's happening during COVID because of supply chain issues. It turns out supply chain issues may outlast COVID, so bring your own hardware is a thing, right?
So, if you want secure access, you have to consider the integrity of the hardware and the software that it's running. And there are solutions that are based on standards, and that have well-accepted methods of trust, that do solve that problem, right?
The one that everybody has probably heard about, but may not realize how it works, is the TPM protocol, or the specifications that came out of the TCG, right? The Trusted Computing Group. There is a root of trust, there is a bootstrap process, there is a sequence of measurements when the hardware is powered on, from where the hardware loads the bootloader, to where it loads its firmware, to where it loads its little micro-OS or whatever is actually running the applications, where you can get a trail of evidence of the integrity of not just the hardware, but the software that it's running.
Signed by something that roots back to your trusted root, right? Like, do we trust Infineon to make the TPM properly, right? Do we trust this particular signature from Microsoft on their particular version of the Windows OS? Do we trust this particular app developer? Like, those are things that can be mechanized, and in fact, have been mechanized. And TPM is probably the best example of it.
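The measurement chain described above can be illustrated with a minimal sketch. This models the TPM-style "extend" operation, where each boot stage's hash is folded into the previous register value; the stage names here are hypothetical stand-ins for the real binaries a TPM would measure:

```python
import hashlib

def extend_pcr(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style PCR extend: new value = SHA-256(old PCR || measurement)."""
    return hashlib.sha256(pcr + measurement).digest()

# Hypothetical boot stages; a real TPM measures the actual code images.
stages = [b"bootloader-v1.2", b"firmware-v7", b"micro-os-kernel"]

pcr = b"\x00" * 32  # PCRs start zeroed at power-on
for stage in stages:
    pcr = extend_pcr(pcr, hashlib.sha256(stage).digest())

# The final PCR value changes if any stage is altered or reordered, so a
# verifier comparing it against a known-good value detects tampering.
print(pcr.hex())
```

Because each value depends on everything measured before it, no later stage can "un-extend" an earlier measurement, which is what makes the trail of evidence trustworthy.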
So that's, I guess, a better example of how you enable choice while still having a responsible security architecture. It's by remembering, you know, the trinity of security, right? Privacy, authenticity, and integrity. A lot of people don't really focus on integrity. They kind of leave it off the table, but this is exactly what was violated here, right? The person complied in terms of getting something that was standards-compliant; it just turned out it had a little extra. We have tools to measure "extra," and those tools were given to us a long time ago. In fact, they probably came out of the world of signal processing and information theory, where they were used to try and clean up errors, but it turns out we can also use them to understand when somebody is mucking with things.
Yeah. What you're saying, Jasson, about integrity as a basis and a basic of good security reminds me of an anecdote one of our customers shared a couple of weeks ago. He was relieved that, because of the device integrity checks we provide, he doesn't have to rely on or worry about the attentiveness of his own users. I think a lot of orgs make assumptions about the configurations of laptops and endpoints in their own environments, and there's a sprawl of configurations. Like, you might think that the firewall is on or that the disk is encrypted, but do you really know? Can you enforce that in a seamless way that doesn't ask anything of the users?
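A device posture check like the one just described can be sketched as a simple policy evaluation. The posture fields and policy below are hypothetical; a real agent would query the OS for these signals (firewall state, disk encryption, patch level) rather than receive them in a dict:

```python
# Hypothetical posture policy; a real agent collects these signals from the OS.
REQUIRED_POSTURE = {
    "firewall_enabled": True,
    "disk_encrypted": True,
    "os_patched": True,
}

def evaluate_posture(device: dict) -> list[str]:
    """Return the list of policy violations; empty means access may proceed."""
    return [
        key
        for key, required in REQUIRED_POSTURE.items()
        if device.get(key) != required
    ]

laptop = {"firewall_enabled": True, "disk_encrypted": False, "os_patched": True}
violations = evaluate_posture(laptop)
if violations:
    # Enforced mechanically at authentication time, not left to the user.
    print("deny access:", violations)
```

The point of the sketch is where the check runs: at every access decision, so a drifted configuration is caught and blocked without asking anything of the user.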
So, I think, like most things in life, just taking stock and trying to simplify is always a good solution and it takes the burden off of your users, which is important because they have actual jobs to do.
System admin, network admin, security design. These are all complex engineering tasks. And I'd say most of the people listening to this have probably worked with an engineer or are an engineer and have written software or have worked with people who've written software. And so the litmus test is, how many times have you heard someone say, "It's good, it compiles."? And usually, these are your more junior people, right? And they kind of assume it's doing what it says. Or maybe you've delved into the world of C, writing C programs, or even assembly programs. How many times have you written something that executes, but it turns out it executes nonsense, right?
We have strongly typed languages for a reason. We have deeply instrumented runtimes for a reason. The reason is humans make mistakes. Some of us are better than others at being mechanistic, all right? Like, some of you probably have friends who are human compilers. Even they have an error rate; maybe it's 1 in 10,000. And when you're doing sysadmin work, when you're doing network admin, when you're doing software development, if you turn the crank 10,000 times and your error rate is 1 in 10,000, you should expect to introduce an error.
And most of us turn the crank more times than that. So, you can't eliminate all sources of error, but everywhere where you can employ a mechanistic technique to prevent error from going out in the first place, and then for detecting it and responding to it quickly if you can't, you should, right? That shouldn't be on your user.
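The back-of-the-envelope arithmetic behind that claim is easy to check: at an error rate of 1 in 10,000 per operation, 10,000 operations yield one expected error, and the probability of at least one error is roughly 63%:

```python
# Probability of at least one error in n independent operations,
# each with per-operation error rate p.
def p_at_least_one_error(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

p, n = 1e-4, 10_000
print(f"expected errors: {p * n:.1f}")                       # 1.0
print(f"P(>=1 error):    {p_at_least_one_error(p, n):.3f}")  # ~0.632
```

As n grows with p fixed, the probability of a clean run decays exponentially, which is the argument for mechanistic prevention and detection rather than relying on human care.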
And one of the things that always drives me insane about security is end-user training. I guess it's not a bad thing. You do have to train people in terms of awareness and why they should care, but you're not really going to get through to most of the world about all the things they need to try and do, and it turns out it's still not enough: most of them can still get phished by things that look completely legitimate. It's not their prime business. We can't rely on them knowing what is good and what is bad in all environments; we have to do better with more mechanization. And in the case that we started this talk with, why is the integrity of that hardware, and the software it's running, not part of the access exchange?
Yeah. Wise words to end on. Return to the basics and make sure you have them covered. Thanks for listening, guys; let us know what you think in the comments. Do you want us to discuss anything next week? Let us know, and be sure to like and subscribe. We'll see you next week. Thanks.