What's Wrong With Our Sign-In Process? Hackers Don't Break In: They Log In!
Listen to the following security and product experts share their insights in the webinar:
- Jasson Casey, Chief Technology Officer at Beyond Identity
- Richard Penshorn, Founder and CEO of Modern Stack
Transcription
Jasson
All right. Welcome to this webinar. My name is Jasson Casey, and I'm the CTO here at Beyond Identity. And we're going to spend a little bit of time today talking about the sign-in process or how people access services, whether we're talking about their corporate services, or whether they're trying to access services they pay for through some sort of application.
We have this general premise or thesis that hackers don't really break into systems that much. They kind of just log in. And here speaking with me is Richard Penshorn. Why don't you introduce yourself, Richard?
Richard
Hello, everybody. So I'm the founder of a little company called Modern Stack. I'm a former pen tester of nine, ten years. And the big thing we're trying to help with is getting people off of legacy technologies and away from just straight-up passwords. The big thing we've seen in the past is that usually a password or a secret is the tool that gets people in trouble.
And I'm really excited to talk about some of these very recent leaks and hacks from Heroku, Okta, and Conti.
Jasson
Thank you. So, as I said, I'm Jasson, I'm the CTO here at Beyond Identity. And if you hadn't guessed from any of the pre-read or the marketing material from our company, we provide a Zero Trust secure access solution that, as part of its premise, gets away from shared secrets or credentials or passwords as you know them.
But more fundamentally, we try and help companies really answer the question: who is on the other end of that authentication request? What are they actually trying to do? And what are the security controls in place on the machine that's about to retrieve the data or access the application?
What are they trying to access it from? And with that, just a little bit about the format: we're keeping this webinar kind of conversational, so it's going to be me and Richard just going back and forth. We'll have some questions at the end. But to start with, we wanted to dip into three incidents that have happened over the course of this year that are top of mind for everyone, at least in the topical news, but probably in a little more detail than that.
And so the first one that we're going to talk through is the Heroku incident. But then we're going to talk about Okta and the Lapsus incident. And we'll wrap things up with Conti, the ransomware gang operating out of Eastern Europe, and some of the interesting things we found there.
So I guess, just to kind of start things off, Richard, why don't you give us a little synopsis about what is the Heroku incident?
Richard
Yeah. So Heroku, if you're not familiar with them, can basically take some code and then run it in their cloud environment. So you can spin up an application for your startup, do something interesting there. And back in April, there was an incident that allowed a threat actor to start gathering GitHub OAuth tokens.
And what this really gave the attacker was access to all of Heroku's, or a subset of Heroku's, code repositories that they're pulling and running from. And what's interesting about this incident is that April 7th is when the actual incident happened, but April 13th is the first time the general public really found out anything about it.
And a lot of users found out later in April, April 16, when their GitHub integration was revoked and stopped working. The application might have kept working, but when a customer of Heroku tried to deploy a new version, it effectively would have stopped if they were using the GitHub integration.
So over, like, the next month or so, there were a lot of interesting components that came out of that, where Heroku basically said, "Hey, something terrible has happened, you need to inspect your code bases, look for secrets, revoke any OAuth applications that your application might be using, ensure that you rotate your SSH keys. And effectively we've kind of lost some of the validity, or the ability, to run your code in a secure environment."
So the big thing I take away from this is that Heroku had, like, a terrible compromise, and the drama of that event unfolded over a month rather than them being really transparent and upfront about it. And based on their design, Heroku didn't have a good secret store, which allowed an attacker to access all the customer secrets and caused a potentially large chain of code repositories to be compromised.
Jasson
So, from my understanding, I don't think anyone...I mean, "Does anyone know?" is a hard question to answer. But no one's really published what the initial access was or did I miss that?
Richard
No, we don't know exactly what the initial access was; there's a lot of opinions and hearsay. But they posted an update earlier in May talking about how the attacker had access to some type of pipeline-level environment configs. And that secret could have allowed an attacker to access a database or access something on the internal side.
But the interesting bit is there's no, like, definite way in that's really been published by Heroku's security team.
Jasson
Got it. I mean, there's clearly a way in, we just don't know about it yet. There's the way the adversary got in, there's the team on the ground actually knowing about it. And then there's the team on the ground authorizing it being published and the rest...or it being disclosed in some form or fashion.
I want to replay this and make sure I'm understanding this right. So threat actor, adversary was able to gain initial access, we're not exactly sure. We can assume that there was some sort of movement involved, right, whatever they probably got initial access to wasn't actually the credential store. But at some point, they were able to get to this credential store, right, this data store that contained these OAuth tokens.
They exfilled these OAuth tokens, and then I believe they used those OAuth tokens to enumerate GitHub, to try and discover shared customers, right? And then once they discovered...so you think about it, right, when you're attack planning, discovery and reconnaissance are kind of a key phase. It seems like they were using this relationship to figure out which Heroku customers they got OAuth tokens for were also GitHub customers.
Because then once they got a hit there I think they were able to basically pull the private side repos down. Is that fair so far?
Richard
Yep. That's fair.
Jasson
So interesting. So at this point, this is a way for the adversary to not just discover the pairwise relationships, right, which customers belong to both Heroku and GitHub, but then once they had that hit, to start pulling down whatever those folks have in their repos. Now I can't remember, were those tokens read-only or did they allow for write access?
Richard
I think, for the most part, read-only. But what's really interesting is the response that Salesforce's security team did disclose, which was: you should be running, like, secret scanning in your code base. Because a lot of customers out there, when they create their code base and whatnot, they're committing different private keys, they're committing secret strings for their databases, just so their system works.
And...
Jasson
I don't know. I've never met a developer that commits secrets to a repo. Are you sure?
Richard
I'm very sure. There are tools out there like TruffleHog, and things like that, that are building a good business around it. And even GitHub now offers, I think, secret scanning.
Jasson
Yeah, we did something like that before, basically just looking for high-entropy strings in repos. I mean, essentially, I make this in jest, but almost every repo at some point, either through inattention or just, kind of, error, ends up with secrets of some form, right, whether they're API access keys, shared secrets, or I've even seen...I mean, we've all seen private keys get checked into repos.
And then, of course, does someone even understand that when they remove something from a repo, when they remove a file and hit Commit, do they really understand that it's still in the repo, it's just not at the top of the tree? So what I'm getting at here, and this is just some...so I made this mistake. About 10 years ago, I was working on some packet captures, a couple of gigabyte traces.
And I made the mistake of checking them into a repo, which killed performance pushing and pulling that repo. And then, to my shock and amazement, I realized that, wait a minute, when I deleted the file and typed Commit and Push, my performance didn't improve. It didn't improve because I didn't actually remove it from the history.
So yeah, secrets are hard to keep out of your repos. Clearly, it's not good practice to put them in there. And not only do people introduce secrets into their repos through all different types of error, they also incorrectly assume they've removed them from their repos. There's a lot of interesting risk associated with that, right?
I mean, the direction was basically, like, make sure you're paying attention to, not just what secrets were in your repos, but do you have some way of validating if they're being used right now? And please, for the love of God, revoke them all.
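The entropy-based scanning Jasson describes can be sketched in a few lines of Python. The thresholds below, tokens of 20-plus characters with more than 4 bits of entropy per character, are illustrative; production scanners like TruffleHog combine entropy checks with known key formats:

```python
import math
import re

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character of a string."""
    counts = {c: s.count(c) for c in set(s)}
    return -sum((n / len(s)) * math.log2(n / len(s)) for n in counts.values())

def flag_secrets(text: str, min_len: int = 20, threshold: float = 4.0):
    """Yield tokens that look like keys: long and high-entropy."""
    for token in re.findall(r"[A-Za-z0-9+/=_\-]{%d,}" % min_len, text):
        if shannon_entropy(token) > threshold:
            yield token

# The well-known dummy secret key from the AWS docs gets flagged;
# ordinary prose and short identifiers do not.
sample = 'aws_secret = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"'
print(list(flag_secrets(sample)))   # ['wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY']
```

The entropy threshold is the tuning knob: too low and every long word flags, too high and short-alphabet secrets slip through.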
Richard
Yeah. I mean, that was one of my favorite things to do as a pen tester: we'd get a git repo, like, someone would zip up the repo and send it off to us, and we'd just go through the git commits. And usually, early on in the project, you'll find an AWS main key, or the root account, and you just log in and go, "Okay, I don't need architecture diagrams, I've got access to the AWS console now."
So the idea is we have secrets, and humans can't memorize secrets. We like writing them down. As a developer, like, that's a very common thing. And getting to the point of adding secret storage, like a vault or some other product that will hold that secret, takes a lot of engineering effort.
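Before a team gets to a full vault, the first step off hardcoded secrets is simply reading them from the environment at runtime, injected by the deploy pipeline rather than committed to the repo. A minimal sketch (the DB_PASSWORD variable name is illustrative):

```python
import os

def get_db_password() -> str:
    """Read the secret at runtime instead of committing it to the repo.
    Failing loudly when it's absent beats a hardcoded fallback."""
    password = os.environ.get("DB_PASSWORD")
    if password is None:
        raise RuntimeError(
            "DB_PASSWORD not set; fetch it from your secret store at deploy time"
        )
    return password

# Normally the deploy pipeline sets this; we set it here just to demo.
os.environ["DB_PASSWORD"] = "example-only"
print(get_db_password())   # example-only
```
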
Jasson
And just to kind of put a lid on it, or a final nail in it: I think the last thing I read on Heroku said it looks like the adversary actually exfilled the salted password hashes for Heroku accounts as well.
So not only did they compromise machine-to-machine secrets that they could then use to find inappropriately placed secrets, they were also able to...obviously, they still had to go through the exercise of cracking that file.
But they had the ability to compromise Heroku account access as well, at least potentially.
Richard
Potentially, yeah. It's really hard to store credentials securely. Even going through that cracking exercise, computers keep getting better each day, and cloud computing gets cheaper by the month. So it's not that much effort to then use those credentials to log in to, say, maybe your npm repository, or to log in to some other entity like your email that allows further access into the corporate IT stack.
Jasson
Adversaries don't break in, they log in, right, just kind of tying it back to the theme. You know, the way I've been explaining this as of late, I keep trying to change my mental metaphors with folks to try and convey the image.
But the one I've been using lately is more around data at rest and data in motion, right, because everybody knows those phrases, right? I want to protect the data at rest, I want to protect the data in motion. And when we talk about credentials, and specifically some sort of credential that I use to prove I am who I say I am, or I'm a process or a machine that you expect, the first thing that would be really, really nice is: what if you didn't have to protect the credential in motion?
What if the credential never moved, right? Like, that would be really nice, right? Asymmetric cryptography clearly gives us an avenue for that. But what about the data at rest? What if I could guarantee that credential was never in a file system and was also never in memory, right? And obviously, I'm foreshadowing Secure Enclaves and TPMs, T2s, that sort of thing, right?
You don't move the credential around, you send little checksums down to that processor and you say, "Hey, sign this for me using credential x, y, and z." And of course, depending on the policy around that credential and the processor, it will say, you know, "Raise your right hand, do the hokey pokey, give us your local PIN, give us a biometric."
In my mind, the only real way to systematically address some of these issues is, number one, to make it mechanistic. Like, humans have errors, humans do lots of things, therefore their errors show up. So, like, how do you take some of these mistakes out of the hands of humans and make it mechanistic? Moving from symmetric secrets to asymmetric secrets seems like an easy win. And then leveraging modern enclaves, which in some form or fashion have been in equipment since about 2016, seems to be the other leg, right?
And I'm not saying these are perfect systems, but the bar gets raised rather significantly just doing those two things.
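The challenge-response flow Jasson sketches, where only a checksum goes down to the signer and only a signature comes back, can be illustrated with textbook RSA. The tiny primes here exist purely to show the shape of the exchange and must never be used in practice; in a real system the `sign` side lives inside a TPM or Secure Enclave, so the private exponent never touches a file system or application memory:

```python
import hashlib

# Toy RSA keypair with textbook-sized primes -- illustration only.
p, q = 61, 53
n = p * q                           # public modulus (3233)
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+ modular inverse)

def sign(message: bytes) -> int:
    """The 'enclave' side: the private exponent d never leaves here."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    """The relying party holds only the public half (n, e)."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

challenge = b"nonce-12345"              # the 'little checksum' sent down
sig = sign(challenge)                   # only the signature moves back
print(verify(challenge, sig))           # True
print(verify(challenge, (sig + 1) % n)) # False: a forged signature is rejected
```

Nothing secret ever crosses the boundary: the verifier sends a fresh challenge, receives a signature, and checks it against the public key it already holds.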
Richard
Yep. And, like, the idea of asymmetric is that now the data you're storing can all be public, even if maybe you don't want it to be. What's interesting is that in the last 10 years, we've gotten a lot of technologies out there, like the TPMs built into our machines, where the management of this public-private key infrastructure has become a lot easier.
I mean, about 10 years ago, we were trying to set up PIV cards for our pen testers, and, like, the management of the HSMs that everyone would plug in, issuing those cards, signing them, was a nightmare. And it feels like in the last five years, we're starting to get better protocols, like WebAuthn, that manage all that.
And the only thing the website operator really gets to hold on to is the public key, which is unique to the website being accessed.
Jasson
When we first launched our product, a lot of our marketing was very focused on certificates and X.509. And our reasoning at the time was we wanted to be very, very clear with people that we weren't inventing new cryptographic protocols or algorithms, we were basically integrating existing solutions, but in a more facilitated way to kind of get at some of the problems that you were describing.
Unfortunately for us, in our first failed attempt, what we did is we actually triggered everyone's memory of having to manage PKI themselves, which, as you referenced, was not pleasant. And so we've adjusted our marketing message around that. But yeah, no, not only is asymmetric possible, and asymmetric possible using hardware that lives in most people's equipment, the management can be incredibly automated or mechanistic, where the load is almost non-existent on the admin.
But with that said, shall I move to incident number two?
Richard
Yes. Let's do it, the Okta attack.
Jasson
Tell us about the Okta attack.
Richard
So this one was all over the newspapers. But effectively, at some point in early January, so January 16th to the 21st, a threat actor accessed one of Okta's third-party companies that basically helped provide customer service. On January 20th, Okta's security team became aware of some nefarious activity.
Effectively, I believe the threat actor tried to add a second multi-factor authentication factor to a support engineer's account, and Okta's service desk terminated that user session. Okta also shared those indicators of compromise back with the third-party processor.
So at that point, the security teams were like, case closed and whatnot. But what happened...and this is where all the news came up, was that on March 22nd, screenshots were released on Twitter showing internal access to Okta's customer support portal.
And this is where Okta's response started. Effectively, they published a blog post trying to outline the events, saying, "Hey, we knew about this." And after that, more leaked documents came out on Twitter showing that their third-party processor's Active Directory environment was fully compromised.
There was effectively a compromise of domain admin, so every user within that environment was compromised. So it's interesting in the sense that Okta did do some things right, and they did try to limit the blast radius, but the path of compromise that was taken is fairly common and whatnot.
And it led to this public backlash, if that makes sense.
Jasson
So I was traveling when this all came out. I was on European time so I think it was morning where I was and I started reading the Telegram channel where Lapsus was basically...it was a little bit like cat and mouse. Or maybe it was more like South Park "Nothing to see here, move along."
And then in the Telegram channel it's like, "You're right there's nothing to see here but check out this screenshot, and this screenshot, and this screenshot." The thing that was most striking to me...and maybe they're right, right? But, like, the narrative was certainly lacking to explain it.
So for everyone, right, Okta is an SSO. Okta parlays access into what, technically, we call relying parties, or third-party applications. And the Lapsus screenshots were showing access to things like AWS, Slack, and JIRA. And they were showing screenshots of some of the tickets in Atlassian's JIRA.
And you could see they were talking about very specific, fairly large customers and issues those customers were experiencing. And so when you contrast, essentially, the narrative that, you know, everything's fine, they didn't do anything, we terminated all the sessions. But then you kind of think a little bit about how an SSO works, right?
You can terminate the SSO session, but if it's already granted an access token to a delegate application, the SSO can't do anything, right, about the lifetime of that access token. And by the way, this is something that we want to change; there are some new standards being developed to try to make those revocable. But by and large, most applications that integrate with an SSO don't really have a revocation facility for that access token.
So like, the immediate questions that come to your mind is like, well, shouldn't the customers go track down, or shouldn't someone go track down what all the active sessions were with those third-party apps and try and revoke them, at least with that third-party app provider? Or, I don't know.
It was very lacking in terms of, like, how that side of the equation was getting handled. And then the other thing I found really, really interesting was just that dichotomy, right? Like, clearly, there's visual access to these apps. There's a story saying nothing was accessed. It would have been nice to say, "We reviewed the session logs of all of this, nothing was accessed, and we tried to hunt down the..." like, something about what the investigation actually looked like. I understand they're probably being leaned on heavily by legal, right, because everything you say is going to show up in a class action at some point. But I think it was...was it Alex Stamos who put something out the other day, basically saying that, you know, the legal risk versus the customer risk is probably at a point of imbalance in this particular case.
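The revocation gap Jasson describes exists because a bare bearer token stays valid until it expires, no matter what the SSO does. If the relying application instead tracks its tokens server-side, an SSO-driven "kill all sessions for this user" actually takes effect. A minimal sketch (class and method names are illustrative):

```python
import secrets
import time

class TokenStore:
    """Server-side access-token store with a revocation facility.
    Because the application keeps state per token, revocation takes
    effect immediately -- unlike a self-contained bearer token."""

    def __init__(self):
        self._tokens = {}  # token -> [subject, expires_at, revoked]

    def issue(self, subject: str, ttl: int = 3600) -> str:
        token = secrets.token_urlsafe(32)
        self._tokens[token] = [subject, time.time() + ttl, False]
        return token

    def revoke_all(self, subject: str) -> None:
        """What an SSO-driven 'terminate all sessions' would call."""
        for entry in self._tokens.values():
            if entry[0] == subject:
                entry[2] = True

    def is_active(self, token: str) -> bool:
        entry = self._tokens.get(token)
        if entry is None:
            return False
        _, expires_at, revoked = entry
        return not revoked and time.time() < expires_at

store = TokenStore()
t = store.issue("alice")
print(store.is_active(t))   # True
store.revoke_all("alice")
print(store.is_active(t))   # False
```

This is the same shape as the standardized approaches (token revocation and introspection endpoints): validity is a server-side question, not a property baked into the token itself.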
Richard
Yeah. And, like, in today's day and age, that transparency component really builds trust with your user base. How you handle an incident, how you let your customers know, and how you, kind of, tie up those loose ends feels very important as a customer of Okta. But also, how are they going to respond the next time a compromise happens?
Because at the end of the day, like, compromises always occur, it's how you respond to that compromise that really puts you in that top 10% or in the bottom 90%.
Jasson
Well, there is the golden way of ensuring there's no compromise, right? Shut down your networks, turn off your disks, and then burn it to the ground.
Richard
Yep, the most secure computer is still the one in the box.
Jasson
Yeah, it's just not that useful for most of us. Another thing that I found really interesting, did you come across the Mandiant timeline?
Richard
Yeah, I did. And the path to compromise felt very standard, but also very script kiddie, like they're using open-source tools like Mimikatz. And then on top of that, they straight up just killed FireEye on the compromised host, and then did whatever they wanted to do.
Jasson
So the guy who posted that on Twitter, the last tweet of that thread said, "I'm looking for a new job, by the way." And I'm wondering if he got fired because of that tweet stream.
Richard
Oh, that would be a good question. I mean, at the end of the day, I can tell you I've built that chart of compromise probably 100 times during internal pen tests. And in a sense, it's a very standard way of doing it: once you get an Active Directory password, it gives you lateral movement, and you try to increase your privilege over time.
So you might start as a support engineer, you have access to a file server or something along those lines, there's another credential that elevates your path again. And then eventually you become domain admin and then you can either dump all the passwords, or you can impersonate other users, or you can create something like a golden ticket so you can move around the network.
Jasson
They could have been credential hopping, they could have been doing a bunch of things. Yeah, so there was initial access, I think I saw RDP for lateral movement. They downloaded a bunch of binaries straight from GitHub, that was kind of fun. And then the last thing that caught my eye was they exfilled a Microsoft Excel spreadsheet called LastPass domain admin.
I'm sorry, was it domainadminlastpass.xls, something like that?
Richard
Yeah.
Jasson
What was that?
Richard
I mean, Microsoft Excel makes the best password manager.
Jasson
That was probably an export from their password manager in Excel format, though, right?
Richard
Yeah, most likely. Especially when you export out of LastPass to another tool, like a 1Password or something like that, you do have to have those credentials in the clear. And it's interesting, they always put a big warning going, like, "Hey, you need to delete this file once you've imported it into your other system." But no one does.
That downloads folder just grows and grows and grows.
Jasson
Here's a large knife, it's only for cutting things that aren't you, don't cut yourself. Okay. So, similarities: clearly, the portability of a password was on display here, right? The ability to harvest credentials from a local machine was on display.
And then pulling back even more credentials at the end of it was kind of the coup de grâce, at least in terms of what they were doing inside of Sitel, the support organization. As for inside Okta itself, we can only speculate, because there still doesn't seem to be much information.
Richard
Yeah. I'm sure the controls they have in place are a closely guarded secret. But the thing I really take away is that portability of the password: even though it came from one machine, it was being used everywhere. So the attacker was using that credential to hop around, onto machines that maybe that user would never even log into.
Nowadays, we're getting to a point where we can start managing our machines, where we can pair a machine to a user, to some type of security level. And it's intriguing that those technologies have existed for a while, but it's really hard for companies of any size to actually get to the level where users only have access to one or two machines, rather than hundreds of machines, just because maybe the policy is too complicated, or maybe there's not the experience on the IT team to lock that sphere of risk down.
Jasson
Yeah, I think our response to that would basically be the same thing we said on the first incident, you know: asymmetric crypto, so you don't have to move credentials; enclaves, so you actually have strong guarantees that credentials can't be harvested. You're not going to get away...at least I don't think you'll be able to get away from the fact that some people just have to touch many different workloads, but force them to actually access those workloads using those properties we just talked about a minute ago.
In that sequence, at least, they can't credential hop with either new or existing credentials. They have to show themselves for who they are at each point of access. And they actually have to prove that they're the right identity to start with, right? So it's a much smaller surface area to hit than kind of what happened, right? They just kind of went through there like wildfire.
And then in the end, they left the party with even more candy.
Richard
Yep. I always say authenticate early and often. And if that authentication scheme is really easy and doesn't require the user to type in the password 50 times, that's a great advantage, because then you're able to always be validating the endpoint health state, validating that the user still exists, validating their user roles and groups.
And as an attacker, if I had to go and proxy everything through a machine, I have to start building a toolkit, I have to use various specialized malware or some type of program that allows me to do all that proxying. So it increases the level of effort from being able to type in a password to having to program and build kind of a malware kit that lets me impersonate the machine or impersonate that user on the endpoint.
That opens up great opportunities for, like, endpoint detection, event logging, and things like that. So you can maybe detect that within a couple hours something like that, and then be able to walk back that compromise.
Jasson
Actually, I would take it a step further. I think it's an opportunity to prevent access by tying those signals into the access loop itself, right? So we go back to January, when the U.S. government put out its federal Zero Trust guidance, right? Federal agencies must start including at least one device posture signal in their access decisions. What they're getting at...well, they're trying to get at a couple of things.
But one of the key points is, if you have EDR, if you have MDM, if you have some ability on this machine to actually know: is FireEye running? Are basic RDP protection rules in the firewall enabled? Is the firewall enabled? Maybe I'm aiming too high.
Like, there's no reason these can't be tied into the access decision itself in a very simple and painless way.
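Tying posture signals into the access decision, as Jasson suggests, can be as simple as a policy function evaluated on every authentication. The signal names below are illustrative; in a real deployment they would be pulled from EDR or MDM APIs at authentication time:

```python
# Posture signals a device must present before access is granted.
# Names here are illustrative stand-ins for real EDR/MDM checks.
REQUIRED_SIGNALS = {"firewall_enabled", "edr_running", "disk_encrypted"}

def access_decision(posture: dict) -> str:
    """Deny unless every required posture signal is present and true."""
    missing = [s for s in REQUIRED_SIGNALS if not posture.get(s)]
    if missing:
        return f"deny: missing {sorted(missing)}"
    return "allow"

print(access_decision({"firewall_enabled": True,
                       "edr_running": True,
                       "disk_encrypted": True}))   # allow
print(access_decision({"firewall_enabled": True}))
# deny: missing ['disk_encrypted', 'edr_running']
```

The point of the sketch is the placement: the check runs inside the access loop itself, so a machine that fails posture never gets a session to terminate later.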
Richard
Yep, I agree 100%. At some point, the attacker now has a managed laptop in front of them rather than just any old RDP session or some corporate [inaudible].
Jasson
But back in February, when Russia invaded Ukraine, the Conti group basically made a posting saying, you know, "We're for Russia in this engagement, and if the U.S. and any of its allies try to go after Russia in the cyber domain, we're going to..." and I'm paraphrasing. Well, actually, there are two stories here.
So one story is that one of the members of Conti was Ukrainian, clearly had a different view, and basically posted a set of disclosures as retaliation: all the internal chats, the internal source code, basically everything from the inside. The second story, or the second version of this, is that it was a Ukrainian security researcher who had penetrated the Conti group for a while and chose that moment to disclose all of that information.
Do you know which one of those is in fact actually true or are we still kind of speculating?
Richard
I do not know which one of those is true.
Jasson
But the interesting thing about it is Conti is...so I found a couple of different numbers. They take in anywhere from $50 to $150-plus million U.S. a year in ransomware payments. They operate like a business: they have an HR department, they recruit on job boards, they have a high turnover rate with their younger engineers.
It's a criminal organization, basically extorting U.S...actually, they pioneered double extortion. So when you land ransomware in a target, you get them to pay you first to decrypt the data, and then you get them to pay you again to not release the data.
That's why they call it double extortion, because there's kind of a blackmail angle at the end. So: criminal gang, operating like a business. These leaks were a really interesting way to look inside and see how they operate, how they behave, what tools they use, what techniques they rely upon. And I think part of the mystery that was pierced is that they operate a lot like the rest of us in how we run a company and hire people and try to motivate and whatnot.
Clearly, their business aim is a bit different. But Richard, why don't you kind of chime in with your take?
Richard
So, just reading through the chats. Somebody, about three months ago, released an English-translated version of all the chats to GitHub. And just going through that log, you can see a lot of mentorship; you can see there are lead pen testers, or lead attackers, and effectively they're mentoring some of those younger engineers in the techniques to actually break into a network.
And generally, it comes down to some type of spear-phishing campaign: can you get one credential or two, get VPN access? And then your goal is to get onto a machine, and then it's a standard process to get to domain admin. And what's interesting about all this is they had it on easy mode, effectively using open-source tools like Mimikatz, and using Beacon, which is part of a paid-for hacking toolkit.
But they also had some of their own modules and their own malware that they used to actually create this compromise. So I would say the biggest takeaway for me is, when I was a pen tester nine years ago...reading those chats reminded me of all the same questions I would ask my manager and other security professionals who are considered, you know, on the good side.
Like, the ethics of it is intriguing, where there's this operational knowledge and we have, kind of, two sets of it: the good-person operational knowledge and the bad-person operational knowledge, where it's about making money, it's about accessing systems, and so on.
But at the end of the day, it's the same playbook, where you're getting into a network, you're gathering passwords, and then you're trying to extend your access. And once you do that to a certain point, you're able to either write a report if you're a good guy, or you deploy your malware and you start encrypting everything if you're a bad guy.
Jasson
What were their initial access techniques? Were you able to pick up on some of them?
Richard
I think one of the ones I saw was they tried to get VPN access. So through some type of spear phishing or brute forcing, they would guess a user credential. And generally, you're trying to get around not having, like, two-factor or something like that. Once you do that, then you have the ability to start accessing internal resources. So there's a lot of chatter about file shares, and going through file shares looking for documents, like the domain admin's LastPass document, going through trying to understand how the network is set up, looking at internal chat communications or internal documentation that the IT admins would have, and using that as leverage to take the next step.
And that might be RDP onto a system, that might be deploying Beacon, or deploying Mimikatz on a system, or using one of the many open source tools out there to actually get remote code execution on a box.
Jasson
One of the techniques that I found interesting was that they purposely target IoT devices. Their reasoning for targeting IoT devices is that those devices will be less watched, they'll be more likely to have a default password or a simple password, and they won't get patched often, if ever.
So, you know, you don't have to use sophisticated escalation techniques; there are probably well-known published vulnerabilities, and you can just deploy something off the shelf to escalate or even to compromise the system. Anyway, I thought the IoT angle was interesting. RDP also showed up a lot, which, going back to the theme, is telling. Conti certainly displays a lot of sophistication, right?
They have a reverse engineering group, an exploitation group, an operations team, a regular dev team, and, was it "the professor" who was basically the negotiator? But it seems like a large portion of their techniques really just boils down to phishing and credential compromise in one form or fashion.
Richard
Yep. And once you get that password... and the crazy thing about passwords is that users are very predictable in their passwords. So a good guess might be Summer2022! And given a large enough user base, that would probably lead to a valid account.
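Richard's point about predictable passwords can be sketched concretely. The pattern list below is hypothetical, not from the webinar, but it shows why a password spray needs only a handful of guesses per user when people pick a season plus the current year:

```python
from datetime import date

def seasonal_candidates(year: int) -> list[str]:
    """Enumerate predictable season+year passwords.

    The suffix list is an assumed, common way users satisfy
    "one symbol required" policies while staying memorable.
    """
    seasons = ["Summer", "Winter", "Spring", "Fall", "Autumn"]
    suffixes = ["", "!", "!!", "1!"]
    return [f"{s}{year}{x}" for s in seasons for x in suffixes]

# A spraying attacker tries each candidate once per account,
# staying under lockout thresholds:
candidates = seasonal_candidates(date.today().year)
print(len(candidates))  # 5 seasons x 4 suffixes = 20 guesses
```

Given a few hundred accounts, twenty guesses each is usually enough to land at least one valid credential, which is exactly the "easy mode" entry point described above.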
Jasson
So I should probably change my password?
Richard
Yeah, make it Winter2022!
Jasson
What if I pick it in the future? So I guess with that said, why don't we kind of move on to the Q&A section?
Richard
Awesome.
Jasson
So I believe we have a couple of questions submitted from the audience. Richard, I think you have them up on your teleprompter. Can you go through them?
Richard
Yep. So the first question we have is why passwordless? Why not something like a CAC card or a PIV card?
Jasson
So I'll give you my take on that. Richard, feel free to chime in. A PIV or CAC card...for those of you who don't know, these are cards that have been around for a while. And they actually use the exact same technology I was talking about before: it's based on asymmetric credentials, so the public key moves around and the private key doesn't move.
There's an enclave in the card so the private key can actually be protected, and you just send data to the card to get signed. These are good techniques, but we've recently learned there are several challenges with them. First of all, my PIV or CAC card doesn't really understand anything about the machine it's being plugged into.
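The mechanics Jasson describes, where the private key never leaves the card and the host only ever sends data in and gets signatures out, can be sketched with a toy. This uses deliberately tiny textbook RSA parameters for illustration only; real cards use 2048-bit RSA or elliptic curves, and the class boundary here only models the enclave:

```python
import hashlib

# Toy parameters for illustration only.
P, Q = 61, 53
N = P * Q                           # public modulus
E = 17                              # public exponent
D = pow(E, -1, (P - 1) * (Q - 1))   # private exponent: stays inside the "card"

class SmartCard:
    """Models the enclave: the only operation exposed is signing.

    Callers never read D; they can only submit a challenge and
    receive a signature back.
    """
    def sign(self, challenge: bytes) -> int:
        digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % N
        return pow(digest, D, N)

def verify(challenge: bytes, signature: int) -> bool:
    """Anyone holding the public key (E, N) can check the signature."""
    digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % N
    return pow(signature, E, N) == digest

card = SmartCard()
sig = card.sign(b"server-nonce-1234")
print(verify(b"server-nonce-1234", sig))  # True
```

The point Jasson goes on to make is that nothing in this flow tells the server anything about the host machine the card is plugged into: the signature proves possession of the key, not the integrity of the reader, driver, or OS.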
And I think we just had an incident over the last month; I think I read about it on Krebs on Security. A U.S. federal government contractor went to buy a CAC/PIV card reader off of Amazon. I guess he needed one.
And when he popped his card into his card reader and went through his process, he discovered that there was malware in the card reader, or in the driver for the card reader. And so it brings up this interesting question. Remember, the U.S. government put out that memo in January, which basically said all federal agencies need to move to unphishable, or phishing-resistant, authentication techniques that must include at least one form of device posture.
And if you just look at what they're saying, it doesn't sound like what we were just talking about. But if you really understand, from an architecture perspective, what they're driving at with their Zero Trust definition, they're trying to answer the integrity question.
When we all learned security originally, we were kind of taught that the tripod of security was privacy: when do I want information to be private or not private? Authenticity: how do I know the information was authored by Alice and not Bob? And integrity: how do I know the information wasn't modified in some form or fashion? The property that this particular scenario calls into question is integrity, the integrity of the authentication chain itself.
So why not PIV/CAC card over passwordless? I would argue that the PIV and CAC card actually can be part of a passwordless strategy but they're not enough. And what you need to do in addition to that is you also need to include integrity of the authentication process as part of the authentication process.
How do I know the hardware is unmolested and was in fact produced by Infineon? How do I know the bootloader, the operating system, and the application I'm talking to are all unmolested, genuine Windows and genuine Slack? All of these are valid integrity questions, and any modern Zero Trust access platform is going to be able to solve those questions for you at the time of access.
Not after the fact, not as an incident that you have to go respond to, but at the time of access, helping operate a more preventative mindset.
Richard
That was beautiful. All right. Shall we move on to the next one?
Jasson
Let's move on. All right, so question number two is, "How have tactics and techniques changed over the years in terms of hacking?"
Richard
So overall, I would say that the techniques haven't really changed that much over the years. The tooling gets better; the databases, the collections of data that help the attacker or adversary figure out the next path, have gotten better. But overall, it's the same basic technique: credential harvesting, to lateral movement, to eventual exfiltration or exploitation of your goal.
The interesting bit is, as we start layering on new technologies... two-factor was supposed to be one of the silver bullets that would actually solve a lot of the credential theft out there. And we're finding that, not only are humans susceptible to phishing once, you can actually phish them a second time for their two-factor token, effectively getting around most two-factor systems out there, like SMS text codes or one-time tokens.
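The reason a second phish works is that a one-time code is just a short number derived from a shared secret and the clock, so a phishing proxy that captures it can replay it within the validity window. A minimal sketch of standard TOTP (RFC 6238) makes this visible; the secret below is a demo value, not anything from the webinar:

```python
import base64
import hashlib
import hmac
import struct

def totp(secret_b32: str, at: float, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second counter."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(at) // step)        # time window index
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                              # dynamic truncation
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

SECRET = "JBSWY3DPEHPK3PXP"  # demo secret for illustration

# The victim types the code into the phishing page; the attacker
# relays the same code to the real site inside the same 30s window:
victim_code = totp(SECRET, 1000.0)
relayed_code = totp(SECRET, 1010.0)   # ten seconds later, same window
print(victim_code == relayed_code)    # True: the server can't tell them apart
```

Nothing in the code binds it to a device or an origin, which is why Richard's next point is that attackers simply pivot to stealing sessions when stronger factors block them, and why phishing-resistant, device-bound credentials change the picture.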
So what's interesting is, now, if attackers run up against some type of block, let's say we're getting stronger authentication, they go after the sessions. And that's where, to Jasson's point, we need to have the integrity of the system included as part of that.
Because if we lose integrity, then we don't really have any way of stopping an attacker from gaining access to the next bit of the system.
Jasson
One thing I would add on the back of that: you're saying techniques haven't really changed, and I can't agree more. They still haven't changed that much. This was published in 1989, and granted, the technology they were working on has changed drastically, but the techniques of adversarial access, discovery, enumeration, and lateral movement, and the techniques of detection and response, are almost identical to what we do today.
Now, clearly, the product names are different. Well, there were no products back then; he invented a lot of it himself. But anyway, if you haven't read this, you should. And if you have, man, wasn't it good?
Richard
And the crazy thing about techniques and whatnot is there's a college course that was taught at the University of Texas, and the basis of the course is all using, like, Windows XP and things like that. But the relevant skills that come out of it, effectively the discovery, figuring out how to look for usernames and passwords and things like that, can be applied to the latest and greatest operating systems.
So even through my career, not much has changed. But the research and development, and tooling like BloodHound, has gotten a little bit better: you can now be a little more stealthy, or you don't have to brute force, you're able to pick the right account based off context clues.
Jasson
Good university that.
Richard
Yep. But you're an Aggie, right?
Jasson
Both. I split the difference, got two degrees, one from each school.
Richard
Let me ask you this, what town did you like living in more? College Station or Austin?
Jasson
Austin was clearly the town to live in. Yeah, living in College Station was, you did it because that's where the work was and that's where the people were. But Austin was just nice to live.
Richard
Awesome.
Jasson
All right. Do we have any more questions?
Richard
We have two other ones: what's the cost of migrating to a passwordless solution, and what are the cybersecurity insurance benefits?
Jasson
You want to just do them in that order?
Richard
Yeah. What is the cost of migrating to a passwordless solution?
Jasson
So cost comes in a couple of different ways: what is the dollar cost, what is the administrative cost, and what is the end-user impact? We feel pretty strongly that the administrative cost needs to be incredibly low. Any Zero Trust access solution has to work with existing infrastructure, which is going to include an existing SSO.
Most enterprises are going to have MDM and EDR deployed. So a Zero Trust access solution has to not just work with those existing product categories, but help the customer get more value out of their existing investment in those platforms. And so we very much think the theme is to shift and try to get a lot more of these incidents prevented, as opposed to detected and responded to.
So if you can use some of those signals from MDM and EDR at the point of access, you absolutely should. What it typically looks like from an administrator perspective is integrating as a delegate identity provider and setting up directory mastering using open protocols like SCIM. We typically tell folks that if they know their environment well, it takes 30 to 60 minutes to get things set up. And then they can choose, either on an app-by-app basis, or on a person-by-person or group-by-group basis, who gets to experience the new secure access solution.
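For readers unfamiliar with SCIM, the directory mastering Jasson mentions boils down to the identity provider pushing standard user resources to the downstream system. A minimal sketch of an RFC 7643 core User payload looks like this; the tenant URL and the user details are hypothetical, and real provisioning would POST this JSON to the `/Users` endpoint with an auth token:

```python
import json

# Hypothetical tenant base URL -- the real value comes from your IdP setup.
SCIM_BASE = "https://tenant.example.test/scim/v2"

def scim_user(username: str, given: str, family: str, email: str) -> dict:
    """Build an RFC 7643 core User resource for POST {SCIM_BASE}/Users."""
    return {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": username,
        "name": {"givenName": given, "familyName": family},
        "emails": [{"value": email, "primary": True}],
        "active": True,   # deprovisioning is just a PATCH setting this False
    }

payload = scim_user("rpenshorn", "Richard", "Penshorn", "richard@example.test")
print(json.dumps(payload, indent=2))
```

Because both sides speak the same schema, the 30-to-60-minute setup claim is plausible: there's no custom sync code to write, just endpoint and token configuration.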
So that's administrative cost. In terms of end-user cost, we firmly believe the end user is going to spend less time and energy to get their work done, or to complete their shopping cart and actually complete their purchase.
They're not having to remember a password, they're not having to go through a password reset process. It's just a much better end-user experience. So from a workforce perspective, it's more productivity. From a consumer B2C perspective, it's higher shopping cart completions, that sort of thing.
From an actual purchasing cost...I mean, we can just talk about ours: our list price is $6 per identity per month. So, you know, it's not a significant expense to get, not just improved security posture and phishing-resistant access, but also an improved experience of your product or your workplace for your workers or your customers.
Richard
Awesome. And then what is the cybersecurity insurance benefits of going passwordless?
Jasson
So, cybersecurity insurance benefits. I used to work with cyber risk underwriters quite a bit in my last job, and the golden questions would always come up. The golden questions, in their mind, were: do you run patch management? Do you have MFA?
And do you deploy a password manager? The reason we call those the golden questions is that they drove kind of 80% of the risk equation, as far as the underwriters were concerned, and whether someone was an easy target for an attack of some sort. So with a solution like this, where you actually deploy a Zero Trust access solution that has continuous authentication: number one, there is no password, so there's no credential to be stolen, right?
Whether we're talking about in transit or at rest, the credential lives in a TPM, or a T2, or some form of secure enclave. So that risk is off the table. The solution by definition is multi-factor, and through a policy engine, the administrator can really dial things up or down, or fine-tune settings, depending on the risk of what someone's trying to do.
If they're trying to access a critical app, dial up the factors. If they're trying to do something less critical, you can dial down the factors. And in a lot of scenarios, some of the information that can be included in the access attempt is: what are the security controls in place on the device at the time someone is trying to access the thing?
So, putting that more concretely: what's the integrity of the operating system? Is FireEye actually running? Being able to answer those sorts of questions checks off that last box around patch management, but it also clearly goes much beyond that. The reason I throw those things out there is that, for as little cost in all three of those domains we just talked about, you can take a lot of an underwriter's concern off the table in securing access to your environment from a workforce perspective.
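The dial-up/dial-down policy Jasson describes can be sketched as a small rules function. Everything here is a hypothetical illustration of the idea, not Beyond Identity's actual policy model: the criticality tiers, posture signals, and factor names are all assumed:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    app_criticality: str   # "critical" | "standard" | "low" (assumed tiers)
    os_patched: bool       # device-posture signal collected at access time
    edr_running: bool      # e.g., is the EDR agent alive right now?

def required_factors(req: AccessRequest) -> list[str]:
    """Decide, per request, which factors to demand.

    Fails closed on bad device posture, then scales the factor
    count with the risk of what's being accessed.
    """
    if not (req.os_patched and req.edr_running):
        return ["deny"]                      # posture check failed: block
    if req.app_criticality == "critical":
        return ["device_key", "biometric"]   # dial the factors up
    return ["device_key"]                    # dial them down for low risk

print(required_factors(AccessRequest("critical", True, True)))  # ['device_key', 'biometric']
print(required_factors(AccessRequest("low", True, False)))      # ['deny']
```

The key property for the underwriter conversation is that the posture signals are evaluated at the moment of access, not in a quarterly audit, so "is the EDR running?" is answered per login rather than per questionnaire.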
Richard
One of the big things I've seen, especially going through those questionnaires recently, is that they are definitely starting to ask for more and more proof of these technologies. And with a system that is basically checking every time someone logs in, you get one of two signals.
Either the user can't log in and you have to go fix it, or the user is logging in and you have that telemetry, where you're able to prove it, share it, and show: hey, we have a good inventory, we have a good software inventory, and we know the state of where things are sitting. And by having that, you're already a little bit better off than most of the other competitors you might have.
Jasson
We've also been using it internally to accelerate audits. So when you go through audits, there's an evidence collection period, and they have a name for it, I forget. But it's easy to just pull reports from the event log and use that as evidence, and just move through what used to be a quite painful back-and-forth of document proofs and whatnot.
Like, here's my policy, you can see it hasn't changed; it's even integrity protected. And here are my events, so just tell me how you want to spot-check things.
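The evidence pull Jasson describes amounts to flattening authentication events into something an auditor can spot-check. A minimal sketch, with made-up event records and field names standing in for whatever the platform's event log actually exports:

```python
import csv
import io

# Hypothetical event records -- real ones come from the event log export.
events = [
    {"time": "2022-06-01T12:00:00Z", "user": "alice", "policy": "v7", "result": "allow"},
    {"time": "2022-06-01T12:05:00Z", "user": "bob",   "policy": "v7", "result": "deny"},
]

def evidence_report(records: list[dict]) -> str:
    """Flatten authentication events to CSV for auditor spot-checks."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["time", "user", "policy", "result"])
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

print(evidence_report(events))
```

Since every login decision is already an event with the policy version attached, the "here's my policy, here are my events" exchange becomes a report export rather than a manual document hunt.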
Richard
Yeah, I'll run a script. It makes it a lot easier as an auditor.
Jasson
Well, if only the auditors could run scripts, but that's for another day.
Richard
Awesome.
Jasson
All right. Well, thanks, everybody, for hanging out with us. This was fun for me, and hopefully, it was informative and interesting to you.
Richard
And it was nice talking to everybody as well. This was enjoyable. One of the big things is we hope we actually make a difference in the cybersecurity realm and help everyone reduce a little bit of risk. So see you on the interwebs.