Hello everyone, this is Tom Uren and I'm here as per usual with the Grugq. G'day Grugq, how are you? G'day Tom, I'm good, and yourself?
I'm well. This week's episode is brought to you by Socket. I have an interview with Socket's CEO and founder, Feross Aboukhadijeh, on the channel this week. It was a fascinating discussion about open source software and supply chain security. So have a listen to that. Socket makes technology to identify supply chain attacks, which has been pretty topical recently.
So today, Grugq, we thought we'd talk about the life cycle of 0day. I think there are a lot of myths and misconceptions about 0day and their relationship to the threat landscape. When they get burned, when they get found. Okay, okay. I'm going to present you the...
Well, I hesitate to use conventional wisdom, but the mainstream media conventional wisdom. Perhaps that's the thing to do. An 0day is this magical thing that allows you to do anything, and as soon as it's discovered, it's totally useless. So it goes from hero to zero very, very rapidly. Yes.
I think the thing about our understanding of 0day is that it comes from two sources. One was the hacker community of the late 90s, early 2000s, where 0day was kind of magic, but all the systems out there were just so terribly insecure and 0day was so easy to find that, you know, you could sort of sit down on a Monday, grep around for a bug and
and then have an exploit on Tuesday and on Wednesday be owning hundreds of servers on the internet. And like that was kind of magical, but that was 20 years ago. So I think there's sort of that understanding about it, where it used to just be really easy to use.
Really easy to find, really easy to use. They get patched. Everything was sort of run by system administrators who were diligent in rolling out patches and sort of paying attention to Bugtraq and stuff.
And I think the other source that's really skewed things is people from the intelligence community that rely on cyber intelligence stuff. And the approach that Western agencies have is very, very stealthy, low-detection obsessed, I would say. And so for a Western agency, using 0day
is kind of critical, because you want to use something that's not going to be detected, that is going to work reliably, and that is very unlikely to be found by someone else. It sort of gives you unique access and things like that. So in terms of those agencies, when they find something that they can use, it's very valuable because it allows them to get access
Mm-hmm.
Whereas an alternative like buying credentials, from a very secrecy-obsessed organization's point of view, brings its own different set of risks. Buying credentials is, I think, a practical solution, but you also need to buy them covertly, you need to make sure no one knows who you are, yada, yada, yada. I think it's nowadays a very, very plausible solution to the problem of getting access, because you can get the same things, right? You can get access without...
people necessarily knowing. But I think especially maybe five or 10 years ago, that was probably not a thing that those agencies would do. Right. I think that's actually a good point: they've probably matured in their, I don't know if risk tolerance is quite the right word, but they've matured in that there are more options that they're willing to explore
beyond just 0day. And I think that the perceptions of 0day were calcified long ago, predating all of this stuff. There's this late 90s, early 2000s hacker culture and that first tranche of ex-NSA, ex-whatever people coming into the community, bringing their ideas
and understanding of how to use 0day and the value that 0day has. Right, right. So you're saying that those intelligence people had a particular perception of how useful 0day was, because it was so useful for intelligence agencies.
Western intelligence agencies. Yes. Specifically, right. Secrecy-obsessed intelligence agencies. And that perception permeated the, I guess I'd call it the cybersecurity industry, even though many other actors are not secrecy-obsessed. Yes, absolutely. I think that that's led to...
to a lot of misunderstandings, right? These rules of thumb about how things work are rules of thumb for a secrecy-obsessed agency doing risk assessments for a particular operation. You have an 0day, you're going to use it in a particular situation. There's a non-zero chance that your zero day is going to get discovered, in which case you cannot use it again.
And so that's the sort of thing that you kind of have to put into your calculus and be like, oh, you know, is it worth penetrating this target if there's a 50% chance we lose this capability, and so on. And so I guess I would think of, like, the uber-0day. It would allow you to get onto, and this is entirely fictional, of course, it would allow you to get onto any system
anywhere at any time, and it would allow you to get access at the highest privileges and leave no logs. But even in that case, there's still a chance that someone will detect you doing something strange because they're monitoring the network or whatever. Right. You pick the one day that someone is sitting there doing micro benchmark performance analysis on their box to see how Postgres is working
And that's the day you hack their box with your uber-0day. Yeah. And so that's the extreme example. Even though it's all-powerful, because it's all-powerful, you have to still be really careful with it. That sort of thing makes sense in particular contexts. You know, if you're secrecy-obsessed, or if you're trying to explain to a military officer how
cyber tooling works. And you're trying to say, like, look, if we write a worm for you that exploits this particular vulnerability, we can deploy that and ship it out. But after we've done that, everyone's going to figure out the exploit that we used, it's going to get patched, and that worm will not work a second time. Yeah. Yeah. Well, yeah, we can use this against everyone, but if we use it against everyone, we can use it against no one.
Yeah, like we can use this exactly once, right? And so you start adding these things together and you're like, okay, so a cyber weapon is a single-use, ephemeral tool that you can deploy one time and it goes away. And that makes sense because 0day, we know when they get burned, that's a bad thing. We all understand that a burned 0day is bad and you can't use it anymore.
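To make that burn calculus concrete, here's a toy expected-value sketch in Python. It's purely illustrative: the function and every number in it are invented for this example, not anything an agency actually computes.

```python
# Toy model of the "is this operation worth risking the capability?"
# calculus. All values are hypothetical and unitless.

def expected_payoff(op_value: float,
                    burn_probability: float,
                    capability_value: float) -> float:
    """Gain from the operation, minus the expected loss of the
    capability if the 0day gets discovered and patched."""
    return op_value - burn_probability * capability_value

# Hardened, well-monitored target: a coin-flip chance of losing a
# capability worth ten operations makes the op a bad trade.
print(expected_payoff(op_value=10, burn_probability=0.5,
                      capability_value=100))   # -40.0

# Soft target with effectively no detection: the same capability is
# nearly free to use.
print(expected_payoff(op_value=10, burn_probability=0.01,
                      capability_value=100))   # 9.0
```

The same trade-off comes up again later in the episode with soft targets, where the burn probability is effectively zero.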
These two rules reinforce each other. They're obviously complementary. And the thing is, like, they're not laws. They're not rules of cyber. They're rules of risk assessment. Yeah, that's right. I think the not-using-it-again one, like, you can clearly use a vulnerability again, right? But from a secrecy-focused organization's perspective,
you don't use it again because that allows the adversary to link operations. So that's entirely us worrying about ourselves, not because the vulnerability won't necessarily work.
Exactly. And it's even to the point that using a known vulnerability will increase your risk of being detected, because the odds of someone having a system looking for known vulnerabilities being used are higher. Yeah. Again, these are self-imposed rules of risk mitigation for secrecy-obsessed organizations. And that's fine. Like, it's worth knowing how they think, et cetera, et cetera. But
they don't translate to anything else. Not every actor on the internet is secrecy-focused. Yeah. So, for example, one of the things that we used to talk about was that for Chinese threat actors, having a Tianfu Cup where they burn 0day, so this was quite a while ago, but like having this
Pwn2Own-type thing where they go out and they can burn very, very good 0day, is actually extremely useful to them, because the chance that there's been sort of a co-discovery, that one of the 0days they're burning is also being used by a Western agency, is reasonable, or it's non-zero anyway.
And so they might be removing something from a Western agency's arsenal. They might be removing a capability. But as long as they themselves don't care about whether it's 0day or not, they're just making more cover for themselves. So the idea would be that a Chinese researcher burns an 0day in Chrome. That 0day can now be used by everyone except...
secrecy-obsessed agencies. Right. Yeah. Like, there's this concept of deterrence by denial, and so it's basically like, I can deter you from using cyber by making sure that my stuff is so secure that you can't hack me. Right, right. But the other way of looking at it is, I can prevent you from hacking me by burning all of your tools preemptively. Right. Yep. Whereas I don't care
Yeah, yeah. So there was a recent example where NSA told Microsoft that there was a bug. I think it was in the print spooler or something like that. I think there was news this week or last week that a Russian group, I can't remember which one, had been using it for two years prior to NSA telling Microsoft and Microsoft patching it.
So my kind of vague assumption was probably that NSA found the Russians using this bug, figured out what the vulnerability was, and told Microsoft. And so there's an example of trying to deny the adversary some capability, right? Yeah. Yeah. So I think what that gets at, actually, is one of these other big myths that exists about 0day: that when it's burned, it's no longer useful.
And we've addressed the, like, it's not useful if you're obsessed with secrecy. But obviously a patched bug is very different from an unpatched bug in that theoretically an unpatched bug you can always exploit. Whereas a bug that can be patched
it might be patched on the target that you're going after. So it goes from a high-90s chance to maybe a 50/50 or high-70s or something. Yeah, I would have thought there'd be this dynamic where, for the types of bugs that intelligence agencies ship off to Microsoft to get patched, they're trying to protect organizations that I'm assuming are better at patching than average.
They're not going after the small-to-medium enterprise market. Yeah, that's right. And so for the cut and thrust of intelligence agencies going back and forth on the cyber battlefield or whatever, that kind of strategy probably is pretty good. But in terms of the whole landscape,
Patching is probably... Yeah, and that's an excellent point, in that the goal of these agencies is not to make the internet more secure. That's not a mission statement that they have. It might be to make it more secure for Australian businesses or whatever, but it's
not the sort of altruistic, improve-it-for-everyone goal. I think the UK's strategy says that their goal is to make the UK the safest place to do business online. Yeah. So when you report a bug to a vendor and the vendor takes, you know, 90, 180 or however many days to issue a patch,
and then that patch gets rolled out piecemeal by different people. For the intelligence agency, they care about the Department of Defense network, government networks, maybe the really big enterprises, the national-security-level stuff. They want those guys to be patched. We want our nation to be secure against this attack. They don't really care that making that bug public
via this patch mechanism, or via the release or whatever, is going to increase the number of people who can use that bug. Right, right, right. That's not a concern at all. Yeah, yeah. I guess just to spell out the logic, or the flow of events there: people release a patch, and then invariably
researchers will figure out what the patch is fixing. And therefore, just the process of patching something and pushing it out publicly inherently tells people who want to find out how to exploit it, more or less. So it's very much a double-edged sword. So you get a population who will patch and you get a population who won't patch.
And you get a population who will reverse engineer the patch. So there's sort of three things going on. Right. So you go from a small number of people who have the exploit using it against whomever they're using it against. I'm going to say it's a small number of targets, but, you know, it might not be. But anyway, so you've got like a small number of users against a small set of targets and then it gets patched.
And now you have a large number of users, in that anyone can use this exploit because everyone knows about it. So anyone can use it. But you also have a smaller pool of vulnerable machines. Yeah, I would have thought that that pool would be the machines that governments likely care about less, but that may still be worthwhile targets for criminals.
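To put toy numbers on that shift, here's a small Python sketch of the two pools after a patch ships. The install base and the weekly patch-adoption rate are assumptions invented for this example.

```python
# Toy model of the post-patch landscape: the set of people who can use
# the exploit jumps from "whoever had the 0day" to "anyone", while
# patch adoption steadily shrinks the vulnerable pool.

install_base = 1_000_000   # hypothetical number of exposed machines
weekly_patch_rate = 0.30   # assumed fraction of the remaining
                           # unpatched pool that patches each week

vulnerable = install_base
for week in range(1, 9):
    vulnerable = int(vulnerable * (1 - weekly_patch_rate))
    print(f"week {week}: ~{vulnerable:,} machines still vulnerable")

# After eight weeks, a long tail (~6% here) is still unpatched: the
# population that criminals, rather than governments, tend to go after.
```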
Right. So I guess the way to sum that up is that people fetishize 0day because of a heritage that's left over from... The good old days. Yeah. It's a nostalgia thing. Maybe not nostalgia, but it's very much a legacy hangover. It's sort of this conventional wisdom that was calcified and ossified back when it made sense. But that was 20 years ago.
And so much has changed since then. So you wouldn't argue that people shouldn't disclose bugs and we shouldn't do patching? Correct. Yeah. It's just that they're not the complete solution. They're a partial solution. It probably works pretty well for some organizations some of the time.
Yeah, and there needs to be this nuanced understanding that when you report 0day, you don't remove it from the arsenal of the bad guys. You remove it from the arsenal of some people, but you add it to the arsenal of a lot more people. And that's the one side of the equation: that you're increasing the number of threat actors that are using this actively. Right.
And on the other end, the number of people who can protect themselves goes up because you can apply the patch. However, the number of potential victims also goes up because... Now that sounds to me like an argument for...
Things like Chrome and iOS, where patches or updates get pushed out pretty much automatically. Yes, absolutely. So my opinion is that reporting 0day is only one piece of the solution, because the other side is mandatory patching. It has to be this sort of, if an 0day gets made public, then all the systems need to be fixed. Right.
Otherwise, you're not necessarily decreasing the number of people who are going to get hacked. Well, so the life cycle of an 0day, or a vulnerability, we've described so far is that it gets created somehow. It exists in some software. It can get exploited at times. And then there comes a time when it's discovered, at least in this life cycle that we're talking about.
There's a patch issued and then what was once a closely held secret becomes a, I don't know, a kind of public good in a way for better and for worse in that people can use it to protect themselves, the knowledge of that vulnerability. They can use it to protect themselves by patching or other mitigations or whatever. And at the same time... And detection on the wire or in the, you know, et cetera, et cetera.
Yep. And at the same time, that same knowledge that is used for patching and protection is used to attack a whole other swathe of people who either don't care, don't know how to, or aren't in a position to protect themselves. And so it's...
It's not a panacea, but it seems like, at least from a government point of view, you want your organizations to be in the group that patches and cares. And so, well, what else are you going to do? One thing that we haven't looked at is the burning process. So people have this fantasy idea that, if I'm using an iOS device, I don't need to worry that much about being hacked because I'm not worth burning an 0day on. Right. Right.
And the thing is, unless I actually have a way of detecting that my device has been compromised and figuring out how, then there's a 0% chance that attacking me is going to burn an 0day.
Like, unless I can actually do those things, then obviously from the point of view of an agency using this stuff, you have to assume that, you know, if you're attacking a thousand people, I mean, and these are not like random people, right? These are like government diplomats, people at intelligence agencies. People at Kaspersky. Hypothetically, yeah.
I'm referring to a blog series Kaspersky released about how they'd been targeted by, presumably, a state actor. And we spoke about that a while ago, I think. Right. So the thing is, this idea that using a bug will get it burned. Yeah.
is from these intelligence-agency risk-assessment types, who tend to use bugs against hardened targets. Right, yes. Or against targets that are under increased scrutiny, or more likely to be monitoring. Yeah, so it's true in that case: intelligence agency versus another government or another state or whatever. Right, right. But it's much less true...
against the majority of just sort of regular, everyday targets, right? If you're going after your regular-sized, like, lawyer, law-firm sort of thing, that you're going to ransomware and take them for $200,000 because you can, there's basically a 0% chance that you're going to be detected and that you're going to burn it.
Not that you would need to use an 0day, but the point being that you wouldn't need to make that risk assessment. You wouldn't be concerned that going after this small 10-person shop is risking my ability to use this capability in future. Right. I guess a different way of operating: there have been a couple of cases where PRC-linked actors have just gone hell for leather as soon as a zero day is, air quotes, burnt. Right.
Yeah, we call that a land grab, where every Exchange server is vulnerable. And so they will get literally every hand on keyboard that they can to try and get every Exchange server.
Yeah, yeah. So that's, you know, notionally that zero day is tremendously valuable, because with the one I'm thinking of, you could get onto any Exchange box anywhere. But as soon as it's burnt, instead of just retiring it, they're like, well, go for as much as we can. So, yeah, you go from this cautious use of a precious capability to the capability losing all value overnight, right?
And so overnight, you try and squeeze everything you possibly can out of it. Yeah. Yeah. So I guess the overall message is that the value of a zero day depends very much on the context and the situation. Yeah. The value of a zero day depends on who's asking. I think that that's our core message is that there's these old ways of thinking and
and they are not correct anymore. Like they're based on risk assessment theories for intelligence agencies. They're based on the way that it used to work 20 years ago for hackers.
And we're doing ourselves a disservice to keep using that model for how to think about 0day. Yeah, yeah. Like, that totally makes sense to me, in the sense that even those agencies which use those risk assessments, I don't think they behave exactly like that anymore. So it's a holdover from an era that doesn't exist anymore.
So what you're really saying is that we should just retire the phrase "0day has been burnt" and we should just say, 0day has been... 0day has been democratized. The democratization of 0day. There we go. I think that's it. It goes from the hands of the elite to the hands of everyone. And you can patch it, or you can hack. Yeah. Thanks a lot, Grugq. Thanks a lot, Tom.