Srsly Risky Biz: The US Government's cyber insurance plans are silly

2024/8/15
Risky Business News

People: Patrick Gray, Tom Uren

Topics
Tom Uren: The US government's proposed cyber insurance backstop may not make sense. The plan aims to close gaps in cyber insurance coverage and improve overall cybersecurity. However, it lacks an economic justification, because online activity has not stopped on account of cyber war risk. The plan would also be difficult to implement: it is hard to identify effective security measures and to weigh their cost-effectiveness. Tom Uren argues the government should consider whether there are more direct and effective ways to improve cybersecurity rather than relying on an insurance backstop. If the plan's goal is to improve overall security by having insurers raise security requirements, that is a clever strategy, but not the best option. Patrick Gray: Agrees with Tom Uren that the proposed backstop may not make sense, and that it would be difficult to implement, since it is hard to identify effective security measures and measure their cost-effectiveness. Patrick Gray: Compared with 2016, today's media handle leaked material far more carefully and are not so easily exploited to interfere with elections. Journalists are now more professional and cautious about verifying the authenticity of leaked information, and outlets weigh newsworthiness against the legitimacy of the source when deciding whether to report on leaks. A US court ruling on geofence warrants could mean that any database search might be unconstitutional, which would have major implications for data security. Because Google has changed how phone location data is stored, the ruling's impact on smartphones may be limited. Storing large amounts of data is risky, and companies should change their data management strategies to reduce that risk.

Chapters
Tom Uren and Patrick Gray discuss the US government's proposal to introduce a cyber insurance backstop, exploring its potential to improve cybersecurity and whether it's the best way to address security gaps.

Transcript

Hey everyone, and welcome to another edition of Seriously Risky Business, the podcast we do here at Risky Business Media HQ, where we talk to Tom Uren about the newsletter that he writes for us, which is called Seriously Risky Business. And you can find it at news.risky.biz if you wish to subscribe, and I recommend you do because it is a terrific newsletter.

Tom's work with us is supported by the William and Flora Hewlett Foundation. And we also work with Lawfare on this one, Lawfare Media. So big thanks to all of them. And also we have a sponsor this week for Seriously Risky Business. And that's Corelight, which of course makes the Zeek

network security sensor, sensor. Wow. That was a hell of a pronunciation there, sensor. And, you know, Corelight and Zeek, it's the industry standard really for turning network traffic, crunching it and getting the security relevant information out of it and throwing it into SIEMs, NDR platforms, whatever. If you don't know Corelight, you absolutely should. But anyway, Tom, let's talk about what you've written this week. You've covered a few things this week, all very interesting, but I want to start off

by talking about this US government proposal to introduce a sort of backstop for cyber insurance. The reason I want to talk to you about it is I started to look at this when I was preparing the main podcast with Adam and I was just like, "Eh, too hard." I'll ask Tom what he thinks about this and to do some research on this. And really what you found is that it looks like to a degree what the US government is trying to do here doesn't really quite make sense.

That's my take on it. So stepping back, the big picture is people have thought that insurance is a way to improve cybersecurity because you give companies a monetary incentive to improve. So the idea is that if they're...

They're ticking off certain security boxes, their premiums will be less. So there's an economic incentive. Yeah, I interviewed a CISO from an insurance, like one of the big global insurance companies once, and they were like, yeah, you know, so we might say to them, well, you need MFA here or your premium is just going to be like insane. And it works, right? So, I mean, so far, so normal. Yep. And...

Every now and then there becomes a problem in an insurance market where people can't get insurance and the government needs to step in. And one example of that is after the September 11 attacks, the insurance for terrorism became prohibitively expensive and insurers basically wouldn't cover it.

And in that case, economic activity ground to a halt in the States. So people were not building things because they couldn't get that insurance. And so the government at that point stepped in. There was a Terrorism Risk Insurance Act, I think it was called, and they provided what's called a backstop. Now, in the cyber insurance market...

Well, hang on, hang on. Just before you go on, what exactly is a backstop? I mean, you know, we're a cybersecurity podcast. Not everybody's an insurance expert. It probably makes sense to explain that. Yeah, yeah, yeah. So the idea would be that in the terrorism case, it's not feasible really for private insurers to insure against terrorism acts like 9-11 because they're just so horrendously expensive and

And so the government said, okay, we'll step in and we'll provide that insurance coverage. So everyone, I'll use the word magically, it's not magic, of course, has terrorism coverage that's provided by the government.

Right, so they'll just underwrite risks that the private sector insurers won't, I guess. And it's automatic and applies to everyone. And that means the insurance companies aren't scared to give like comprehensive policies to people because there's a terrorism risk. Yeah, that's the vague idea. I'm sure some of the details are wrong, but yeah. We're going to get shouted at by insurance people for sure. But anyway, that's okay. Move on.

So what's happened in the insurance market is that insurers are gradually adding more exclusions for pretty serious events like war. So the concern is that if there is a catastrophic cyber war, people will be left without insurance and will be in a world of hurt. Now, so you get different views on this. So one of the people I spoke to, Daniel Woods,

He said basically there's no justification for a cyber backstop because economic activity is still going on. So people are still operating online, they're still using the internet. It's not as if Amazon or anyone is saying we're not going to have an online presence because of the risk of cyber war. So there's no economic justification.

No one is saying, "I think we need to fold this company because we can't get cyber insurance against black swan risk." I get it. In the 9-11 case, yeah, when construction activity stops, obviously it makes sense to try to do something about that. But yeah, the point that he made, and I should point out he's a cyber risk and insurance researcher at the University of Edinburgh. This is someone who knows what they're talking about. He's like, well, come on. It's not like internet innovation has ground to a halt. So I think that's a good point. That's point number one.

Yep. Now, another person I spoke to, Josephine Wolff, her argument is that there's these gaps that exist. They're getting, perhaps they're getting larger. And so the way to tackle that is perhaps you use the backstop as a way to improve security. And you say, okay, insurers, if you want a backstop, okay. But in return, you've got to impose these security measures. Your policyholders must do these things.

Now, the problem with that is it's quite tricky because what are those things you would get people to do that actually make a significant difference? And so my thinking on this is that if there's no economic driver,

The main reason you would do this is as a lever to improve security just generally across the economy, across companies that are buying insurance. And this is funny, so I should also point out that Josephine Wolff, the person you're speaking about, actually wrote a book on cybersecurity insurance, right? Which is amazing. I did not know that people regularly wrote books about this just as a single topic.

And both Wolff and Woods, they've actually collaborated on a number of papers, so they know each other. So they're like this nexus of insurance research who happen to have slightly differing views on this issue. So I found the whole thing very fascinating.

Sorry, what I was just going to get at there though is that if the goal here is to actually introduce a backstop that you can use to get insurers to change their policies such that companies and I'm guessing government departments and all sorts of things start to improve their security, this is almost like a sneaky way of introducing security.

I mean, something that kind of functions like a regulation that isn't a regulation, which I think, so if this is the goal, this is a really clever manipulation by whoever in the US government is cooking it up. When what we're saying is there's not really a practical requirement or need for it, but it is a way for the government to kind of move the needle on, you know, forcing people to adopt better security controls. So that's the part that I find interesting about this, which is maybe that's what they're trying to do.

If that's what they're trying to do, I think that is potentially a great idea. So I'm not 100% convinced. That's the most Tom Uren statement of Tom Uren statements, which is if that's what they're trying to do, I think it's possibly maybe a good idea, yes. That's right. And the reason it's caveated is...

Like, I guess the first question is, what would you get them to do? And so interestingly, Daniel Woods has actually done research on what security measures actually make sense. And, vastly paraphrasing, there is no single checkbox or series of checkboxes. A lot of it is about how you implement things.

And so a company can have MFA and another company can have MFA, and the two can have different end-state security postures based on how well they implement it and all the sort of stuff around it. It's not just the checkbox. Well, this is why I've always rolled my eyes at companies like SecurityScorecard, because the idea that you can just get, excuse me, some sort of uniform

risk measurement without having, you know, so much context on individual environments. You know, it's always been in my mind a fundamental problem with the way a lot of this insurance stuff operates. That said, I think things like, you know, universal MFA, there are certain things that

that certainly help and certain things that are indicative of a better security posture. The type of company that has MFA applied to all of its users is also the type of company that's capable of doing that. And that tends to suggest certain things. But I take what you're saying and I agree with you, which is that there's not really a way to accurately and uniformly and easily measure risk across different companies when risk is so context dependent.

Yeah, and so when it comes to trying to raise the bar, I think to me it is a genuine question. Is this the best way to raise the bar? And if it is, I think, yeah, go for it. Now, Josephine Wolfe described doing this as tricky because there's so many different issues that you're trying to, I guess, weigh up against each other and trade off.

And so the question this left me with is, is this the best bang for buck way to improve security?

Or is there a more direct way to improve security? Now, if you're talking about some of the bigger risks in this category, it's critical infrastructure. Is there a more direct way to try and encourage them to improve security? And I guess perhaps in the context of the US, there isn't, in that there's been this back and forth about whether, for example, the EPA can...

impose cybersecurity regulations on water and wastewater facilities. Well, I think we've already learned the lesson that they can't. They can't enforce that, but they can issue guidelines and whatnot. But also, you know, you and I have spoken previously about how an advantage to the government in the Chinese system is they can just tell people what to do,

But I think anyone who's taken a poke around the Chinese internet would tell you that it's not, you know, that hasn't necessarily lifted the bar all that much over there just yet. And maybe in time it will. But, you know, that's sort of a fundamental question about like, you know, even if you had unlimited regulatory power, is that something that would even help when you've got a limited workforce and expensive controls and whatever? And then we get into a whole philosophical discussion. Yeah, yeah. So like...

I guess I came down on the sense that this felt like a tricky issue where it would be hard to get traction and therefore the government should do something else, basically. And if you had unlimited government capacity to do all sorts of things, yeah, sure, go for it. This seems like an additional tool that would be helpful. But...

I doubt that it's the number one tool. Like if you've got the list of the most bang for the buck measures you could take, is this number one? I don't think so. I think it's further down the list because it's tricky. And I think we're still in an era where there's easier things to do.

Yeah, I mean they are doing some of those things, right? But I certainly take your point. Now look, let's move on to another thing that you covered. We touched on this in the weekly show yesterday, which is this Iran hack and leak targeting the Trump campaign. Of course, they stole a bunch of documents and have been leaking them to the media. What's interesting about this is the media in the United States hasn't taken the bait. And I really, really enjoyed reading what you've written.

about this because it's a point that I intended to touch on yesterday and didn't. One of the big differences between now and 2016 is that in 2016, you know, a leak of stolen material had such novelty that people were writing these pointless stories about like internal DNC politics that really, I mean, weren't

even that much in the public interest. It's just that it was an insight that came through stolen documents and everyone was writing about it and it sort of gummed up the press cycle. Whereas now, publications have kind of come out and said, "Yeah, here's what was leaked to us, these documents. We're not really going to report on them." You know, case closed and the news cycle moves onwards. And the point you made, which is again, is one that I intended to make yesterday, is that if we've raised the threshold

at which publications will deem something newsworthy when it's in a stolen archive. Like, that's a win. So, of course, if they receive leaked material that's full of evidence of, like, felonies or whatever, or some really explosive stuff, they're probably still going to report on it. But what they're not going to do is let this turn into a massive distraction during an election campaign. And that is something to feel good about. Yeah, I think there's...

That's definitely a win. There's also, you look around, and for example, Christopher Bing. So he's a cybersecurity reporter. And he's, you know, just on Twitter, he said, you know, here's the things that you could do to kind of verify these documents. And he's got a little list of things. And so I think the reporters, not every reporter, but there are reporters who are much more savvy about how you would actually go about checking the veracity and things you could do to verify

things technically. So that's also a win. I agree with you totally, though, that if it was full of felonies, it would absolutely get reported. One of the articles I looked at was the Washington Post's media reporter, and he went around to different media organizations and asked them about their decision not to just republish or, I guess, mine the documents.

And it was clear from all the quotes he got that every single one of them had thought, what's the balance here? Is this document newsworthy or is it the hack that's newsworthy? And most of them were like, yeah, we looked at the documents, we thought about it, and we made a deliberate decision. The fact that they were hacked changes the threshold.

And so I thought that was a really, and even just the presence of that piece, you know, here's the media introspectively examining itself about this hack. I thought it was an interesting reflection on how things have changed. Well, we in the media love to give people insight into what we're thinking because often we believe that's the most important thing. Do you wonder though, if this were, you know, if these were stolen documents from like the DNC again, I think one of the differences though, like,

it's the fever-dream, fever-swamp right-wing media

that would go ape with this stuff. Do you know what I mean? So I do wonder if one of the reasons this is being treated a bit more responsibly is because, you know, Newsmax, Fox News, those sorts of outlets aren't really, you know, this stuff is damaging to their guy, would be damaging to their guy if they were to focus on it. Whereas, you know, in this case, it's like outlets, like you said, like Washington Post, New York Times, CNN, whatever, you know, they're the ones looking and saying, no, no, we should be responsible here. I just wonder how much of this

how much of the win here is just because of which side of politics it happened to affect. You know what I mean? Which makes me feel a bit icky, but yeah. Yeah. One of the things I found quite hypocritical was the Trump campaign's

statements about the material. Oh, this is outrageous. This is how dare someone do a hack and leak, right? And they're out here trying to destroy democracy. Yeah, yeah. And that's contrasted with what Trump did in 2016, which he basically said, you know, Russians, if you're out there, go and hack away at Hillary's email. I mean, you can imagine Harris coming out and saying, Iran, if you're listening... LAUGHTER

But I mean, the fact that that seems so ridiculous is sort of a sign that we've moved on. But look, that was all very interesting as well. Now, another thing that we covered on the show yesterday, but you've got more detail here, is this court ruling in the United States that has found that geofence warrants might be, in fact, unconstitutional on Fourth Amendment grounds, which, you know, is very, very interesting and all of that. But I remember seeing that there was, people thought that this was a nuclear take

from the judge, but I couldn't remember why. And you've gone and found, I think it's a blog post, from Orin Kerr, who is a professor of law at UC Berkeley, and you've quoted from that blog post. Basically, what he's saying is that this decision could be interpreted to mean that any database search, not just a geofence query, might be unconstitutional.

Yeah.

Yeah, no, it actually conflicts with other previous rulings as well. But Kerr says the reasoning in the judgment is that when you're searching a large database, and it has to be large, but we don't know how large, because you're examining the records of all the individuals in the database, that's an unconstitutional search, even if you're just searching for particular people.

you know, very specific records, you know, that you can uniquely identify. And so that means that it applies not just to geofences, but anything that's a database query. So, you know, cell site location information, keyword searches, like everything is a database nowadays. So that would be, I think, very, very problematic if that was found to be right. Now, in terms of smartphones, because when you think of geofencing,

My first thought is about smartphones. That is probably a moot question because these geofences used to work on Android devices because Google kept location data centrally, but they've moved away from that. So now location data is kept on devices. I'm pretty sure Google just wanted to get out of the business of satisfying geofence search warrants.

Well, I think it's also a store of information that is quite appealing to attackers as well. And it would be scandalous if it got out. I think Apple's kind of moved the needle on a lot of this thinking, like with the way that they're doing their end-to-end encrypted stuff.

you know, iCloud backups and stuff. And, you know, I was on a briefing call with them about that. And really, you know, the thinking is, well, breaches happen. One day it could be us, right? If we're in a position where we can't really guarantee that this data isn't going to get walked, we need to change our thinking, change our model, right? So I think it's great that Google's done that. I don't think it's just about not wanting to serve up geofence warrants. I think it's a bigger issue,

which is just keeping all of this stuff is a liability. We used to say that data is the new oil. You remember when that was the big thing, collect as much as possible. I think now people are realizing, and I can't remember who coined this, but they're saying it's not the new oil, it's the new uranium. And storing it is risky. So in terms of smartphones, I think it probably doesn't make any difference, this ruling, but as Kerr says,

Possibly vast implications. Yeah, if it makes all database searches unconstitutional, that might be an issue. All right, Tom, we're going to wrap it up there. You've written about a whole bunch of other stuff in the shorts section of the newsletter. Once again, people can go to news.risky.biz and subscribe. But, mate, thank you very much for this conversation. I always enjoy it and I'll look forward to doing it again next week. Thanks, Patrick.