Container Security, with Michele Chubirka

2024/10/15
Kubernetes Podcast from Google


Hello and welcome to the Kubernetes Podcast from Google. I'm your host, Kaslin Fields. And I am Abdel Sghiouar. This episode is special. We collaborated with the folks behind the Cloud Security Podcast by Google, Anton Chuvakin and Tim Peacock,

to bring you a joint episode. We had the pleasure of jointly interviewing Michele Chubirka, a cloud security developer advocate at Google. We talked about VM and container security, debunked some myths about isolation, attack surfaces, the immutability of containers, and more. We are so excited about this episode, so make sure to listen to the end. But first, let's get to the news. ♪

Google announced the availability of NVIDIA NIM on GKE. NVIDIA NIM, which stands for NVIDIA Inference Microservices and is part of the NVIDIA AI Enterprise platform, is a set of containerized microservices that helps run common AI models across multiple platforms, including Kubernetes, with a single command. The Kubernetes Steering Committee election results for 2024 are in. Three new members joined the committee this year:

Antonio Ojea and Benjamin Elder from Google, and Sascha Grunert from Red Hat.

The steering committee for the Kubernetes project oversees the governance of the entire project. The members of the committee serve a two-year term. The schedule for KubeCon and CloudNativeCon India is now live. The event will take place in Delhi from the 11th to the 12th of December, 2024. This will be the first time a KubeCon and CloudNativeCon event is held in India. Head to the show notes for a link to the schedule. Diagrid announced the beta availability of their managed Dapr platform, called Catalyst.

The platform offers a set of managed services for building microservices applications on top of Dapr. We covered Dapr in one of our episodes with Mauricio Salatino, aka Salaboy. Don't forget to check out the links. They are in the show notes. And that's the news.

Hello and welcome everyone to this special edition of the Kubernetes Podcast and the Cloud Security Podcast. We are excited today to be doing a collab podcast where we are interviewing Michele Chubirka. Michele, would you like to introduce yourself? I'm a cloud security advocate at Google, a former security architect in finance, at places like Capital One and Bank of America, and a lot of other things.

I've also worked in academia, and I'm one of the few security people who will actually touch container security in Kubernetes. We love to hear it. That's exciting. Maybe not the rare part, but we love people working on container stuff. And we have a very special episode today with more hosts. We also have Anton today. Would you like to introduce yourself?

Sure. I kind of feel like I'm somewhere in between guest and host, based on your introduction. So yes, we are doing this together. And I am the usual host, well, one of the two hosts, of the Cloud Security Podcast by Google, which has touched Kubernetes a couple of times. I mean, as rarely as Michelle pointed out. So I guess that confirms the unfortunate trend of security people not enjoying Kubernetes as much. But I think that this will be very fun.

And yes, I work for the Google Cloud Office of the CISO, so I keep forgetting about this. And I think we have probably a fifth member, more in spirit than in person, who is your co-host, Tim. Correct. Yes, Tim is virtually here, and maybe he'll be back by the time this airs. So this will be fun. All right. So on a typical episode, how much annoyance do you provide versus how much Tim provides? Or are you doing both parts' work?

Oh, no, no, no, no. It's mostly me. Tim just provides the clues and I think the trendy word would be the prompt. He provides the prompt and I provide the annoyance. It's cool. And for those who are usually listeners of the Cloud Security Podcast, we should introduce ourselves, Abdel.

Yes. Hi, everyone. I am Abdel Sghiouar. I am a cloud developer advocate at Google. I actually live in Europe, and I do stuff with Kubernetes, basically. And I'm Kaslin Fields. I'm the other co-host of the Kubernetes Podcast from Google. I do Kubernetes community things. I'm a co-chair of the SIG for contributor experience, which we will not go into today. And I also do a lot of stuff in the cloud native community.

Excited to be here. This is exciting for me because it's part of the ongoing journey that I've had with Kubernetes. I remember the first time I was working for a small software company called Ellucian, and I remember somebody coming up to me, one of the distinguished engineers saying, you know, we think we want to use this container thing.

on Kubernetes. You want to help with that? And I went, yeah, sure. That's the path. That's how it started.

And just for reference, because people are mostly going to listen to this podcast on audio: when Michelle was making the face of her colleague asking the question, she was making a disgusted face. I'm like, would you help us? Yeah, whatever. Like an uninterested person, whatever. All right, so I'm going to go ahead and ask the first question. And the first question is...

approaching the question of what is more secure, VMs or containers? And I realize this is basically a big can of worms I'm opening here. So which one is more secure?

I think that's the wrong question. I mean, it was kind of bait. It's bait. Yeah, it was bait. You're right. Right. It's bait. There's no debate about it. And in my experience, I mean, like I said, I'm one of the few security people that... you know, I've worked at multiple places over the years as an architect, and it was always like the hot potato. None of the security folks really wanted to be the ones to partner with

platform engineering or cloud engineering on Kubernetes and containers because it just seemed so fraught. The more I've done with it, though, I don't think it's a question. I think it's that standard architect response of, it depends, right?

It depends on the context that you're going to use it. It depends on what I like to call Conway's law of cloud security, how your organization is constructed and does security get along with platform engineering? Will platform engineering take on the security of containers and Kubernetes or will they just act like it doesn't exist? So your mileage may vary in that

If you have this really good VM infrastructure with integrated security that is managed and supported,

but you haven't done anything for containers or Kubernetes, then obviously your VMs are going to be more secure right then. But if you have a really engaged, proactive platform security team, and the security engineering folks want to work very closely with platform engineering, you could get containers that are just as secure. That may not be the answer you want to hear, but obviously,

I think it depends. I think it's how you use it. I mean, how are you going to protect the edge? What are you going to expose? I mean, it's all about the architecture and, as I called it, Conway's law of cloud security, right?

Those two main factors together are going to decide which one is going to be more secure for you. I mean, people still debate Linux or Windows, which is more secure. I know it's a very 2004 question, but it still comes up. And I think most intelligent commentators give that answer. Namely, it depends on the org, right? Yep. I remember this also brings me back to the early days of Kubernetes, very relevant for its 10th year. This was something that came up a lot in the early days and still does today. Yeah.

I remember a meetup that I went to in the very early days of Kubernetes where this clicked for me. The speaker gave this wonderful presentation about how differently you would have to implement security for containers versus virtual machines, because they're just implemented so differently. So it's really all about the systems that you understand. And if you understand the attack vectors and how the technology is put together, then you can secure it better. Yeah.

I also think it's a social issue. You have to have teams that are, you know, that idea from DORA of a generative culture, right, where people innovate together, where there's a level of trust, where they embrace novelty. I think I've been in both types of organizations. And, you know, as the security architect slash engineer myself,

I allowed myself to be schooled by the Kubernetes and container folks, by the software developers and engineers. You have to meet each other where the other person is so that you can

build security together, right? When you have those kinds of teams working together like that, then you achieve it. In DORA, you know, if you read the report, it talks about how that kind of elite generative team will be that much more secure, will deliver more secure applications. I mean, this is actually a little...

strange, because ultimately, if it's all about the culture, then what about the real technical differences? I feel like there is still a bit of a balance in question. So I want to touch on one of the more technical arguments.

It's this dreaded isolation argument. And some people will make the case that VMs offer strong isolation. And of course, I've seen people who very much cringe when they hear it, because there are VM escapes, there are other problems. So how would you treat the isolation differences? I think that there are still built-in, not just cultural, but built-in security differences. So let's clarify a couple of things. I never say isolation with containers. I say segregation, because it's a shared kernel.

Right. Or you assume that containers offer no isolation. Some people do use that word. I take issue with it, because if it's a shared kernel, I personally don't consider it isolation. I consider it segregation.
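
One quick way to see the shared kernel in action is a minimal sketch like the following (our illustration, with an arbitrary image choice): the script just prints the kernel release it sees, and it prints the same value on a Linux host and inside a container on that host, because there is only one kernel. A VM would report its own.

```python
# kernel_check.py -- print the kernel release this process sees.
# A container shares the host kernel, so the output matches the host's;
# a VM boots its own kernel and reports something different.
import platform

print(platform.uname().release)
```

Run it twice to compare, for example: `python3 kernel_check.py` on the host, then `docker run --rm -v "$PWD":/app python:3.12-slim python /app/kernel_check.py`.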

Now, I think where that becomes okay... sure, in my experience, and this is anecdotal, I've found more instances of container escape than VM escape. Now, I'm not a Mandiant person. I'm sure that there are very

elegant, probably hidden, nation-state attacks that can achieve VM escape. However, in my experience, I tend to see more container escape than VM escape. Now, I think the trick to Kubernetes, and I came up with this a while ago, is that

Kubernetes is about optimization of applications that are homogenous, right? You're trying to optimize compute of homogenous workloads, right? So if you're...

Organizing, I am all about resource hierarchies and trust boundaries and things like that when you're designing the segmentation of a network or your infrastructure. So if you're putting a PCI DSS application on the same cluster as some internal backend application that is just for your internal users, I mean, that's not really good, right?

So I think it's more about the guidance of how you organize things. I think that you have to be a little more careful with trust boundaries on your Kubernetes clusters. It's about...

between VMs and between containers, on whatever. By the way, I want to recognize that there's also Nomad still out there. A lot of people still use Nomad, right? I think it's more about how you organize your containers and your VMs, and you look at ways to even out your mitigations, understanding that the mitigation techniques will be different in either of those environments, right? Does that make sense, what I'm saying? Yeah.

I think it does. But I do have a very quick follow-up question on that. So whenever people discuss Kubernetes security, security in the Kubernetes context, I think that there is usually a lot of focus on application containers. And I never hear people talking about system containers.

So the containers that make up Kubernetes itself. Oh. Your kubelet, your API server, your control plane, and all that stuff. Because that's also code that most of the time you're not building yourself. You're just downloading it pre-made, or you are even downloading a pre-made container to run your... this is assuming you're crazy enough to run your own Kubernetes, right? But like, how do you kind of factor that in, right? Yeah, if you're self-managing a cluster... by the way, that's one thing we haven't talked about, hosted versus self-managed.

I think that leads into another problem that people have. Kubernetes is a large endeavor. It's a lift, right? And it's a development lift. And I don't think a lot of people...

It's almost cafeteria syndrome, right? Their eyes are bigger than their stomachs when they decide to say, no, we're going to self-manage Kubernetes, right? And they don't understand the level of commitment. And so what I've seen mostly from that to sort of mitigate that issue with the control plane and trying to protect it is...

you have a separate set of clusters for the control-plane kind of applications, the management applications and stuff. And that's how you can... because then you start to segregate using the tools that you have: specialized network policies and things like that.
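
As a minimal sketch of that kind of segregation, assuming the official kubernetes Python client and a CNI that actually enforces NetworkPolicy, this applies a default-deny-ingress policy to a namespace (the namespace name is hypothetical), so workloads there only receive traffic you later allow explicitly:

```python
# default_deny.py -- apply a default-deny-ingress NetworkPolicy to a
# namespace, a starting point for segregating management workloads.
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig in the default location

policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="default-deny-ingress"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(),  # empty selector = every pod
        policy_types=["Ingress"],  # no ingress rules listed = deny all ingress
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(
    namespace="mgmt-tools", body=policy  # hypothetical namespace name
)
```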

You have lots of options to manage all of that, but usually what you do, what I've seen when people have a large Kubernetes infrastructure is you separate that out, right? You separate out that kind of management stuff into a separate set of clusters. You can also do the cluster in the cluster thing. That's another way of doing it so that the developer has...

a Kubernetes cluster that's in Kubernetes, like Kubernetes in Kubernetes. You've heard of that, right? Yeah, kubectl. Yeah, you can do that. I mean, there are lots of ways of attacking that problem, but the bottom line is, I think it's about...

trust boundaries. How are you going to lay out your trust boundaries? And the problem is, it's just like cloud. People jump into cloud and they don't organize it. And now you have a brownfield environment, and you have to try to organize it on the back end. And it's the same with Kubernetes. They stand up a cluster and they throw a bunch of stuff on it together, a lot of dev stuff, and then maybe somebody accidentally makes it prod. And now you have to rebuild a bunch of clusters, and you have to build them from the perspective of segregation.

I think that's really what it comes down to is if you understand how the boundaries work, for example, in Kubernetes, you have the container boundary, you have the pod boundary, you have the namespace boundary, you have the cluster boundary. If you understand what on a technological level those mean, then you can better choose which security boundary you need for any given application. And I also want to call out that Kubernetes, especially right now, is doing a lot of work to enable new kinds of workloads

to be run on clusters, especially with the whole AI thing going on. There's a lot of work in the community going on to better enable use of different types of hardware by different types of workloads. So it has features built in for a wide variety of workloads, but how you understand the security boundaries determines how you use clusters, I think.

Which, by the way, reminds me: people who are born... well, not necessarily born, but who grew up using traditional data centers, may have particular challenges in understanding these boundaries, because some of them don't have direct equivalents in the quote-unquote 1990s that we keep joking about on our podcast, like, your security is from the 1990s. But if your security is from the 1990s, the boundaries would not make sense to you, and you probably would make a mistake. That to me is kind of this whole cloud native versus

data-center-plus-servers thing. It's particularly challenging for people who don't have that in their heads.

Well, I started my career in the 2000s and worked in data centers. And I can assure you that at the time, the boundaries were your firewalls and WAFs for the outside world. But within the data center itself, there were basically no boundaries, no microsegmentation, nothing. Pretty much everything could talk to everything. And that was still true 15 years ago. So I don't think it has changed that much.

Yeah, I think this is where identity actually becomes really important. And that means application identity, because you can't think about it like... the IP addresses are fungible. The IP addresses are ephemeral, right? It's not this immutable environment where you have an IP address and you have a network. It's now fungible,

just temporary. Everything is, really. It's the same with certificates. And that's why it's so important for you to have identity in terms of, you know, TLS and mutual TLS, and understanding that that's the way you're going to achieve security now, or one element of it, right?
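
To make the "identity over IP addresses" idea concrete, here is a minimal mutual-TLS client sketch: the client proves who it is with a certificate and verifies the server the same way, so the connection does not depend on where the traffic happens to come from. The file paths and hostname are hypothetical; in a service mesh, this handshake is typically done for you by a sidecar or node agent.

```python
# mtls_client.py -- identity-based connection: both sides authenticate
# with certificates instead of trusting source IP addresses.
import ssl
import urllib.request

ctx = ssl.create_default_context(cafile="ca.pem")  # trust anchor for the server
ctx.load_cert_chain(certfile="client.pem", keyfile="client-key.pem")  # our identity

with urllib.request.urlopen("https://payments.internal:8443/healthz", context=ctx) as resp:
    print(resp.status)
```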

In traditional environments, it messes with things. And I go back to Conway's Law, because if you have this environment where the security is very dependent on these traditional segmentation tools and methods, then

you're going to drop in Kubernetes, and if you don't have members of a team that are willing to learn that, that are willing to rethink the way they're segmenting and putting in controls, then the security is not going to happen, right? It's lost. Right.

Service mesh, that was the term I was looking for. I don't know why it left my head. And everybody thinks service mesh also is going to solve the problem. And I've seen that a lot with network security teams. They go, yeah, we want some of that service mesh. What they don't understand is they just signed on for a level of complexity that's about to blow their minds, right? Yeah. Yeah.

We've talked a lot here about how implementing your security depends a lot on understanding the underlying systems. And one thing I want to discuss with regard to security is attack surfaces. That's a very broad concept, I think, in the security world. And I think a lot of folks out there...

A lot of the problems, I think, come from folks not fully understanding certain parts of the system while understanding others. So what advice would you have, Michelle, for folks considering the attack surfaces of containers versus VMs?

Okay. And if I may stick my nose into this for a very quick second: people assume it's smaller for containers, which is interesting, in that it kind of comes as the unspoken assumption that it's smaller. And I am afraid of unspoken assumptions. Yeah. You know what? It's the same challenges as you have with SRE. Steve McGhee is going to love me for this, right?

I think... so you've taken this really... this monolith, right? It's not as common that people are doing these greenfield, beautiful cloud native applications. That's what we want, right? They're so lovely and decomposed, and microservices, you know, that are so

identifiable. Unfortunately, that's not usually the way it plays out, right? So you're taking these sometimes legacy application monoliths and maybe you're deconstructing them. Well, here's the problem. A lot of times people didn't have a clear sense of what those monoliths were. Now you're decomposing them and now they've got to be reliable, but also secure. So yeah, like Anton's saying, that gets really confusing, right?

And so now you've broken all this up, and they've still got to talk to each other, and there are paths for that. And you don't know... also, you'll hear a lot of times security people come in and they'll say, oh, use seccomp and all these other mitigating controls to restrict access between the components in the right way. But a lot of times developers didn't take the time to understand,

I need access to these particular libraries, and this is what my application uses. Because really, at its heart, a container is a software package. That's really what it is. And in my mind, a lot of times the mistake is you think it's apples to apples when it's not, right? So now you've got this really complexified,

deconstructed application that you're going to have to try to build controls around as it connects and communicates with all the components it needs to talk to. But you may not understand it because you're the security person. And unfortunately, the software developer and the engineering team may not have great understanding of it because they had to, somebody told them they had to put everything in containers. Right.

I've worked in places like that. Nope, we're putting everything in Kubernetes because I heard it's more efficient. So now they're rushing, and maybe they're cutting corners, and they don't know. So it's complicated. I think, for the attack surface, you now have all the attack surface of the supply chain, right? You've got to worry about the container supply chain because, right, it's a software package, right?

Are you going to use a golden container? Do you have a process for creating a golden container? Are you going to let your devs download it from Docker Hub, right? Do you have the security tools to actually inspect container images? Or do you have like this old fashioned threat and vulnerability management pipeline that only understands VMs?

Okay, so now you've got to create this whole image assurance process that you may not have, which is sort of adjacent to your software scanning process with your SAST and DAST tools. Okay, but now you also need runtime assurance. You need to know: has there been container drift? Are people logging in, SSHing into containers and changing them while they're live? Do you have a mechanism in place that says, okay,

if it drifts, we kill it? We bring it down, right? The attack surface is more about the processes and the lack of knowledge as you containerize your legacy stuff into that new environment. Was that... that was a lot. I'm sorry. That was probably overwhelming. I

actually do a lot of talks on this. I'll give you links. I have this great talk I used to do a couple of years ago on how container security is all about the supply chain. I checked it the other day. It still looks pretty up to date. So I'll send you that.
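
A minimal sketch of that "if it drifts, we kill it" rule, assuming the kubernetes Python client; the expected-digest map, image names, and namespace are illustrative, and a real setup would source digests from a signed build record rather than a hardcoded dict:

```python
# drift_kill.py -- delete pods whose running image digest doesn't match
# what we expect: detected drift means the container is dead to us.
from kubernetes import client, config

EXPECTED = {  # image name -> pinned digest (illustrative values)
    "registry.example.com/shop/cart": "sha256:9f86d081884c7d65...",
}

config.load_kube_config()
v1 = client.CoreV1Api()

for pod in v1.list_namespaced_pod("prod").items:
    for status in pod.status.container_statuses or []:
        # image_id looks like "registry.example.com/shop/cart@sha256:..."
        name, _, digest = status.image_id.partition("@")
        expected = EXPECTED.get(name.removeprefix("docker-pullable://"))
        if expected and digest != expected:
            print(f"drift: {pod.metadata.name} runs {digest}, expected {expected}")
            v1.delete_namespaced_pod(pod.metadata.name, "prod")
```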

So first of all, usually in my talks, I say that containers are just glorified zip files, but I like your description slightly better. I may steal that. So, I've been to enough talks around container security, and there are typically two things that are mentioned

very often as factors of security, as in, the arguments that make containers more secure than VMs. Okay. Patching and immutability. So: how fast you can patch a container makes it secure, I'm putting air quotes here, and because it's immutable, it's secure. I could understand the first one. I never understood the second one. Okay. Okay.

It's only immutable if you have enforcement in place to make it immutable. If you allow Telnet and SSH or other ways to interact with a container, it's not immutable. Because once it's deployed, no touch. You don't touch it. The second somebody touches it, if you detect that, it should be killed, right? It's dead to me,

that container, right? That's what it is. There are security tools and mechanisms that you can use to check for that, right? That means, when you say it's immutable: it's not immutable unless you treat it that way, unless you make it immutable. Like, you don't let people log into production. You don't let people move around and interact directly with containers, right? Patching... I actually take issue with the term patching in reference to containers,

because you really aren't patching. Patching implies that you're adding something to a system that's running, and you're actually violating the immutability rule, right? What should happen is a redeploy. If you have to update something, say, on a container image, you just redeploy it, just like software, right? You just rebuild the container and redeploy it.
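
Immutability, then, is something you enforce rather than assume. One concrete lever, sketched here with the kubernetes Python client (the image, digest, and namespace are hypothetical), is a read-only root filesystem, so even someone who gets a shell in the container cannot modify it in place:

```python
# immutable_pod.py -- deploy a pod whose container can't be modified in
# place: root filesystem is read-only and privilege escalation is off.
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="cart", namespace="prod"),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="cart",
                # pinning by digest keeps redeploys reproducible
                image="registry.example.com/shop/cart@sha256:9f86d081884c7d65...",
                security_context=client.V1SecurityContext(
                    read_only_root_filesystem=True,   # no writes to the image
                    allow_privilege_escalation=False,
                    run_as_non_root=True,
                ),
            )
        ]
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="prod", body=pod)
```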

So, careful with that, though, because we did have an episode on container security at some point, and someone had shared a survey where,

according to their data, 60% of people actually SSH into their containers. And yes, yes, yes. My reaction was: this is like saying 66% of zoos have a dragon in them. The real number is zero, but why are you telling me this stuff? And that stuck in my head for two or three years, because we can all theorize that nobody sane would ever do that. But then somebody shows up who does observability and says 60% of people do that. So how do you, well,

combine it with the real world? You know what? I'm going to take this one back. So I'm thinking... wait, there are two major use cases for Kubernetes. And I saw this... it's funny, you know, a data science team at Bank of America tried to hire me for this reason, because I could understand their use case, right? Which was bizarre. But the use cases are applications, like standard, your web stack applications, right?

and AI applications. And the data science kind of stuff is not the front end stuff, but the back end stuff. The data science is a totally different use case. And I remember having a conniption fit when the data science people were like, yeah, we're using Kubernetes. I went, what? No, no, no, that's cloud native. How dare you? I was so offended. I was so offended by it. And then

And then, you know, I talked to Cloudera. I talked to like all these different companies and I found out how they were trying to use it. And then I realized, oh, wait a second. Kubernetes is really cool for instrumentation. There are all these complicated parts for deploying data science applications. And you can use Kubernetes to do that.

Okay, that's kind of clever. I accept that. I now accept and bless that use case. But that means that you separate the two, right? So yes, you will log in. There really isn't this immutability concept with the data science use case for Kubernetes and containers. And I accept that.

Are you going to mix those two up? Are you going to mix the cloud native application use case with that data science one? Remember how I said you want to keep optimized compute of homogenous applications and how you separate them? You're not going to put those things together, right? That's really important. That's how I treated it when I would, oh, I'm going to use it like a verb, architecting, security architecting environments. Is that helpful?

Yes, people do it, and they will do it. And I will let them get away with it. Also, if they have a dev environment, like a strict dev, not stage, test, or prod, I will let developers, as they're noodling and playing around, do that in a strictly dev environment. You want to log in? Fine. But I let them know, by the way,

I'm not one of those security jerks that tries to implement the exact same security levels and controls across dev, test, stage, and prod. Which, by the way, reminds us of another thing, which you alluded to, but maybe it's worth a very quick visit.

How about the argument that people misconfigure containers more? And that may come from the fact that they don't understand them. Well, because they have a cachet to them, I guess that's a good argument. So is there anything inherent in the system we're discussing that makes it more prone to misconfigurations? Yeah.

It's a layer of complexity. After all, there's an orchestration layer. So what's your take on this misconfiguration-proneness of containers? I don't think that's... I think it's more a case of... Don't be an idiot? No. I think it's more a case of not having the right tools deployed, sometimes, to check for immutability and drift. I think...

With VMs, you have Ansible and tools like that that can go in and harden, and they interact well at the VM level. We don't always have the same kind of tooling available for containers when we're deploying them. And honestly, I think there's also a sense of, I used to make a joke about Kubernetes, that it was a developer response. It's actually a social response to how...

the infrastructure folks took over the cloud from the devs. And the devs went: yeah, we'll teach you infrastructure folks not to give us a VM quickly, not to make us wait three months for machines. We developed this new thing called Kubernetes, and we never have to talk to you folks again. And then the security people and the infrastructure people got wise. They went: wait a second. What?

So I think what happened is that a lot of the security and governance processes in a lot of organizations just haven't caught up. You can do the same controls. It's just that maybe you need to get some specific tools. You need to change some processes. I think you can misconfigure both equally. Your processes just have to catch up.

I think I'll just add one more thing to that. I would be curious about a survey that asks people: how many of you are not pinning the dependencies inside your container to a particular version, and are just using latest of whatever library? Which means every time you're rebuilding your container, your code is changing and your container is drifting, right? I would be willing to bet it's a lot of people.
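
A check like the one being hinted at here can be tiny. This sketch flags requirements.txt entries that are not pinned to an exact version (Python packaging is just the example ecosystem; the same idea applies to apt packages, base images, and so on):

```python
# pin_check.py -- flag dependencies that aren't pinned to an exact
# version; unpinned dependencies make every rebuild a silent drift.
import sys

def unpinned(path: str = "requirements.txt") -> list[str]:
    bad = []
    for raw in open(path):
        line = raw.split("#", 1)[0].strip()  # ignore comments and blanks
        if line and "==" not in line:        # no exact version pin
            bad.append(line)
    return bad

if __name__ == "__main__":
    offenders = unpinned(*sys.argv[1:2])
    for dep in offenders:
        print(f"unpinned: {dep}")
    sys.exit(1 if offenders else 0)  # non-zero exit fails the pipeline step
```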

Hopefully you're using something in your pipeline for SCA, software composition analysis, and you have some kind of container-aware security tool that's scanning it and telling you that information. What I usually do, by the way, is, in my pipeline, I'll typically have a software application security pipeline, and I'll have an adjacent container security pipeline. And the things that you're doing:

First of all, you're plugging in and you're scanning all your containers that are in your repository. You're checking as it goes to deploy to prod or wherever, test and then prod. You're checking, hey, have I seen this container before? Has the checksum changed? Oh, the checksum changed. No, no, no. Go back, get a scan.

And the scan is going to kick off because it doesn't know it. Now you're okay: your CVSS is four or five. My rule says you can't be above eight in this environment. Oh wait, no, you're going to prod. You can't have anything. You need to be below three or something, then you can go to prod, right? So you've got to have... I have something called DevSecOps decisioning principles, which I created. And again,

security is really more about clearly establishing your policies and your trust boundaries and what those policies align with. Like: I will allow this in prod. I will disallow this. This is what is acceptable in stage or test. This is what is acceptable in dev. You establish that, and then the material, as it goes through the pipeline to any of those environments, either passes or fails that set of rules.
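
That decisioning idea reduces to a very small rule engine. A sketch, with hypothetical thresholds and a made-up scanner output shape, of an environment-by-environment pass/fail gate:

```python
# decision_gate.py -- per-environment severity gate: an image either
# passes or fails the rules for the environment it's heading to.
MAX_CVSS = {"dev": 10.0, "test": 8.0, "stage": 5.0, "prod": 3.0}

def gate(findings: list[dict], env: str) -> bool:
    """findings: scanner output like [{"cve": "CVE-2024-1234", "cvss": 7.5}]"""
    limit = MAX_CVSS[env]
    blockers = [f for f in findings if f["cvss"] > limit]
    for f in blockers:
        print(f"blocked for {env}: {f['cve']} (CVSS {f['cvss']} > {limit})")
    return not blockers

# The same scan result passes for dev but fails for prod.
scan = [{"cve": "CVE-2024-1234", "cvss": 7.5}]
assert gate(scan, "dev") and not gate(scan, "prod")
```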

Yes. So now, that way, it's the same, and there are fewer arguments, because everybody knows what the rules of the road are. I'm also really a huge fan of release management, and of having really tight release management rules and having them automated as well. Same thing. And containers just fit into that, as do, by the way, VMs. I treat VMs the same. I think a theme throughout our whole conversation here has been

that I think for a lot of us in our careers, containers have been kind of the new thing that's been up and coming. Of course, now containers...

Awesome.

Really? I'll give you a different answer. I am really excited about the potential of Wasm. I have not personally spent a lot of time with it. You know, I'm a dilettante with Wasm, just like I am...

It's the same thing. It's almost... Wasm is kind of in the same place as Service Mesh. I know there are a lot of places, a lot of huge places that use Service Mesh and use it very effectively. But a lot of times in the enterprise, everybody says, yeah, we're going to get some of that. And then it just hangs like nobody ever gets very far with it or they use one-tenth of its capabilities. Wasm is kind of in a somewhat similar place in that a lot of people got really excited about creating this...

I don't want... it's not a container. An artifact that is slimmer, that's really, really slim and performant. So you have a much smaller attack surface. Why? It's like distroless and from-scratch containers on steroids, right? Almost. Because you've got this really slim artifact that can run, and Kubernetes now understands it and can deploy it.

I don't know, honestly. You'd probably know better than I how many people are leveraging Wasm, or if it's still kind of... will it be the next IPv6? Everybody says we should have it, and, you know, everybody's still doing NAT and PAT... you know, they're still doing translation at the edge instead of moving to IPv6.

Is it going to be the next big thing? I would love to see it be the next big thing. It seems promising. I know there are a bunch of people out there who have jumped on it. I think that's what I'm excited about in terms of Kubernetes security. I like your comparison to IPv6. It does feel a little that way. So it makes me so mad because, you know, IPv6 should be here, but nobody wants to change. Yeah.

Anyway, do you think the future is us SSHing into the containers in five years, or not? No. No. Good. Thank you for your optimism. Yeah. I mean, honestly, some people shouldn't be SSHing. I mean, really, you want immutability. Cloud is about immutability, right? And you want,

like... I worked at one place where we had this rehydration thing, and, you know, containers should have the same. I think for rehydration it was 30 days. And a container, regardless of whether or not it has any vulnerabilities in it, or whether or not it's set off a drift alert or anything,

some places will proactively just say: no, we're going to expire them. After a week, let's replace it, just to be safe. Because you don't know. Sometimes you have zero-days. Your alerting systems may not pick up on anything. So that makes sense: go ahead and proactively replace it. Now it's really immutable. Now you're looking at ephemerality-and-immutability perfection, right? Not everybody's there, obviously, but...
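
Proactive expiry like that is also easy to sketch. Assuming the kubernetes Python client, pods managed by a controller (such as a Deployment) that recreates them from a clean image, and a hypothetical seven-day budget:

```python
# expire_pods.py -- rotate anything older than MAX_AGE, regardless of
# whether any alert fired; the controller recreates it from the image.
from datetime import datetime, timedelta, timezone
from kubernetes import client, config

MAX_AGE = timedelta(days=7)

config.load_kube_config()
v1 = client.CoreV1Api()

now = datetime.now(timezone.utc)
for pod in v1.list_namespaced_pod("prod").items:
    age = now - pod.metadata.creation_timestamp  # timezone-aware datetime
    if age > MAX_AGE:
        print(f"expiring {pod.metadata.name} (age {age.days}d)")
        v1.delete_namespaced_pod(pod.metadata.name, "prod")
```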

I'm going to do a shameless plug here. I have a talk on Wasm. I'm going to link it in the show notes, so if people want to understand what I'm saying... I'm excited for that, because I'd like to do more with it. I just haven't had the opportunity. And if you're not familiar with Wasm out there, it stands for WebAssembly. Originally, the concept was for browsers. It was about allowing you to use various different types of

programming languages across web browsers, instead of just using JavaScript, by doing... to me, it feels kind of like a JVM-style compilation of code into this WebAssembly concept. That's funny. Really? In my head, I'm comparing it to the JVM too, but, you know, we don't want to go there. Yeah, let's not go there. Yeah.

And the infrastructure people somehow saw this and decided this might be a good approach for making process isolation container-ish concepts even lighter weight. So it's actually a very interesting technology. I would recommend checking out a talk on it like Abdel's. I'm excited to see that because I'd like to know, I think people are using Wasm potentially to minimize the sidecar stuff.

That's what I had heard, you know, as much as I try to keep up with it. And I think that's great because a lot of times in Kubernetes, you know, your sidecars and your system, the Kubernetes control plane stuff itself is going to be just as vulnerable sometimes as the, you know, application people's containers, which is terrifying.

And to wrap us up here, I wanted to ask you, Michelle, about another talk that I'm excited to see: your session at KubeCon. Would you tell us a little bit about it? Sure. Basically, it's sort of a career talk. It's called Why Perfect Compliance Is the Enemy of Good Kubernetes Security. And I work on the CIS GKE benchmarks with the GKE team at Google, in Google Cloud. And

I've worked on... I've been a consumer of the benchmarks, and now I've been a contributor. And the one thing, over and over again, is that I end up being the mediator in the middle, usually between the platform engineering team and the security team. I don't know how I ended up in that role, but I'm usually the one that's like,

let's calm down and talk about what's realistic here. Do we really need this specific control? Does this apply here? What's the trust boundary? What's the attack surface like? What's the exposure? What's the risk involved? And I think that's the point of my talk. I want to create a more... and this is from experience. I remember walking in, and I took my happy little CIS benchmarks. And I remember working with my friend Subendu at Bank of America. He's great. You should look him up. He works on OpenShift, right? Yeah.

the Kubernetes we don't like to talk about, I guess. I don't know. I would say: hey, turn on this, because the benchmark said to. I kind of understand it, but, you know... and I turned it on in Minikube, or Minishift, and I'm not sure how this works. He's like: Michelle, that's an alpha feature. I don't think it's a good idea to turn that on right now. And those are the conversations that we should be having, right? We should be having...

And that's what the talk is about. Let's have better conversations about, hey, here's this feature. What's it really mitigating? What risk are we trying to address?

Let's be conscious of that when we plan out and when we try to secure it. It's not about turning the security tool to a unified green color, right? Stop freaking out about the red and yellow in your security dashboard because sometimes it doesn't understand context. I think that's the TLDR for my talk at KubeCon. Wonderful. So...

If you're attending KubeCon North America in Salt Lake City, November 12th to 15th, something like that, then make sure you check out Michelle's talk if you're interested in learning a bit about how to have those important conversations about how you're going to secure a new feature while you're developing it. Thank you so much, everyone, for being on today for this very special episode. Thank you, Anton. Thank you, Michelle. Thank you. That was fun. Yeah, thank you. Thank you.

So normally I would say, thank you so much for that interview, or you would say it, but today we were both on the interview. Yeah, first time we tried having two hosts on the same interview and it went well. Well, three. Oh, three, yeah. Well, four if you count them virtually. We went a little overboard on it this time, maybe, but I think it worked out really well. That was really fun. Yeah, I had a really good time talking to Michelle, a very knowledgeable person.

Very fun, very expressive. That's true. I'm really glad that we finally got to interview Michelle. We had actually talked about doing an interview like early this year, like months and months ago, and we never kind of got all of the pieces together to make it happen. So I'm glad that we finally did. Yeah. And I'm glad also we got to do it with Anton because I listened to the Security Podcast. So

So they are, like, great conversations. I like the dynamic that Anton has with Tim. Tim is on leave now, but the way they bounce off each other and talk about all sorts of topics is pretty cool. Awesome. I do love to listen to other podcasts for examples of how to do those things. Yes. So we talked a lot about container and VM and Kubernetes security, which are all kind of different things, but can be talked about together. Yes.

I like the fact that we kind of focused the conversation itself on containers and container security. And there was quite a lot of myth debunking, in my opinion, about container security in the episode itself, right? Which... there is still a lot of misunderstanding there. Oh, yes. Yes, definitely. There is one thing that I did not manage to sneak in. When Michelle was talking about how containers don't isolate, they segregate:

the phrase I use quite a lot for this topic is: containers don't contain. If you think about containers from the containment perspective, that's not what they do, right? Because of the shared kernel, as Michelle explained. Yeah, I have a comic where I describe this using dogs and apartment complexes. Okay. Talking about shared infrastructure. So, like, I give the analogy of...

A VM is kind of like giving your dog its own townhouse. Sure, it's shared infrastructure or like an apartment, but it's isolated enough that it basically can be treated like its own machine, its own space. It has all of its own resources and things like that. But a...

container is more like putting your dog in doggy daycare. It is sharing the space with all of the other things, but you still need to prevent dogs from getting into fights and things like that. You also need to prevent your containers from kind of stepping on each other's toes, using the same resources, having all sorts of problems like that. And so you use things like cgroups and namespaces, which are kernel level isolation tools. And I

use the analogy in my comic of those being like making sure that dogs have their own toys, have their own food bowls. And in the worst-case scenario, if you really need to separate a dog, separating them into their own kennel within the same space. That's a really good analogy. Yeah. Right? It's really useful, I think, for understanding how...
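
For the curious, the "separate toys and bowls" machinery is directly visible on a Linux box: every process's namespace memberships live under /proc/<pid>/ns, and two processes in the same namespace show the same inode number. A minimal, Linux-only peek:

```python
# ns_peek.py -- list the namespaces this process belongs to (Linux only).
# Processes in the same container report the same inode numbers here;
# processes in different containers report different ones.
import os

def namespaces(pid: str = "self") -> dict[str, str]:
    ns_dir = f"/proc/{pid}/ns"
    return {name: os.readlink(f"{ns_dir}/{name}") for name in os.listdir(ns_dir)}

# e.g. {'net': 'net:[4026531840]', 'pid': 'pid:[4026531836]', ...}
print(namespaces())
```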

That's one thing that I kind of was worried about as we were going through it, is like, are folks going to understand what we're talking about here with the technical differences between how containers are implemented versus how VMs are implemented? And so I really like that analogy for giving folks a quick overview of how they're just fundamentally different approaches to isolation or segregation. Yeah. I really enjoyed the conversation because

Michelle also does not shy away from including the organizational aspect of companies, right? True. As a factor in security; like, you cannot have one without the other. So: how do you organize yourself? How do you organize people? How do you teach people? These are also important aspects of how you build security into the development process, right? Early in my career, I was...

I think, as many people are, intimidated by security topics, because it's like: this is very high risk, and you need to get it right, and I don't know what the technologies are behind it. And so it's a little bit intimidating. But as I've met more and more security folks, I have learned that security is really all about just learning things and understanding things. And the better you understand something, the better you can deal with it effectively.

Yeah, definitely.

I think that my favorite part of this episode was getting Michelle to talk about patching and immutability, because in my experience, that's always brought up as a pro for containers from a security point of view. But I think we have learned from the episode that that's not necessarily the case. Unless you enforce it, it's not going to be the case, right?

So that was a really good way of explaining it. I also loved that you were like, I've never understood the immutability argument here. And I feel like every time the word immutability comes up, I have to remind myself, like, what do they mean by that?

And in the container world, I loved how it just came out through our conversation there that usually what people are talking about with immutability with containers is like you make a container, you deploy it, you don't actively change the thing that's running. You will make a change to the image and then you'll redeploy it. And Michelle was saying like, that's how you should do things. You should generally just redeploy the thing rather than interacting with it directly. Yeah.

So I always have to remind myself that that is immutability, I guess. Yes. So in other terms: do not log into your container and do things. I totally do that, though. Oh, yeah, I know. I mean, remember the survey that Anton was talking about? Particularly in development, though. It's not like I'm going to do that with some kind of production workload that's running. But if I'm developing something... I was doing this recently. I was working on a chat app

to explore some AI stuff, trying to build an AI chat app, because who isn't doing that? And I was trying to use some environment variables in my container, and things weren't going the way that I expected. And I was like, well, the best way to debug this is to log into the container and see if the environment variables are there. Yes.
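
For what it's worth, that kind of peek doesn't need an interactive shell. A sketch with the kubernetes Python client (pod and namespace names are hypothetical) that runs `env` in the container and reads the output:

```python
# env_peek.py -- run `env` inside a running pod and print the result,
# a lighter-touch alternative to opening an interactive shell.
from kubernetes import client, config
from kubernetes.stream import stream

config.load_kube_config()
v1 = client.CoreV1Api()

out = stream(
    v1.connect_get_namespaced_pod_exec,
    "chat-app-6d4f9", "dev",  # hypothetical pod name and namespace
    command=["env"],
    stdin=False, stdout=True, stderr=True, tty=False,
)
print(out)
```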

There are use cases. I think I'm willing to bet that the reason why people still do that, why people are modifying running containers, is because, probably, lots of times, the development cycle is just too slow. So between you changing code, waiting for the container to build, waiting for it to run so you can see the changes you made, people think: oh, I'm just going to bypass all of that and go straight into the containers to change things. So that's...

Like, if you're able to improve your developer experience, generally speaking, speed up the time it takes to get a change to a running thing, that probably could help as well to help prevent people from doing it. There is the enforcement part, which we talked about, but there is also the...

The human part, the human factor. Yeah, because development processes with containers can be real slow. I've had a lot of trouble with that recently with this chat app. Originally, I was doing things kind of the way that I always have, where I run everything from command line directly and edit all my files in command line and everything.

And I was realizing this development cycle is just way too slow for me to be able to solve the problems with my application. And so shout out to Cloud Code, which is Google Cloud's Visual Studio Code plugin. I ended up setting that up on my local Visual Studio Code. And that...

generated a Skaffold file for me, which allows you to kind of set up your deployment pipeline. And I set it up so that every time I made a change to one of my files, it would redeploy the application. So still, the deployment was taking a little bit.

Definitely, that deployment cycle is slow. But when I have it set up so that it's redeploying every time I need it to, without me having to do each step of the deployment, it was not so bad to actually get things done. Yeah, Skaffold is still one of my favorite tools for doing development. It's good.

Yeah, it's pretty good. Yeah. I did some talks on it before, where I showed people how to get the most out of it. But, like, I think a lot of times, also, for your development cycle, the tooling is not the only limiting factor. Sometimes it's how long it takes you to pull dependencies. Do you cache these dependencies, and all that stuff, right? So there are ways to optimize this, I think, across the entire cycle, basically. That's what I'm trying to say. The considerations in the container world can go so broad. Exactly. Yeah.

Yeah, then we talked a little bit about misconfiguration. And it was interesting to have Anton chime in with his comments there. Like, how do we factor in misconfiguration? Yeah, that is such a common issue too. Like, it's going to happen. We're humans. You're going to misconfigure something at some point, especially when you have all of these pieces from development to production. Something's going to go wrong somewhere along the way. But I think...

It is more about those pipelines than the individual components. You can learn how to configure an individual component, but when you have to put them all together... Yeah. If there's one thing you should have learned from this episode, it's: pin your dependency versions everywhere. You have to have them pinned, because you want reproducible builds. You want your builds to be predictable when they are rebuilt, to make sure that you don't have container drift. Because if you're just relying on the latest tag everywhere,

you're going to have problems. Yeah, I have been using latest in my development cycle lately. But when I get a good one that's working, and I want to make sure I save it, then I rebuild it with a different tag, so that I can come back to that one if I need to. Interesting. Yeah. And one other thing that I want to mention from the interview is Wasm. Oh, yes, yes.

I am very excited that Michelle talked about that. Yes, I found that fascinating. I have recently had conversations with folks in the community who think that Wasm is a fad that's going to go away. But in my experience, going to WasmCon and similar events, it is still a very small community. It's not getting the kind of...

accelerated growth that Kubernetes and containers saw in their early days. But it is a very interesting technology that has real use cases. And so I thought it was great that she brought that up. Yeah. So I did some talks and content about it in the past, and I spent quite a lot of time actually looking into it.

just out of curiosity, really, because I was very curious about how that space is evolving. And I sort of dropped it a little bit to focus on other things. But what I can say is I've been to events where I met people who assured me it's still pretty much alive. It's probably not being put at the forefront of any conference that you go to as it used to be like two years ago, right? Exactly. Exactly.

I think that's the thing: it's been kind of bumped out of the public consciousness by all of this AI shenanigans. I think so. Yeah, I think so. Yeah. Because I remember, at one of the KubeCons we've been to, Wasm was like the thing that everybody was talking about.

And it does have interesting AI... Use cases, yes. Use cases. But I think everybody's so focused on, can I get enough GPUs, that they're not really worrying so much about... The name of the game now is GPUs. Yeah. Wasm is such a...

relatively early stage technology, but it has great potential for using those hardware resources more efficiently. So I think there is potential for it to catch on even in an AI world. But I think people are just so focused on other things right now that they're sleeping on it. Yeah.

Now, it was pretty interesting to hear Wasm being brought up as part of a security discussion, as a potential future thing, right? I also love how she was talking about how containers are this new thing and people don't understand them well enough when they're adopting them, and so sometimes they miss things. And then she's like: Wasm! Wasm is the future of security. Wait, that's the same problem. Exactly. Yeah.

Exactly. No, it was very nice. It's also a good point that it is yet another different take on how you isolate your workloads. And so it is a different take on security that might have very interesting implications for the future if it catches on. Yep.

And then we mentioned that Michelle will have a talk at KubeCon North America in Salt Lake City. So don't miss that if you're at KubeCon. And also, I'm going to have a talk. And I think, Kaslin, you also have a talk? Yeah, well, I mean, open source. Since I am a SIG co-chair, the SIGs generally each have their own talk. So if you're ever interested in contributing to open source, and you're at a KubeCon, and you've told yourself

that, by the way, you were going to try to do that contributing thing: try going to the SIG talks in the maintainer track. They put a lot of work into those, and the maintainers would be absolutely thrilled to see you there. Especially if you come up and say hi. Just say hi. Even if you don't really have a question, they would love to know that you care about their work in open source. Nice.

And yeah, so we'll see you in Salt Lake City. We're going to bring in cameras and we'll be shooting some content. So it'll be cool. Yeah, make sure you come find us. We'll probably hang out at the Google booth some. If you don't see us there, tell the folks at the Google booth that you're looking for us and they'll get word to us. Otherwise, we're just going to be all around the conference.

But please do try and come find us if you're there. We'd love to talk to you. Awesome. Well, thank you very much, Kaslin. That was great. Thanks, everyone. Go secure all the things. Yes.

That brings us to the end of another episode. If you enjoyed the show, please help us spread the word and tell a friend. If you have any feedback for us, you can find us on social media at KubernetesPod, or reach us by email at kubernetespodcast@google.com. You can also check out the website at kubernetespodcast.com, where you'll find transcripts, show notes, and links to subscribe. Please consider rating us in your podcast player so we can help more people find and enjoy the show. Thanks for listening, and we'll see you next time.
