
The Smartest Conversation on Cyber in the Defense Department You've Heard in a Long Time

2024/8/7

War on the Rocks

People
Alexis Bonnell
Anne-Marie Schumann
Melissa Griffith
Tyler Sweatt
Topics
Dr. Melissa Griffith discusses her academic journey from political science to the intersection of cybersecurity and national security, emphasizing the connection between technology risk and national security and defense. She also analyzes how cybersecurity terminology is understood differently across cultural backgrounds, and the case for more cost-effective systems strategies in a rapidly changing environment.

Anne-Marie Schumann shares her career path from intelligence analyst to the Department of the Navy's Principal Cyber Advisor, focusing on her experience as an authorizing official: balancing risk aversion against defense needs, and the importance of teamwork and empathy in meeting cybersecurity challenges. She also examines the rapid tempo of cyber operations in the Ukraine conflict and the importance of preparing in peacetime.

Alexis Bonnell reviews her career from the Internet Trade Association to Chief Information Officer of the Air Force Research Laboratory, sharing her experience helping organizations adapt to change and fostering innovation. She stresses the impact of processes and checklists on delivery speed, and the need to treat cybersecurity as a service rather than a compliance problem. She also describes the methods the Air Force Research Laboratory uses when evaluating AI technology, including writing a press release, conducting a premortem, and running a 10-10 exercise, and explores the importance of rapidly starting, maintaining, and stopping projects in a fast-changing environment.

Tyler Sweatt shares his path from the Army to startups and his perspective on the relationship between national security and defense and commercial software. He argues that the Defense Department's resistance to change in cybersecurity blocks the adoption of new capabilities and frameworks and produces risk aversion toward change. He also emphasizes time as a finite resource and the need to gain an unfair advantage over adversaries in time.




You are listening to the War on the Rocks podcast on strategy, defense, and foreign affairs. My name is Ryan Evans. I'm the founder of War on the Rocks. In this episode, I sat down with four amazing professionals to talk about cybersecurity in the Defense Department. In order of introduction, as you're about to hear: Dr. Melissa Griffith, a lecturer in technology and national security at the Johns Hopkins University School of Advanced International Studies and with the Alperovitch Institute for Cybersecurity Studies; Anne-Marie Schumann, the Department of the Navy's Principal Cyber Advisor, in which capacity she advises the Secretary of the Navy, the Chief of Naval Operations, and the Commandant of the Marine Corps on all cyber matters;

Alexis Bonnell, the Chief Information Officer and Director of the Digital Capabilities Directorate of the Air Force Research Laboratory, which is the primary scientific research and development center for the Department of the Air Force; and last but certainly not least, Tyler Sweatt, the Chief Executive Officer of Second Front Systems. I hope you enjoy the show as much as we enjoyed recording it. We did have a lot of fun.

How did you get to where you are? How did you end up becoming a scholar who studies all these cutting-edge technologies and how they intersect with war? So that, for me, was a journey through my PhD process. So I had joined the cohort at UC Berkeley as a political scientist. I was working on national defense: alliances, projection of power, warfare projects.

and found myself in the heart of Silicon Valley. And I was asking a lot of questions: what are the future threats? What are the things we're not seeing coming? And at that time, cybersecurity was a very niche conversation. It wasn't being had in broader rooms or in public. It was like a very technical practice.

And I was working on a lot of that work technically, but also in support of economic sets of concerns, and suddenly realized: wait a second, all this work I'm doing on national security directly intersects with technology, and with the types of risk and the cadence that technology means for national security and national defense. So then I went through that PhD process not just looking at kind of what the U.S. was doing, but other countries around the world: what we can learn from them, what they can learn from us. Promptly moved to D.C., as one does. So I jumped that divide, the East Coast, West Coast divide.

Spent some time here in think tank life. I'm from New Mexico, so kind of more in the center. I'm from LA, so we're sort of from the same... General neck of the woods. Not the same time zone, but... Not the same time zone, same general geography, maybe. Moved out to DC, spent some time in think tank life, working a lot more on kind of the policy-relevant components of that, and then defected back from think tank life to academia for a bit to join Johns Hopkins. Yeah.

particularly Johns Hopkins SAIS, which is doing this really cool mix of research, teaching, and very practical application. You guys took over the Newseum building, which is pretty cool. We did. We have some very good geographic real estate. And how did you get into all this?

Actually, somewhat by accident. So I was an aspiring academic at one point who ran out of money to continue studying, ended up joining the Army National Guard, where I was an intel analyst and absolutely fell in love with intelligence, decided to leave my academic aspirations behind, join the intelligence community. And I started out by working on intelligence support to information operations, which

which is somewhat cyber-adjacent. But from there, as the intel community is wont to do, it encourages generalization. So I switched over to counter-drug, which I thought was really cool, got to go on a lot of really neat trips to Central and South America, Mexico.

And I was angling for a promotion in my counter-drug line of work and realized I hadn't interviewed for a while. There was a job coming open as a senior analyst for cyber. I thought, I'll throw my name in and interview for that. And I got it. Shockingly, I got the job.

And then I had a new love, and it was all cyber from there on out, and I never looked back. So I jumped from the intel community over into the Department of Defense policy world and spent some time on the Joint Staff after I did policy, and now I'm the Department of the Navy Principal Cyber Advisor. And you were telling me, before we started recording, this cool story about how you were working for the J6 team

during the reinvasion of Ukraine. Yes. That must have been a really interesting time for cyber stuff. That was absolutely an interesting time for cyber stuff. And it happened, I think, my first month in the job.

So it was a trial by fire into our joint processes. And as we were discussing before we started recording, really seeing the gloves of the bureaucracy come off and just how fast the Department of Defense can move when motivated by crisis and need. Can you tell us a cool counter-drug story?

You got anything? Did you, like, board any narco subs or anything like that? No, I was an intel analyst. So, you know, that's the weenie job. I read the Jack Ryan novels. I don't know what's going on. No, no. You know, most of my time was spent behind a computer. I did really enjoy, though, going to... for three weeks, which, if you know anything about that city, is a lovely place where most of the narco traffickers keep. So there is a...

Quick interruption from Ryan here. She didn't say anything classified. It's just a little more fun and mysterious this way. It is an interesting place. You're like the only person who went there and didn't do drugs. Yes, probably. Alexis, how did you get into this crazy...

Sure. I feel like this role, being the chief information officer and head of digital at the Air Force Research Lab, is kind of like coming home. It's coming full circle. You can look back on your career, and sometimes you're like, it doesn't make any sense. And then all of a sudden, you get that job where it's like, oh, now it makes sense. And so back in the 90s, dating myself at this table, I was the third employee of the Internet Trade Association. And I think what was so critical was

about that opportunity was really my job was to go around to people and companies and say, this thing, the internet, is going to change how you do business. And I will tell you, they did not believe, right? I mean, American Express, American Airlines, I remember having conversations with their executives that it's just hype, right? It's not really going to make a difference. And so I think cutting my teeth in that space that was about helping organizations and people adapt

to change, you know, kind of became the theme. So then, in a very weird turn of events, got headhunted into the UN and, you know, went in about three weeks later into Afghanistan, Iraq, Somalia, Sudan, Pakistan, you know, Sri Lanka, post-tsunami work. And so really got to be there with the warfighter. You know, the UN has jobs in, like, Paris, you know, someone told me.

You told me that once. And I obviously was, I'm not very smart, right? Because I keep finding maybe the positions that are not glamorous, but I think matter. From there, interestingly enough, USAID, in addition to DoD, had been a big client of the work that I did for the UN. And someone who ended up becoming the administrator said, "Hey, come on over," and gave us the ability to start the US Global Development Lab, which was kind of the innovation and science arm.

And I kind of finished my role there as the chief innovation officer. And Google came calling. And I thought what was really interesting about the Google experience, maybe like some of the other speakers have said, is the intentionality about what I wanted to learn there, right? I really wanted to know what was real and what was hype in technology, what needed to cost $200 million, how do you actually do security when you've been attacked by a nation-state player, and zero trust.

But even more, I think, important questions on culture. What's different? What should we do? And then also being really, really in love with the customer. So now coming to the Air Force Research Lab, it's just an amazing opportunity to kind of combine the how do we look at the horizon? You know, how do we actually do the necessary? And then more importantly, how do we make space, you know, for the most innovative minds at the most interesting time I think that we've ever existed? Yeah.

and, you know, talk about a challenge. Like I think everyone at this table is kind of navigating that right now. Tyler, how did you end up as one of the most unstoppable CEOs in cybersecurity? One, thank you. It's super flattering. Also, thanks for having me here. Happy to bring down the relative IQ by like 25 points. Oh, like 30. Yeah, exactly. Exactly.

It was kind of by accident. Spent my 20s in the army, bunch of pumps back and forth to Eastern Afghanistan. I think anyone who's talked to me about it has heard me say I was in the army when we thought we were going to win in Afghanistan. So it was a long time ago. Spent about a decade after that doing technical advisory work, regulated industries, national security, all around how do we accelerate capability deployment, right? Like how do we confirm or deny if something works? I'm not going to do that on a quad chart. Not going to do that in a lab.

Get it out, break it, you can fix it and go. That led me into a couple startups, everything from, you know, how do we hack facial recognition in, like, 2018 and think about model risk management and AI security (we were way too early), to coming over to Second Front on an idea that, hey, if we can transform the relationship between, like, national security and defense and commercial software in a meaningful way,

it's like a generational impact on national security. And three, four years later, I'm still here. I got promoted. And that's the reason I pop up every day: if we can make that shift, everything else will win. And you host your own podcast, of course, All Quiet on the Second Front, which I highly recommend to our listeners. And since we of course have Tyler here and two govies: so, industry, friend or foe? I'm kidding. One of the things that

I know that everyone has to deal with in cybersecurity that I think probably doesn't get as much public attention is struggling with culture and trying to shape culture around cybersecurity. Would love to hear how that manifests itself in all your day to day. Yeah, I mean, I'll jump in. I think it's really interesting. One of the things I didn't realize I was taking on with my role was authorizing official, right? So you add that to kind of the four titles that I have to wear.

What does that mean for those of us who... So in essence, most technology requires an authority to operate. And the AO actually bears kind of the ultimate responsibility for saying yea or nay on that authority to operate. And I think what was so...

quite frankly, number one, I think everyone tries to avoid that role as much as they can. It is really thankless. You know, in some cases, the identity is very tied to kind of the last punch in the face, right, that someone has to go through as they navigate the process.

But I think for me, it's actually been really interesting to reflect on this role over the last year. Number one, I think it's highly underappreciated, because you really are taking on an incredible amount of risk in the very low-risk-appetite framework that government and defense often are. But I think the other thing is it really makes you decide how you're going to tick every day when you wake up, right? And so

you know, I've had conversations, especially when I was starting this role with other cyber leaders. And the first thing out of their mouth was, you know, they've hired a bunch of people to go after AOs who make the wrong call. And then on the flip side, you know, I hear from Secretary Kendall, you know, we've got to bring it, we've got to bring it better every day. And so the question about which side of the bed do I wake up on? Do I wake up on the scared side, right? Or do I wake up on the

It's my job to get the best stuff through to the best of my ability side. And I would just end on the fact that, you know, asking people to be empathetic, that there's not a lot of incentive for me to wake up on the not fearful side. And every day I really have to decide to roll over an extra four times to try to get over and have such respect for my team, right, that have to navigate that every day. I just...

I forget his name, but he works at CDAO. He posted something on LinkedIn about how he just went through a bunch of mandatory training that was basically all the different ways he'll go to jail if he screws up. I mean, that is your most popular training, right? Jail is door number one, jail is door number two, jail is door number three.

You know, jail is door number four. And by the way, adversarial advantage is door number five. But make sure you, like, kind of get on your hands and knees to crawl and find the door. On the other hand, we have these warfighters who are really clamoring for the capabilities that labs like yours can deliver. So how do you think we can shift from

this compliance mindset that we're stuck in right now to actually bringing the warfighters' needs to the front and warfighting risk rather than centering it around the risk of authorization or focusing purely on cybersecurity risk? How do we make it more warfighter-centric? I mean, I think two things that we were talking about before, you know, kind of coming online together. I think one is...

We have to be equally conscious about the equation of impact. When we start adding things like processes and checklists, I like to refer to them as toil. So it's really interesting in government because we don't seem to have a lot of barriers to adding toil or criticality. We have a lot of barriers, though, to doing, and we don't seem to understand the relationship. So I think the first one is that we should be equally thoughtful that if you're going to add three steps to a process or you're going to require a certification,

how much does that potentially slow down me getting something in the hands of a warfighter? And the question is, we should know what that acceptable answer is. You know, is two days acceptable? Is two months acceptable? And I think, you know,

You know, that's one question. And then the other is how do we actually and I think we have the ability to change this. But how do we actually go upstream to where the cyber work or the compliance work or the authorization can actually be the hero and not the villain? And so what I mean by that is, you know, we're looking at how do you actually put talent in play so that much like I think Tyler tries to navigate when someone or a warfighter has an idea.

right, or says, gosh, this would really make me more effective, or I could really kick butt with X. We should meet them in that moment of imagination and curiosity and say, okay, if you think that has value, let's figure out the fastest way to get you through the hurdles to that. And instead, we really make, you know, a lot of folks go through that, you know, the companies go through a whole bunch of things. And at the very end,

It's like a cliffhanger, right? You wait to see if you've been accepted. And I think there's so many things, if you went upstream and thought about security as a service and not as a compliance issue, that would allow us to culturally move the needle. I know someone whose company does that. Yeah, but it's bigger. Yeah, I mean, shameless plug. Second Front's awesome. But I think it's a really interesting...

philosophical sort of conversation that I've been having with a bunch of folks, which is, from that sort of security as a service, and thinking about that as an enabler vice an inhibitor, there's a couple things at play, right? One, I think there is a broader sort of cyber leadership question of, like, what does the department expect from commercial? And I'll leave it intentionally broad.

from a like cyber design pattern to conform to, right? Absent that there is no perspective on cyber. It's a boogeyman. Right. You bring me a rock. Oh, that's not the right rock. Well, which rock is right? Well, not that one. And then you think about, and this is like sort of heretical for the space I'm in, but

If I look at the department and its lag on, like, public cloud, right? Like CSPs: Google, AWS, Azure, OCI, so pick four. I say, hey, are we actually going to get to a spot where I need security as a service, or do I need certain cyber capabilities and artifacts that allow me to then have sort of like a federated consumption? Because...

what are we, like 10% cloud penetration? So I agree, right: if we're going to look at industry best practice, we're going to think about how highly regulated sorts of markets are doing it. There's precedent, there's playbooks, there's a ton of greatness you can do. You can abstract away a lot of sort of the complexity and the granularity, but you've got to sort of have an opinion and make some bets. And I feel like

The department has gotten very comfortable working in 10-year cycles and like anything sub that makes them so uncomfortable that the risk is change.

And they're actually just rejecting change. And we experience that as a symptom in our daily worlds, because we're all sort of trying to accelerate capability or cyber posture or bring new frameworks in. And we think it's a resistance to the capability or the posture or the framework. I think it's a resistance to change. And it's like a weird cultural self-preservation thing.

This is one of the things that definitely keeps me up at night, right? And it's around kind of timing, but also what does success look like? So I think that one of the things that is often lost in this mix is not what checkbox do you check, or what is sort of the specific cost of something, or what's that kind of immediate tactical

value add. It's how does that contribute to a broader posture, right? What is the success? How do you know that you accomplished what you were hoping to accomplish in these spaces? And I think that goes back to actually having to ask yourself, how much cost is too much cost? How much time is too much time? And that's going to differ. I think the 10-year cycle was okay in the long, prolonged, big-power competition of the past.

Those time horizons were quite long. And a 10-year cycle in a period of peace seemed adequate, I think, or turned out to be, whether that's an accident of history or not. But I think what we're seeing, as we watch events unfolding across Europe, but also in our own kind of planning, is that the cadence is much faster

in that space. And the kind of long, exquisite, incredibly secure systems that are deeply resilient are kind of the ideal world, but they're not the realistic world. And it's probably better to have a series of five cheaper, less secure systems that are deployable and easily replaceable than the one really exquisite system that you can't use because you can't lose it.

So I think there's kind of those pieces that are worth teasing out. What does success look like? The other thing that's kind of present in this conversation around culture, and I see this, I think, heavily as an academic, both serving kind of in the Bay Area, but now in D.C., is the culture of the tech community and the culture of DOD, but also the culture of DOJ versus DOD versus kind of the United States versus France, that there's a lot of different kind of historical and bureaucratic forces

and frames of mind that are coming to the table, and they're not speaking the same language. And I don't mean some are speaking French and some are speaking English. They'll literally use the same terms to mean different things, right? So even just something like speed or timing, right, or indicators of compromise, can mean radically different things in the room. And so I think one of the things that holds a lot of promise, that I'm excited to see SAIS doing and I'm excited to see more and more institutions doing, is teaching people to be fluent and proficient in multiple languages.

Because if timing really matters, you need to be able to listen to the warfighter, but also listen to the tech community. What do I have on the shelf? What can I provide? What's the actual thing you want from me, right? Which is sometimes actually very hard to get to the bottom of. It's sort of, I think, I'll know it when I see it, which is very hard to design to. And you can have a much faster cadence. But if you're having to start from sort of

bring me rocks and I'll let you know which rocks work, and then we'll have to go through a rock acquisition process, and 10 years from now I'm not going to want to throw the rock because it's a thing of beauty: it's just too slow, and the culture got in the way in that space. So the point you bring up is sort of this idea of disposability. And that is anathema to the Department of Defense, right? We plan in 10-year or longer cycles, especially when you think about

our major investments: a ship, a tank, right? These are things that may be in our inventory for 50 or more years. So this idea that we might spend money on something that might only be good for a year, because it doesn't survive

sort of cyber contact with an adversary, and that's okay, did it serve its purpose in that year or two, is a very different mindset, and I think is getting at the heart of the conversation we're having about the need to incorporate time into our risk management. Which, if I get super nerdy about risk and talk about the joint risk management framework, which is a process that the Joint Staff uses to try to calculate risk,

time is not a major factor in there. You can go look up that joint pub, but the speed of delivery or the speed of adversary action is sort of maybe implicit in that framework. And I think we're really talking about needing to make time explicit here in our decision-making. Yeah. I mean, I think one of the things that was so surprising coming from Google into this was exactly this issue. And I remember, about four months into it, looking at it and saying:

something's irritating me and I can't figure out what's irritating me. And it was this fact that, like, no one sees time as a weapons platform here. In almost any other industry, time has value, right? A minute maybe is as important as a missile, but no one was treating time like a weapons platform. People ask me what's different, right, about being here versus Google, and there's actually three things that we have to do as organizations really well. And exactly to your point, we have to start things.

We have to maintain things and we have to stop things. I challenge anyone listening right now to give me an example of something, if you're within, you know, DOD or government, that you have intentionally stopped doing. And part of this is that ultimately, I think we're in an age now where people have to be comfortable with "right for now" things,

not "right." And, you know, the last thing I would say is that while at Google, part of what we looked for, which was exactly what you raised, was the time horizon: pre-COVID, it was five to 15 years. It shrank generally to six to nine months. In AI, it's two to five months. So if you're talking about...

The reality is we have to put things out and expect them to change. We have to navigate kind of change engines versus time capsules. Yeah, but I'm still kind of pissed off about you guys shutting down Google Reader. I'm still not over it. Hey, deprecation, man. You got to stop some things to make blood, sweat, and tears available for others. I'll say just on the time aspect, I mean, a violent sort of plus-one for me, but I think

I think, to "yes, and" it a little... I see it different than a weapon system, and I see it as...

A finite resource, where every decision we make should be us striving to gain an unfair advantage against our adversary from a temporal standpoint. Yeah. The only thing- Time and focus are the only two things you can slew off and guarantee failure. Well, and not only that: the only thing that our adversary and we have the exact same amount of is time. Yeah, you want to flip it back to the-

you know, the old Afghan valleys of, hey, you have the watches, but we have the time. You know, how do we become the insurgent here and not build ornate, perfect mega cyber systems? I mean, if CrowdStrike hasn't shown everybody, like, it doesn't matter how perfect or omnipotent it is. The more omnipotent, the more risk. And I think that question of this

this assumption, and we saw it in innovation, and I feel like in government, we navigated it a bit better. But part of the aha in innovation is realizing that what you do now can be just as risky as what you're proposing to do. And I think a lot of times in government and technology,

this assumption that what we're doing now isn't risky is, you know, probably our biggest weakness, because the reality is the adversary knows what we do now. So it actually is our biggest weakness. And yet we want to hold onto it; we're used to organizing it, to having process around it. It makes us feel in control. And the reality is we may not have the luxury of that type of control state as we look to the next decade. We have to, you know, really

emphasize things like curiosity, right? Or really asking: how do we ask and answer "what if" faster than the adversary? Well, time is a weapon system, as we were sort of joking before, in that bureaucracy can use it. Slowness is the weapon system. And part of it is the incentives. There's very few incentives for people to take action, say yes to things. But one thing on the issue of time is:

What are we seeing in the cyber domain from Ukraine and Russia and their sort of action-reaction cycles, to the extent you can talk about them, of course? I mean, I'm actually going to quote someone. I guess two weeks ago, there was the Defense Innovation Board output, and one of the

gentlemen there made an observation that in the Ukraine context right now, the Ukrainians and those who are supporting them will put out software that in essence is countered by Russia within two weeks. So now we're talking about not a five-to-15-year change horizon, we're talking about a zero-to-two-week change horizon. And if you start putting that up against the impact of time as a weapons platform or as an element of advantage,

We don't have time. And I think one of the things that the group was talking about before, which is really so powerful. And when I work with my team, one of the things I say is like, would we do this if we were at war? Like if the answer to, you know, to any question about toil or process is no, when you ask the question, would we do this if we were at war? Would we require this if we were at war? I think

I think is a very, very clarifying question, right? Because first of all, you know, people would say no, but then it's like, but we're not. Okay, but we might be far faster than we assume. And so I think it's a really healthy question and mirror to kind of turn on ourselves if we really care about national security. I will say on the other side of the time horizon and the fact that we are in

a phase where we still have time to prepare for future conflicts. When we look at Russia and Ukraine, one of the things that Ukraine did learning lessons from the previous Russian invasion was to prepare and to harden those systems. So while there is an acceleration, they may put out new software and it may be compromised within two weeks, we did see that those preparatory actions

were very important to their initial capability to defend and maintain some of their systems, particularly civilian critical infrastructure. So I would just caution that

that I think there can be a certain nihilism sometimes about, you know, why even try, right? But we have to try. And preparation and defense, I think, are a worthwhile endeavor, but it has to be done at a time and place where we're not in conflict because, as you said, the acceleration during crisis doesn't allow for that response. I think you're spot on. It

makes all the sense in the world. So there's a difference, I guess I would say, between activity and progress.

when you talk about preparation. I think the other thing is you're looking at events in Ukraine, and it is true, right? What you do in peace is different than what you do when you're at war. And if anything, time is more to our advantage, right? It's on our side now, and it won't be in a period of conflict. I do think, though, it's important when we look at kind of the cadence in Ukraine: it's not just the offense that's on an incredible pace and cadence, it's also the defense. The OODA loops are just incredibly quick on both sides. And compromise doesn't always mean impact, right?

And so I think sometimes we look at the news in cybersecurity, like, how many devices? And you can get these: how many devices were hit by a wiper? And it looks really sensational. But the sort of impact of those things is questionable sometimes. And so, sort of thinking that piece through: with the things we see in Ukraine, it's increasingly difficult for Russia to generate impacts as the war progresses, because they're sort of losing specific access. They're pushing to the edge. The cadence is much, much quicker

in that space. And it's also harder for the Ukrainians to secure those systems as that cadence shifts. But I think this also goes back not just to time, but what is your desired end state? So in periods of peace, you're ideally hopefully planning for war so that there isn't one, but if there is one, you are victorious in that moment. In periods of war, the question is, can I maintain a functionality as long as possible at a certain cost? And so when we think about resilience, often in the US, that conversation is bad things don't happen.

That's kind of what we mean when we say resilience. Resilience of a system, and we see this in Ukraine, is under certain conditions, bad things don't happen for a certain period of time. And you can do that at the level of the asset, right? You can do that, for example, we saw this with Starlink satellites where you can push code, right, specifically to an asset. I want that asset to be secure and resilient. And there's ways of making that quicker, for example, being able to push code in quite rapid sequence. You can do it at the level of a system.

You see this increasingly in sort of cloud-based structures, but also the internet is a system-based resilience function. The PLEO, so proliferated low-Earth-orbit satellite constellations, where we have thousands in orbit, they're system-based resilience. I can have a certain die-off and maintain a system. I can replace portions at high speeds and maintain a system. And then there's function-

based resilience, right? And I don't think we give this one enough credit. We look at something like Viasat: you can compromise satellite communications, and any military worth its salt has a PACE plan (primary, alternate, contingency, emergency), right? It shouldn't be their only communication structure. And so we should be thinking about it not just is this asset secure, is this system secure, is this asset resilient, is this system resilient, but also what is our alternative for functionality, right? And can I kind of

OODA loop rapidly switch between those functionalities. So I'm losing systems, replacing systems. And that's sort of expected. So I think that timing is sometimes cutting both ways in really interesting ways. I was just going to say that we have a term we like to encapsulate that description in.

This is graceful degradation, right? So can we gracefully degrade as a cyber attack, say, ramps up, or other non-kinetic effects ramp up? Do we know what mission degradation looks like, so that it's not just mission comes to a screeching halt, but we know which services are going to have to drop away first

to maintain those core important functions, to your point. And I think it's "core important functions," right? Like this is something I don't think we spend enough time talking about. You are going to have to decide not to defend certain things, right? Or not to defend them in the same kind of depth or scale that you would defend others. And I think this gets a little bit mired in the public cybersecurity debate, because you are interested

in all infrastructure within Ukraine, a lot of which could be civilian or alternative or military across that board. Not all of those are going to be impactful to the course of the war, right? Not all media is impactful to the course of the war. Blackouts in cities can be deeply traumatic, but it may not be impactful to the course of the war, as I think many people who remember the Blitz kind of have a sense of what that felt like in those spaces. So I think there has to be kind of a pop-up. So it's graceful degradation, and it's also, I don't need to defend that.
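To make the "graceful degradation" idea concrete, here is a minimal sketch in Python of priority-ordered service shedding, where rising attack severity drops the least critical services first so core mission functions keep running. The service names, priorities, and severity scale are invented for illustration; they are not from any DoD playbook.

```python
# Hypothetical sketch of priority-ordered graceful degradation: as attack
# severity rises, shed the lowest-priority services first so core mission
# functions keep running. Service names and the severity scale are invented.

SERVICES = [
    ("command_and_control", 1),  # most critical, shed last
    ("navigation", 2),
    ("logistics_tracking", 3),
    ("morale_streaming", 4),     # least critical, shed first
]

def services_to_keep(severity: int) -> list[str]:
    """severity 0 keeps everything; each step up sheds one more service."""
    max_priority = len(SERVICES) - severity
    return [name for name, priority in SERVICES if priority <= max_priority]

if __name__ == "__main__":
    for severity in range(len(SERVICES)):
        print(f"severity {severity}: keep {services_to_keep(severity)}")
```

The point of deciding the ordering in advance, rather than under fire, is exactly the "we know which services drop away first" planning discussed above.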

Right. I can accept kind of degradation in this area and not somewhere else in periods of war. Yeah. I think part of that gets to sort of like a return to green, right? Like mean time to resolution: like, hey, capabilities,

nothing is impenetrable. So how fast can you recover from all of this? The takeaway that I think stuck with me was: if called upon, is the US government capable of shipping code every two weeks, production code every two weeks? The answer is no. That should be a blinking red light and an alarm bell. Despite all the money we're spending, despite all the capabilities we have,

we don't have it. You've heard me talk about this, Alexis: the, hey, who owns this from a mission-set standpoint, right? We've got all these weird, like, semantic arguments now: like, do we need a force? Is it CYBERCOM? Does each service do their own thing? Well, I've got this factory or that factory and this program or that program. And at the end of the day, if the lights go dark tomorrow, you're turning to everybody outside of the industry to start shipping capability, because we've not been building that.

And even if it dies in two weeks, right, that's normal. That's context. That's conflict, right? We've got to be able to actually iterate that fast. And I don't know where that capability lies right now. Yeah. I mean, one of the things that I love that we brought up, and we do a specific exercise when it comes to looking at AI technology. So when we

when we sit down, you know, and that's one of the cool things we get to do in the Air Force Research Lab is people come to us and say, like, is AI right for this? And we get to sit down and say, well, let's figure it out. And there's three things that, you know, we tend to do, you know, to kind of get, and then I'll give an example of it. So one is we actually say, well, let's sit down and write a press release. And what that does is get everyone at the table to understand what is success. What do you want to be able to say? Did you give, you know, people three hours back on mission or 30,000 hours back on mission, right? Like,

What does success look like? And it's really important, I think, as we navigate these technologies to be intentional about that. And it gets everyone on the same page. The second, though, is a premortem. So to that point of...

of when would we stop this? Do we expect that this would stop because it's been overtaken by a different technology? Do we expect this would stop because it might go off the rails? Do we expect this would stop because we're not gonna have enough compute, right? And actually, from the very beginning, running through the process to assume that you will stop using or doing something at some point, and actually having everyone

around the table to say, okay, here's where it stops. And actually now we're all sensitive to it: if we see these things, right, then maybe stopping is important. And when it comes to AI, the third thing: we do a 10-10 exercise, meaning, especially for generative AI, if you're going to use it, you have to know 10 things, have 10 questions that you know it's gonna get right.

And if it doesn't get it right, you shouldn't use it, right? And then there should be 10 questions where if I can get these at speed and scale, and I don't have to go back and update my ERP, and I don't have to make a custom dashboard, well, then there's value here, right? And that gives us our toil impact calculation.
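As a rough illustration of that 10-10 exercise, here is a minimal sketch, assuming a generic ask(question) callable for whatever generative-AI tool is under evaluation. The sample questions and the simple substring check are invented placeholders, not AFRL's actual method.

```python
# A sketch of the 10-10 exercise for a generative-AI tool, assuming a
# generic ask(question) -> str callable for whatever tool is under
# evaluation. Questions and the substring check are invented placeholders.

MUST_GET_RIGHT = {
    # Ten questions with known answers the tool must not miss (one shown).
    "What does ATO stand for?": "authority to operate",
}

VALUE_QUESTIONS = [
    # Ten questions that, answered at speed and scale, save real toil.
    "Summarize this quarter's authorization backlog.",
]

def ten_ten_exercise(ask) -> bool:
    """Gate first: if any must-get-right answer misses, don't use the tool."""
    for question, expected in MUST_GET_RIGHT.items():
        if expected.lower() not in ask(question).lower():
            return False  # per the exercise: if it misses these, don't use it
    # Gate passed: the value questions estimate the toil the tool removes.
    for question in VALUE_QUESTIONS:
        print(ask(question))
    return True
```

Running the gate before the value questions mirrors the order described above: first establish trust on known answers, then measure the toil-impact payoff.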

I say all that because of your point, Tyler, that we're not ready for a two-week software cycle. My hope, and what I have seen and what I believe in this nation, is that recently a lot of people may have tracked that we put out NIPRGPT, right? One of the first gen-AI tools through the ATO process.

And I got to say, that was built by a bunch of incredible volunteers, right? Who were like, we really think that there's a lot of knowledge out there that is not at the table, right? Because it's not structured. It's not perfect. And, you know, we want to really empower the warfighter to say, well, what's the relationship with knowledge you want to have? Right?

right, on your terms. And most of our treasure is not perfectly structured or controlled, right? And so much of our cyber right now is responding to the information that we have intentionally gone and structured. But I always like to remind people that is often 6%.

of our knowledge, right? 6% of our knowledge have we actually structured and we now kind of protect. So all of a sudden, you know, for a group of volunteers to say, well, what would it look like to put all the knowledge on the table and to be able to ask questions and be able to exercise curiosity and build that kind of query relationship? But then the question starts to be, what is the cost to us of not having the knowledge on the table? And to me, this is the most interesting,

cyber question. I don't have an answer. I'm hoping others at the table do. But there's a trade-off at some point about protecting things almost to the detriment of ourselves. Meaning, what is all the information the warfighter cannot easily access right now that means they are not going to make the call, not going to make the right decision? And if we have that in our universe and we are not unleashing

kind of that question of what is the relationship with knowledge we want to have now, then I think we're really missing one of the moments that we have in front of us. Let's put that question to the rest of the group. What do you think we're missing by sort of over-restricting things with cyber? A lot. So I think one of the ways that I've seen this play out, even pre-artificial intelligence: in the military, we love our exercises. It's one of the ways we look at our PACE plans. We do

you know, planning and make sure that we're ready, resilient. And before anybody was talking about AI, the intel community, the operations community, we love to put things in little silos of protection, whether that's at a certain classification level or creating these lists of, you know, people, communities that can have access and others who can't have access to that information. And what we see across multiple military exercises is

is that when people don't have access to that information, right, planners may look at a scenario and say, oh, well, we have nothing we can use to fill that capability need. And then here comes somebody in with their special pot of knowledge who says, well, actually, we've been working in this closet on this capability, and they drop it on the table and go, here it is.

But that doesn't do us any good

to be able to then say, well, that would have been nice to know six months ago when we were in the planning phase, but we are in execution in this war game. We don't know how to now incorporate that capability, which we didn't know existed until today, right? Or some special piece of intelligence that totally changes the understanding of the adversary's sight picture. These are things that if we don't know them,

We aren't on the same page of knowledge. We can't bring these things to the table and really have a fulsome understanding of how we should prepare for the future. But that's an old problem. I think AI accelerates the issue.

Because now we could have faster answers to these problems, but we're compounding sort of our information security processes and policies, which are all based on old file cabinet procedures, right? You're going to go and you're looking up these paper documents. Those...

policies that are in place to protect that information are very much not based on the digital age. And so how do we update those for the advent of AI while baking in

some of the very necessary security. Yes, I won't deny that we do need to compartmentalize certain things, but we've got to do better with bringing that to the table in a risk- and time-informed way. And I think you see that when you look to Ukraine, right? There's a couple, I think, hindrances there. A lot of what we look at in Ukraine, in terms of lessons that could

kind of translate and move more broadly, is why was the defense so strong, right? So when you look at the instance of Ukraine: did Russia underperform? Certainly. Did Ukraine overperform? Certainly, on those outputs. But clearly the defense had a pretty impressive track record for cyberspace, right?

which is always a little bit of a low bar. We're all failing in unique ways in that space. It's a very challenging space, but impressive in that area. And I think one of the things that immediately is apparent there was this volunteerism culture, right? A lot of that success is this: the attacks were happening locally, the focus was happening locally within Ukraine, and the defense was global,

not just in the kinetic space, but in cyberspace, right? From tech companies. Even more in cyberspace. Even more in cyberspace, right? From tech companies actively patching systems, to providing hardware and software to the battlefield, repurposing assets, to migrating data, right? AWS migrating data on Snowballs and trucks right out of the country for continuity of government in the

aftermath of war: an incredible private-sector kind of presence pouring in. But the vast majority of that, the vast, vast majority of that, was volunteer. And it was based on having access and having information and being able to coordinate: not just with government, to sort of sit down in London and say, what data do you need, right? Is it property records? Let's move it out of country. And having that conversation, to the Ukrainian CERT publishing a lot more information publicly that allowed private-sector actors to pick up on that, but also within the private

sector, these are big companies, many of which were competitors to each other. And I think one of the kind of incredible things in that space that doesn't always happen is a lot of collaboration and sharing of that information. But I think it's important to recognize not just sort of that the government has a lot of classified information, but private sector

has a lot of telemetry in this space and had the best visibility into what was happening in Ukraine. And not all of that information is sort of public or well-known either. So there's kind of the importance of sharing: if you want to be able to mobilize across a variety of silos, including your private sector, you need that transparency. But the transparency problem is not just a government problem. Yeah, I think...

the stovepipe explanation encapsulates what I'll get at here. But was it, like, third offset? It was data to decision. And we were talking about all these, like, cool catchphrases. Where, like, if all of our data is just federated to oblivion, nobody's actually generating insight there, which means we're not gaining an unfair advantage. There is no hyper-empowered individual, there's no super-empowered individual, there is no sort of,

you know, hyper-empowered operator. All of that isn't there, because we're not architecting in a way... and forget, like, the legacy systems. And I get it, like, this data sits in different things on different metal. I got it. But, like, we're not even taking a real, like, API strategy. And, like, how are we going to make these accessible, so that, like, break glass in case of, you know, and push gonculator three-B-four and it connects? Like, I don't think we really have that capability yet.
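As a rough sketch of what such an API strategy could look like: give each legacy store the same thin query interface so a consumer can fan out across all of them without caring where the data sits. The store names, fields, and interface here are hypothetical illustrations, not an actual DoD design.

```python
# A toy illustration of an API strategy: give each legacy data store the
# same thin query interface so a consumer can fan out across all of them
# without caring where the data sits. Store names and fields are invented.

from typing import Protocol

class DataSource(Protocol):
    def query(self, topic: str) -> list[dict]: ...

class LegacyStore:
    """Stands in for data sitting "on different metal" behind an adapter."""
    def __init__(self, records: list[dict]):
        self._records = records

    def query(self, topic: str) -> list[dict]:
        return [r for r in self._records if topic in r.get("tags", [])]

def federated_query(sources: list[DataSource], topic: str) -> list[dict]:
    """One call fans out across every source behind the common interface."""
    results: list[dict] = []
    for source in sources:
        results.extend(source.query(topic))
    return results

logistics = LegacyStore([{"id": 1, "tags": ["fuel"]}])
intel = LegacyStore([{"id": 2, "tags": ["fuel", "route"]}])
print(federated_query([logistics, intel], "fuel"))  # hits from both stores
```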

We don't even know what we don't know we're missing. And I think that's the scary part. So I think it goes back to the earlier comment about what relationship do you want to have with knowledge? What are those critical decisions we in the Department of Defense need to make? And then working backwards from there to say, to make this decision,

My decision-making would be better if I knew... and then looking at what data could support that decision-making. Right. So you're, like, my favorite person right now. I loved that comment, just because I think this is, this is great. You know, this is great. Air Force, Navy, right? We're going to bring it. We're going to bring it. But I think what's really interesting about what you're talking about right now is, you know, to the point of adaptive identity:

We forget how much identity and how much of our conversation is around a tool, a system, a software, a platform. And, you know, that actually in itself is incredibly restrictive. And we do it to individuals and we do it to organizations. What I mean by that is, you know, when I first came into my role, I, you know, one of my team members got introduced as the SharePoint lead. And I was like, wow.

You as a human being contributing to our organization are not, in my head, going to be known as the SharePoint lead. And that's nothing against SharePoint. It's: you are more than that, right? You are bigger than that. Your job is to bring together, to collaborate. You happen to use this tool right now. But if you think about it, if you want people to be able to change, to see value in change, and you lock their identity...

into something where they think their job security is their knowledge of a particular tool at this time, they're the least incented people to bring you change. And I think this

this question, like you said, about, and this has been really interesting for me as a CIO, you know, getting to just work with some of these incredible leaders and labs, and they're so smart. And I think sometimes there's that first, you know, conversation where I say, you know, they expect me to talk about a tool or a platform or a system, or that's what they know. That's the nomenclature they're used to. And I sit down and say, what's the relationship with knowledge you want to have? And they're like, what do you mean? I was like, no,

no, really, what relationship with knowledge? And not only that, a question I love even more is how would your people be different or what behaviors would you want out of your people if you had the right relationship with knowledge? And in particular, we talk about it on kind of curiosity level one to 10. So,

So what I loved about what you said is right now there's a lot of that here's a rock, right? The commander, the general will say, I need X, right? Something they didn't need before that wasn't built in the system. And if you think about it, that's their curiosity level one. They've asked a question that is from the kind of gut or around what's kind of circulating. And so everyone then runs off and scrambles to answer curiosity level one. What's more interesting is to sit down and say, we can find you the answer to C1.

the question is, what would you do with it, right? And so if you can actually go to C2: well, what decision would you make differently, which would lead you to whatever. And I think sometimes we don't actually respect enough of this question of our relationship with knowledge to take a quick second

and say, what would that lead you to? It's almost like the five whys, right? And if you can get a leader thinking to curiosity level six or seven, I will tell you that the solution is fundamentally different when you're talking about that level of thoughtfulness than when you are in the C1 kind of rock-chasing. If I was going to give you a counterpoint or

an alternate perspective, I would say: hey, how do I get the staffs there? Because I go back to, you know, being a young lieutenant or a captain, and I'd tell the company, hey, I need formation at 8:00. And then the first sergeant would have them at 7:30 to make sure they're ready, and the platoon sergeants at 7:00, and the squad leaders at 6:30, team leaders at 6:00.

Next thing you know, the infantry troops are up all night when I like needed to have a five minute conversation. So if you think about the gating that is well-intentioned, like assign positive intent to it, but the gating that occurs at a knowledge level for primary staff and like principals, the amount of things they aren't even getting a chance to consider because, you know, some council of colonels or GSs or contractors have gated that. Yeah.

I don't know if we're capable of having that conversation until we change the mindset of the gates. So I want to shift gears to workforce, which is pretty much what we've already segued into. I have this sort of working hypothesis that, point one, some of our problem is we treat too much of cyber as arcane knowledge; we're too focused on building cyber experts and focusing on the cyber experts rather than

ensuring that this is actually shared knowledge throughout the workforce in a world where, let's be honest, everyone needs to know not just something about cyber, but actually quite a bit about cyber. And I would love to hear about how all of you think about that in DoD, but also industry.

I mean, there was a great quote actually that one of your colleagues said at a conference at ITSEC a couple of years ago. He said, worrying just about the government part of your cybersecurity problem is like cleaning half of your pool. Yeah. So I think there's a couple of things on workforce. So to throw another quote in there that's one that I find entertaining, although I am not a fan of mayonnaise, so I wish we had picked a different condiment. But I am, yeah, when I say I'm not a fan, take that as like a British individual. Didn't have this on my bingo card for this conversation at all. No.

Not a fan of mayonnaise. But is Miracle Whip okay? No. Okay. Just checking. I mean, there's some people who... Aghast. Aghast. Get a good mustard. Anyway, I'm derailed. Not a fan of mayonnaise, but Jay Healey had this quote, and I think it's accurate, which is that cybersecurity is kind of the mayonnaise of every sandwich. So whatever job you hold, whether that's within DOD or more broadly, cybersecurity has something to do with your job, right? It's a deep horizontal. And so I do worry about kind of the

push to create too many of its own verticals, to kind of specialize that knowledge. Do you think you need a base knowledge? In the same way that you need a base knowledge around risk or around hardware, depending on your job, and supply chain, right? The DIB, the defense industrial base: what does that look like regardless of your job? I think this is actually an area where we can kind of take some lessons from other countries, right? So particularly if you're looking to countries like Estonia or Singapore that have a conscription service, of course, and kind of a volunteer service that's quite robust.

And Finland does the same thing. They educate their entire workforce, because they touch their entire workforce through their kind of military. And they educate them on a base knowledge in the same way that, like, how do you hold a rifle, right? What is some basic knowledge that you need to know to navigate that job? And then certain others specialize. And we don't have that same kind of military structure, for obvious reasons, in this country. But I do think there has to be a kind of base inoculation, right? The mayonnaise in every sandwich, where I would say the mustard, because it is the better condiment and the kind of supremacy of condiments. Right.

He hates mustard too. That's unfortunate. Well, we were together on mayonnaise. We're divided now over mustard. The culture divide is real. Are you going to insert sriracha here or is there like no? I love sriracha. Let's go. It's the sriracha in every sandwich. But yeah, so I think it has to have that base level and then you build from there. So I worry about that kind of lack of proficiency. I think the other piece is

Not everybody has to be integrated into that workforce, and I will say I'm excited to see some of these efforts at DoD: you can integrate and leverage talent in other areas without having to have them in your own kind of structure. And that's very helpful in terms of, like, industry relationships, but also academic relationships, and standing up some of those pieces. We always sort of talk about it, like, be the back bench, right? You can pick up the call. You have a lot more time. We academics have a lot more free time to think hard about some of these things,

and can feed in some of that specialist knowledge when others have much more robust and broad portfolios that they're dealing with. So I think broadening your understanding of workforce, not just to who's on your payroll, but how do you liaise and kind of harvest the best ideas elsewhere? And also recognizing every person needs some base level on these kinds of technology questions. And I wonder too about that. I wonder how, again, to the point of disadvantage,

how much we've assumed that when we say everyone needs to know that, that that person must look like a STEM graduate to be useful, right? I think one of the things that's been the most impressive to me about bringing especially generative AI, you know, to the table is it really is a technology for the humanities. And so I think it's a real question of what does it look like, you

you know, to put all of your knowledge on the table, but then everyone be a knowledge worker or a cyber, you know, part of the cyber protection. And, you know, I think that for a long time, we assumed that that characteristic is only going to come in the form of your kind of typical hyper, you know, STEM graduate. And as an example, one of the things that, you know, in my research on generative AI, everyone assumes that it's the

engineer, the data scientist, et cetera, that's going to be the best wielder, right, of generative AI. And I will tell you right now, it is not. The lawyers, the PR people, but let me tell you the most surprising one to me is moms.

Moms are incredible queriers. And if you think about it, it makes sense, right? Because for me, I've got three kids, they're in their 20s, and it's like, let me ask you that question in another way, right? Like, I'm not sure I'm getting what I need there, right? And I say that because I think, you know, if we're going to ask what's the relationship with knowledge, if we all have a role there, I think that while we are always going to need this incredible, like, you know, profusion of STEM talent and

and we can do much more in that, it would be dangerous to assume the wider swath doesn't have a role. It's almost like innovation is everyone's job, right? You know, only you can prevent forest fires. Like, what is our modern take, for the average person who is engaged in knowledge, to understand that knowledge is power, knowledge is advantage, right? What does it look like to meet them at

their identity instead of forcing them to say, well, you've got to know these very specific technical things. And I actually think that this is a really interesting area of opportunity for us in how we can show up more collectively and how we can make space for, quite frankly, also diversity of thought.

when it comes to our cyber wars. And I really worry, you know, everyone talks about bias in AI, but I really worry that when we look at our future state technologically, we are actually biasing out, based on assumptions from our history that were once good, our sense

of who is actually going to be a strong contributor in our future state. And I wonder, you know, what that looks like. I would love to second and sort of foot-stomp this, because I think cybersecurity is not just a technical practice, right? It's an organizational, a bureaucratic, a social practice. It has strategic consequences, operational consequences, tactical consequences.

And the diversity of voices in the room matters. As an example of this, I was in a kind of closed-door conversation with some friends, and we were talking about what we were seeing in Ukraine. And in that room, you had folks that spent a lot of time thinking about this from sort of a more DOD perspective. And they were coming very much from looking at Russian activity, saying, all right, it's about integration, right? This is about maturity of integration of forces. And we're not seeing it from the Russians. So they're failing,

and would we do better? And then those of us that didn't have that same background, that had a very different, more academic, political science background, were going, yeah, they look like they're doing strategic bombing, but without the bombs, right? Like, the kind of terror that comes with strategic bombing does not occur when you're hacking, but it's sort of more indiscriminate, right? Hitting large populations, hitting critical infrastructure.

Are they even trying to integrate? Is that even the point? And we had somebody in the room, a kind of Russia expert, who was going, yeah, that's not consistent with Russian doctrine, right? They're doing a very different thing. And that conversation wouldn't have happened if, one, it was only technical heads in the room, because they would be asking technical questions, or, two, it was only tactical people in the room, because they'd only be asking tactical questions. And instead you had this: are we observing the wrong thing, right? Are we indexing on the wrong variable and then designing our force structure incorrectly to adapt to it? I'd like to go back to your

mayonnaise, sriracha, whatever condiment we're on now.

A blend. Let's go with, what is it, that aioli or something at that point? Dijonnaise. Some honey mustard on there. Grossing everybody out here. But this idea of, you know, what is the modern equivalent of our baseline training? So we have to move beyond our annual Greg-and-Tina one-hour cybersecurity challenge in the DoD. Yeah, don't click the link, for the love of God.

We wouldn't put a sailor, a Marine, a soldier, an airman, a guardian in the field without knowledge of basic life-saving measures coming out of basic training. We wouldn't put them out there without knowing how to operate their rifle. That is basic equipment. And yet every single one of them, I don't care if you're a medic or a mechanic,

we all use computers. And yet we're putting them in the field with these computers, which present a major risk to our operations, without a baseline understanding of what they should do about it to protect the organization.

Operate, yes, we grow up operating the equipment, but to protect it, right? As part of learning to shoot a rifle, you also learn to clear a jam and take it apart and clean it. I think that's an interesting pivot to seeing the technology as an ally, right? I mean, this is one of the things to me that's been so interesting about AI: AI is a very intimate technology, right? It really allows an intimacy with the knowledge, and an augmentation.

And so, you know, to your point, having someone come in and saying, OK, this is like a battle buddy, right? But it's not flesh and blood, right? It is a battle buddy of a different sort. And I've got to know it, the way I've got to know the guy or the gal, that battle buddy who is fighting next to me. And I'm responsible for, like, life saving. And I'm also responsible for being able to say, are you in a good

state to make this call? I mean, so I think there's that question of being able to... And obviously, there's a lot of work in human-machine teaming. But I think,

in some ways, if we actually blessed the idea of looking at technology as a really healthy augmentation that is a partner to you, we would actually stimulate more people to say, well, then I should probably know about it, right? I should probably have a different relationship with it, instead of it being, you know, that IT thing over there. I wonder what a different mindset would look like, especially given the technology we see coming down the pipe.

So I think the culture and workforce side of this is one where we have historically put security last, and industry did this previously, right? So this is typical government, a decade lag,

where, you know, security isn't strategic, it's an afterthought: you do a thing and then you bring it to security or legal to, like, bless it, right? We very much have this weird sort of adversarial relationship with security. Whereas I think a healthy example would be a world where mission owners are the ones accepting risk.

And they don't send that away to an AO somewhere, or to a CIO or a cyber advisor. But like, if I'm the mission owner, I'm a ground force commander or an MO, and I can sign a contract, I'm sponsoring technology, I'm signing the MOU or the letter of support, you need to go do all this programmatic shit.

But then I'm incapable of saying you can turn it on, right? I have a really hard time with that: I'm the mission owner. And then, you know, we talk about that as being healthy, right? Seamless. Integrated. Right.

Right. Cyber is sort of an artifact of the healthy culture. I then juxtapose that with: if that's what we're hoping for, then when we see, hey, we're going to stand up a cyber branch as a full separate freaking entity, we should be asking, how the hell is that going to work? And what does that look like at scale? How do you all think about that, without asking, necessarily, especially the two of you in government, to take a position on it? But how do you think about

whether or how we should stand up a separate cyber force? The debate on that is really heating up on the Hill, in the department, and in the pages of War on the Rocks, among other places. Sure. So what I usually do when I look at any organization that says, you know, we should do this larger thing, the question I really ask myself right now is,

what operational mentality are the people within that thing going to have? And what I mean by that is, we've gotten really, really good over the past decade or so,

and this is at any level, individually but even societally, at nourishing and feeding the role of the critic, and in fact increasing the role of the critic. And so what I mean by that is, you don't watch the news, you watch five people's opinions about the news, or four people's opinions. You don't watch sports, you watch two people's opinions about sports.

And I think one of the things we have to keep in mind is that if you are a doer, if you are truly willing to be the gladiator in the ring, right, the mission owner, if you're really there...

We have to ask ourselves, and this is something I had to ask myself as I came into AFRL and into my work: how many of my people's jobs are critic jobs, and how many are doer jobs? And being able to really make sure that we were always, always over-weighting to the doer side. And so I think when we think about anything, whether it's a kind of cyber force, whether it's anything, the real question is,

what are they held accountable for? How are they expected to show up? And this would be true no matter what the description was. If they're expected to show up

and bring this incredible talent set by doing, by partnering up, by becoming the BFF of someone who's trying to bring something operational, et cetera, that's going to be amazing. If they or anyone else are coming in, in essence, with the role of critic, or slowing something down, or pointing out the ways that something might be risky while not being accountable to solve it,

I think, you know, that really becomes highly problematic given the way we have to meet the moment. And so, for me, it's not that any of these entities are a good or bad thing. I think they can be great things if the leaders require them to show up as doers and champions versus, you know, maybe critics only. And again, there's a lot of talk about what something should be, and then there's how you incentivize others,

you know, and who you make the hero, right? And I think one of the things I'll just end on is this: I really encourage people listening to pay attention, in your given day, to how many people are getting your attention, your airtime, you know, the nourishment of your moments, who are pointing out something that is wrong, or pointing out risk, or what have you, but have no

impetus to be a part of the solution. And if you are finding that that's over-indexed, right, then you've got to start seeking doers. You've got to start rewarding the people who are in the ring with your hours and attention, because otherwise we should not be surprised if we end up not only in the place we are now, but behind in what it is that we prioritize.

I think the question I always raise when we talk about standing up something new, whether it's in DoD or somewhere else, is: what problem is it trying to solve? And it's not always well articulated, or necessarily that sellable. And I think the reason that's really important is that sometimes you're setting up something new because of the bureaucratic structure of an organization: you need it for funding lines,

or jurisdictional questions, authorization questions. Those aren't necessarily sexy, but they can be deeply motivating, right? If your ability to carry out the mission depends on those kinds of things. So there are sometimes bureaucratic reasons. But the reason I am often hesitant, I'll put it this way:

I think there should be a high bar to standing up something new. Anyone who's looked at a U.S. government org chart instantly thinks, I maybe need a degree just to understand this. And I think that's especially important in spaces like cyberspace, which I think of as a prime enabler. Every other domain and every other portion

relies on some of that for its critical functionality. Even if we look at something like Space Force, right, a huge portion of their conversation is a cybersecurity conversation. And given that it's a prime enabler, the biggest challenge or the biggest threat, the metric on which we will succeed or fail, will be whether our adversaries play into our gaps

or we build unnecessary silos, right? So our ability to bridge gaps and take down silos, and our adversaries' ability to play into those silos, is going to be incredibly decisive. So there may be organizational, bureaucratic reasons, but the question I would have front and center is: tell me about your problem. What problem is this meant to solve? And does it unnecessarily build more silos and gaps?

To hearken back to that, too: once you know the problem, then there's actually setting up the values and incentive structures to make sure that that is what is being solved, right? That is what is being done. That is what progress is being made on. And so, you know, I'm totally in agreement. I think

these types of things are really exciting if they become additive and advantage-gathering. And in some ways, you know, sometimes I look at these things and say, whose life will this make easier, right? And I think if there's a clear answer to that, and if those people understand that's part of the mission, then that's incredible.

I would flip this on its head, right? The question's not, should we get a cyber force? The question is, what problem does the DoD need solved? And a cyber force is one of the possible answers. For example, right now, if we have been over-indexing on defensive versus offensive, or if we have a new range of technologies like AI that we want to move through more quickly and we don't have enough people who really understand them, who are comfortable with them, then being able to have additive champions is amazing. Yeah.

So to the point of what problem we are trying to solve, or what opportunities we are trying to accelerate: the department does have, and this is my required disclaimer, a number of National Defense Authorization Acts that ask us to look at exactly that. We're going after those options for force generation that could potentially solve some of these underlying problems.

And there's an effort called Cybercom 2.0 that wraps up a lot of the thinking and studying that has to happen before the SecDef can come to a decision on that force generation model. So, all that said, I know we were talking about Space Force. I think it's important to note there that General Nakasone, since his retirement, and General Mattis have both pointed at Space Force

to say we should probably very carefully consider the administrative burden on all the services that the creation of a new force requires. So when we think about the stand-up of a cyber force, you can't think of it solely in terms of taking the existing cyber mission force at Cyber Command

plus potentially forces from the services, and putting them together. It is not just that. It is administrators, it is lawyers, it is medics, and the services are all already struggling to recruit and retain those personnel, in addition to struggling to recruit and retain cyber forces. So there really has to be a careful consideration of the administrative burden.

The one other thing I would point out is a careful consideration of how we would potentially pull apart defensive cyber operations. On the offensive side, I think it can be excised much more cleanly, if that was the choice for force generation, although there are still significant consequences that would have to be considered there. But on the defensive side,

the services will always maintain a requirement, from the authorizing official level on up, for defensive operations to defend and secure our own networks. So that is a subject I haven't heard talked about much: how would we potentially pull apart those roles and responsibilities? That has to be considered really carefully, especially when we're talking about this period of concern between now and 2027, and

what disruption that could potentially cause to the services and to Cybercom. I love it. Part of what you're raising strikes me, and it's going to be a perfect handoff to Tyler, so get ready. But, you know, the first thing, to that point, is that our job is to start, maintain, and stop. So the interesting thing to me is, anytime anyone wants to bring me something, it's, well, what does this allow me to stop doing, as much as what you want it to start?

But I'm struck by, when you think about the commercial merger-and-acquisition model, right, the intentionality, to your point, around bringing things together. There is incredible calculation

on very practical synergies, right? Very practical, no joke: what are we going to stop doing? What do we now have three of that maybe we only need one of? And so I think, not just in this instance, we can take some lessons from the private sector when you're looking at effectiveness: what are the intentionalities, and what are the metrics, if you will,

around whether starting anything, in my case, or, in this case, starting something at a potentially much grander scale, makes sense. And I think that thoughtfulness is always actually really helpful, too, because for those of you who've been around mergers and acquisitions, it kind of creates a roadmap, so that everyone who's involved in it understands

what's trying to be achieved and what is changing. And I think that story really matters, and people's identity and how they show up in the current org or the new org really matters. And my hope is for that intentionality, because it is so powerful. I think I'll take us home, trying to be brief on this one. I think if we just spent, I don't know, an hour or whatever (time and birds, both not real, so I don't know how long we've been here)

but a material amount of time talking about the bureaucratic and administrative limfacs, the limiting factors, and the cultural challenges, I would question how we solve that with an additional layer of all of that just thrown on top.

That, to me, feels like, hey, we're going to do the complete opposite of Occam's razor and assume four or five perfect acts of God are going to enable this. And to go back to my old days as an intel planner or an ops guy: if you have to assume an act of God for your thing to work, your thing will not work, right? So we'll just leave it there. Yeah.

Hope is not a course of action. That's right. This has been a great, great episode. I'd love to have you all back on the show to talk about the AI aspects of cyber, which we didn't get to, but this is a really great group. And thank you so much for joining the show. Absolutely. Thank you. Thanks so much for listening to this episode of War on the Rocks. Don't forget to check out our membership program at warontherocks.com slash membership. Stay safe and stay healthy.