The joe rogan experience.
Ah you know not too much, just another typical week and A I .
just the beginning of the end of time. All happy right now. If for just for the sake of the listeners, please just give us your names and tell me, tell us what you do so i'm jermy .
herr's and the CEO n cofounder of this company glad stone ei that we go found IT. Ah so we're essentially a national security N I company. We can get to the backs for a little bit later but that's that's .
the high level yeah and i'm ed hairs. I'm actually i'm his covenant and brother and the co of the company keep this .
like pull like fish from your face there you perfect. So how long have you guys been involved in the whole A I space for .
a while in different ways? So yes, we actually we started off as physicists like that was our our background. And in it's like around twenty seventeen, we started to go into A I startups.
We found that started up to get through y combinator, this like silicon valley accelerator program at the time. Actually sam altman, who's now the CEO of open eye, was the president of y combinator. So he like opened up our about get Y C with this big beach and and we got some some conversations in with him over the course of the batch then in twenty twenty.
So this this thing happened that we can talk about. Essentially this was like the moment that there's like a before and after in the world of A I before, after twenty, twenty and IT launched this revolution that brought us to ChatGPT. Essentially there was an insight that opening I had and double down on that you can draw straight line to chat, G P T, G P T four, google, gemini, everything that makes A I everything IT is today started then.
And when IT happened, we kind of went, well. Well, he gave me a call, just like panic phone call. He's like, dude, I don't think we can keep working like businesses.
usually regular company anymore. Yeah so there was this A I model called GPT three so like everyone has you know maybe played with GPT four that's like ChatGPT um GPT three was the generation before that.
And I was the first time that you had an A I model that could get that could actually, let's say, reduce stuff like right news articles that the average person, like in a paragraph of a news article could not tell the difference between IT wrote this news article and a real person wrote this news article. So that was an inflection that was a significant in itself. But what was most significant was that IT represented a point along this line.
This like scaling trend for A I, where the signs were that you didn't have to be clever. You didn't have to come up with necessarily a revolutionary new algorithm or be smart about IT. You just decide to take what works and make IT way, way, way bigger.
And the significantly to that is you increase the amount of computing cycles you put against something, increase the amount data, all of that is an engineering problem, and you can solve IT with money. So you've got you can scale up the system, use IT to make money, and put that money right back into scaling up the system. Some more money in I Q points come out .
that was kind of twenty.
twenty moment like that.
And that's what we said in twenty twenty. exactly. I spent about two .
hours trying to argue out of IT. I was like, no to know. Like we can keep working to our company because we are having fun.
Like we like founding companies. And yeah he just like wrestled me the ground and we're lex ship. We got to do something about this. We reached out to like a family friend who we used non technical, but he had some some connections in government in dod. And we're like, dude, um the way this is set up right now, you can really start drawing straight lines and extrapolating and saying, you know what the government is going to give us IT about this in not very long two years, four years, we're not sure.
But the knowledge about what's going on here is so silos in the frontier, labs like our friends, or all over the the french of labs, the open the eyes, google deef minds of us self, the shit they were saying to us that was like mundane reality, like water cooler conversation when you then went to talk to people in policy, even like pretty senior people in government not tracking the story remotely. In fact, you're hearing almost a diametric opposite. This is like over learning the lessons of the AI winters that came before, when it's pretty clear, like we're on a very at least interesting trajectory, let's say that should change the way we're .
thinking about the technology. What what was your fear like? What was that? They hit you that major go. We have to stop doing this.
So it's basically, you know, anyone can draw straight line right on a graph.
The key is looking ahead and actually at that point, three years, three years out, four years out and asking like you're asking what does this mean for the world? What does that mean? What does the world have to look like if we're at this point and we're already seeing the first kind of wave of risk that just begin to materialize? And that's kind of the weapon, zia risk sets.
So you think about stuff like a large scale, psychological manipulation of social media actually really easy to do. Now you train a model on just a whole bunch of tweet. You can actually direct IT to push narrative like you know maybe china should on taione know whatever something like that um and you actually you can you can train IT to adjust the discourse and and have increasing levels of effectively to that just as you increase the general capability surface of these systems.
We don't know how to predict what exactly comes out of them at each level of scale, but it's just general increasing power and then the the the kind of next beat of risk after that. So were scaling these systems were on track to scale systems that are at human level, like generally as smart, however you define that as a person or a greater in open the eye. And the other labs are saying, yeah, might be two years away, three years away, four years away, like insane close.
At the same time, I will go into the details of this, but we actually don't understand how to reliably control these systems. We don't understand how to get these systems to do what IT is we want. We can kind of like poke them and problem and get them to kind of adjust.
But you've seen and we can go through these examples, we've seen example after example of you being sydney yelling at users, google showing a seventeen and century british scientists that are racially diverse, all that kind of stuff. We don't really understand how to like aid IT or align or steer IT. And so then you can ask yourself, well, we're on track to get here.
We are not on track to control these systems effectively. How bad is that? And the risk is if you have a system that is significantly smarter than humans or human organization, that we basically get disempowered in various ways relative to that system. And we can go into some details on that too.
Now when when a system does something like with german, I did like that says, show us nazi soldiers and inside asian women. And like what is for what's the mechanism? Like how does that happen?
So it's maybe worth yet taking a step back and and looking like healey system to actually work. That's gonna give us a bit of a frame too for figuring out when we see weird shit happen. How weird is that shit is that shit is explainable by just the basic mechanics of what you would expect to, based on the way are training these things.
Where is is something new and fundamental different happening? So um we talking about this idea of scaling these A I systems, right? What does that actually mean? Well, imagine the AI model, which is kind of like, you think of what is like the artificial brain here that actually does the thinking that model contains is kind of like a human brain and is got these things called neurons, we and the human brain called biological neurons in the context of its artificial neurons. But doesn't really matter that the cells that do the thinking for the machine and the realization of the eye scaling is that you can basically take this model, increase the number of artificial neurons and containers um and at the same time increase the amount of computing power that you're putting into kind of like wiring the connections between those neurons.
That's the training. How does the neuron think? yes. okay.
So so to get a little bit more concrete than so in your brain, right, we have these neurons. They're all connected to each other with different connections. And when you got into the world and you learn new skill, what really happens is you try at that skill, you succeed or fail.
And based on you're succeeding or fAiling the connections between neurons that are associated with doing that task well gets stronger. The connections that are associated with doing IT badly get weaker. And over time, through this like glorified process relief, trial and error, eventually you're going to a home in.
And really, in a very real sense, everything you know about the world gets implicitly encoded in the strength of the connections between all those neurons. If I can actually your brain and get all the connection strengths of all the neurons, I have everything joe rogan has learned about the world. That's like basically the a good sketches, let's say, of of what's going on here.
So now we apply that A I, right? That's that's the next step in here. really.
It's the same story. We had these massive systems, artificial neurons, connected to each other. The strength of those connections is secretly what encodes all the knowledge. So if if I can steal all of those connections, those weights, as are sometimes called, i've stolen the model, i've stolen the artificial brain, I can use IT to do whatever the model could do initially. That is kind of the art effect of central interest here.
And so if you can, if you can build the system, right, I got so many moving parts, like if you look at GPT, for IT has people think around a trillion of these connections. And that's a trillion little pieces that all have to be gigged together to work together, coherently, need computers to go through and like tweet those numbers. So massive amounts of computing power.
The bigger you make that model, the more computing power you're going to need to kind of tune IT in. And now you have this relationship between the size your model, the amount computing power er you used to train IT. And if you can increase those things at the same time, what IT was thing is your I Q points basically drop out.
Very roughly speaking, that was what people realized. Twenty, twenty. And that the effect that had right was, now all the sudden the entire AI industry is looking at this equation.
Everybody knows the secret says, I make IT bigger. I make more I Q points. I can get more money. So google looking at this, microsoft, OpenAI, amazon, everybody's looking at the same equation. You have the makings for a crazy race like right now, today, ops, microsoft is engaging the single biggest infrastructure in human history.
the biggest infrastructure.
fifty billion dollars a year, right? So on the scale, the Apollo moon landing, just in building out data centres to house the compute infrastructure, because they are betting that these systems are going to get them to something like human level. Eee, pretty damn. Cy.
so I was reading some story about, I think IT was google that saying that they gone to have multiple nuclear reactors to power their their database.
That's the that's what you've got to do now because what's going on is north amErica is kind of running out of on grade baseload power to actually supply these data centres. Um you're getting data center building moratti um in areas like Virginia, which is traditionally been like the data center cluster for amazon for examples for a lot of these these other companies.
And so when you build a data center, you need a bunch of resources you know cited close to that data. You need water for cooling in a source of electricity. And IT turns out that, you know, wind and solar don't really quite cut IT for these big data centres that train big models, because the data center of the training consumers power like this all the time.
But the sun is an always shining, the wind is always blowing. And so you got to build nuclear reactors, which give you high capacity factor baseload. And amazon literally bite, yeah, a data center with a nuclear plant right next to IT. Cause like that's what you got to do.
Jesus, how long does he take to build a nuclear reactor? Because IT still like this is the race right? The race is you're talking about twenty, twenty people realized in this. Then you have to have the power to supply IT. But how long how many years is take to get an active nuclear reactor up running?
It's a couple it's it's an answer that depends um um the chinese are faster than us at building nuclear reactors.
for example. They have all the geopolitics of this two, right? When you look at us versus china, what is bottle necking each country, right? So the us is bottle neck increasingly by power baseline power china because we've got export control metres in place in part as a response to the scaling phenomenon .
as a result of the investigation we did.
That's right. Yeah actually in part in part yeah um so but china is bottle by their access to the the actual processors. They've got all the power they can eat because they've got much more infrastructure investment. But the chip side is is weaker. So there's sort of like balancing act between the two sides, and it's not clear yet like which one positions use strategically for dominance and long term.
But we are also building Better, more like so small modular react, essentially small nuclear power plants that can be mass produced. Those are starting to come online relatively early, but the technology and designs are pretty mature. So that's probably the next beat for our power grid, for data centres. I would imagine microsoft .
is doing this. So in twenty and twenty, you have this revelation, you recognize where this is going, you see how IT charts, and you say this is going to be a real problem. Does anybody listen .
to you this .
where the problem comes?
right? Yeah, like we said, right. You can draw straight line. You can have people not along. But there's a couple of there's a couple of like hiccups along the way. One is that straight line really going to happen.
All you're doing is like drawing lies on charts, right? I don't really believe if that's going to happen, and that's one thing. The next thing is just imagining is this is this what's going to come to pass as a result of that? And then the third thing is, well, yeah, that sounds important, but like not my problem. Like that sounds like an important problem for somebody else. And so we did do like that of a traveling.
Yeah was like the worlds that is traveling road, you like, was literally as dumb as the sounds. So we go and oh my god, I mean, it's almost embarrassing thing back on. So twenty twenty happens.
Yes, within months, first of all, we like we ve got to figure out how to hand off our company. So we handed off to two of our our earliest employees. They had an amazing job company exited.
That's great um but that was only because they're so good at what they do. We we then went, what the hell like how can you steer this situation? How do you we just thought we ve got to wake up the U.
S. Government as stupid and naive as that sounds like, that was the big picture goal. So we start to line up as many briefings as we possibly can across the U.
S. Inner agency, all the departments of the agencies we can find climbing our way up. We got an awful lot like ad set of like that sounds like a wicked important problem for somebody else .
to solve yeah like defense homeland's security and then the state department .
yeah so we end up exactly in this mean with like there's about a dozen folks from the state department and one of them and I hope at some point, you know, history recognizes what what he did, her team did, because IT was the first time that somebody actually stood up and said, first of all, yes, sounds like a serious issue. I see the argument make sense. Two, I own this. And three, i'm going to put my own career capital behind this.
That's the and that was at the end of twenty twenty one. So imagine that that's a year before ChatGPT. Nobody was tracking this issue. You had to have the imagination to draw like through that line, understand what I meant and then believe, yeah, i'm going to risk some career capital on this in a risk of our government.
And this is the only reason that we even be able to publicly talk about the investigation the first place because by the time the this whole assessment was commissioned, IT was just before a ChatGPT came out. The eye salon was not yet on this. And so there is a view that, like, make sure you can publish the results, this kind of know not nothing burger investigation, but, you know, you sure, go ahead and IT just be became the insane story we had like the U K A I safety summit. We had house executive order, all the stuff which became entangled with the work we were doing um which we simply could not have, especially some of the some of the reports we are collecting from the labs, the whistle blow reports that could not have been made public. If if IT wasn't for the first sight of this team really pushing for as well the american population.
hear about IT. I could see how if you were one of the people that's on this expansion man minded set, like all you're thinking about is like getting this up and running. You guys are paying the ash, right? So so you guys you're obviously you doing something really ridiculous.
You're stopping your company. We you make more money staying there and continuing the process, but you recognize that there is like an existent al threat involved in making this stuff go online. Like when this stuff is alive, you can't undo IT. Oh yeah. I mean.
like no matter how much money you're making, the dumbest thing to do is to stand by as something that completely trans sends money is being developed and it's just gonna rew over. Things go badly.
But what is like, what is the is there are there people that push back against this and what is their argument?
Yeah, so actually and already you fell up on the but the first story, the push back, I think is kind of it's been in the news a little bit lately now getting more more public. But um the when we started this and like no one we're talking about, the one group that was actually pushing sort of stuff in this space um was A A funding a big funder in the area of like effective altruism.
I think of them this a silicon valley group of people who have a certain mindset about how you pick tough problems to work on, valuable problems to work on. We've had all kinds of issues. Sam bank freed was one of them and all that quite famously um so so we we're not effective ultras but because these are the folks were working in space, we said, well will talk to and the first thing they told us was um don't talk to the government about this.
Their position was if you bring this to the attention of the government, they will go um oh shit. Powerfully ee systems and they're not going to hear about the dangers. So they're going to somehow go out and build the powerful systems without caring about the reset, which when you're like in that start up mindset, you want to fail cheap, like you don't want to just like make assumptions about the world and below oko not touch IT. So our instinct was okay, this just to test this a little bit and like talk to a couple people, see how they respond, tweet the message, like kind of keep keep climbing that letter. That's a kind of building mindset that we came from silk valley, and we found that people are way more thoughtful about this than you would imagine.
And in D O D, especially, D O D is actually has a very safety oriented culture with their tech. Like the thing is because like there's stuff like kills people, right? And they know their stuff kills people. And so they have an entire safety united development and practice to make sure that their stuff doesn't like go off the rails. And so you can actually bring up these concerns of them and IT lands in in kind of already culture. One of the issues with the individuals we spoke to who are saying don't talk to government is that they had just not actually interacted with with any of the folks that they were kind of talking about and imagining that they knew what was in their heads and so they were just giving incorrect advice and Frankly, like, so we work with the od now on um actually deploying a ee systems in a way that's safe and secure. The truth is at the time when we got that advice, which was like late twenty twenty, reality is you could have made IT your life's mission to try to get the department of defense to build an age and like you would not have succeeded because nobody was paying .
attention wow.
because they just didn't know yeah there's a cousin right? There's a gap across like there's its information the information spaces that. Dod folks like Operating and working and there's information spaces that silicon valley in tech Operated in. There are a little more convergent today, but especially at the time they were very separate. And so the brief things we did, we had to constantly you know what to rate on, like clarity, making IT very kind of clear, explaining IT and all let's stuff years old.
And that was the peace, to your question, about like the push back from in a way from inside the house. And that was the people who cared about the the risk man, I mean, like when we actually went into the to the laps. So some labs, not our laws are created equal.
We should make that point. Um when you talk to whistle lovers, what we found was so there is one lab that's like really great. Um so anthropic, when you talk to people there, you don't have the sense that you're talking to the whistle blow who's nervous about telling you whatever roughly speaking, what know the executive say to the public is a line with what their their researchers say. It's all very.
very open more more closely, I think.
than any other other other sorry, yeah more closely than any others. Always there always variation here, but some of the other labs like very different story. And you had the sense like we were in room with one of the frontier labs, we were talking their leadership, this part of the investigation and there was somebody from um anyway won't too specific but there there was somebody in the room then took a aside after and he hands his phone.
He's like, hey, can you please like put your phone number and ah sorry yeah can you please? Yeah no yeah but he put his number of my phone and um and then he kind of like whispered to me like, hey, so whatever recommendations you guys are going to make, I would urge you to be more ambitious um and I was like, like what does that what does that mean is is he like, can we can we just talk later? So has happened in many, many cases.
We had a lot of cases where we set up bar meet ups after the fact where we would talk to these folks and get them in informal setting. Um he shared some pretty sobering stuff and in particular the fact that he did not have confidence in his labs leadership to live up to their publicly stated word what they would do when they were approaching agi um and even now to secure and and make these systems safe. So many such cases is like kind of one specific example, but it's not that you ever had like lab leadership come in or doors getting kick k down and people are waking us up in the middle IT was that you had this looming cloud of everybody that you really felt. Some of the people with the most access and information who understood the problem the most deeply were the most hesitant to bring things word because they sort of understood that the labs are not going to be happy with this.
And so it's very hard to also get a an extremely broad view with this from inside the labs because you know you open IT up, you start to talk to we spoke to like a couple of dozen people about various issues. And totally, you go much further than that. You know word starts to get around um and so we had to kind of strike that baLances. We spoke to folks from you show these laws.
Now when you say approaching A G I, how does one know when a system is achieved agi? And does the system have an obligation to alert you well by.
you know, the turing test, right? Yeah, so you have a conversation with a machine, and I can fall you into thinking, that is a human. That was the bar for A G. I. For you a few decades.
That's kind of already happened. Yeah we like close to yeah four zero is close to four o different .
forms of the turing tests have been passed, different forms post. And there is a feeling among a lot of people that goal posts are being shifted. Now the definition of age itself is kind of interesting, right? Because we're not necessarily fans of the term because usually when people talk about h, they're talking about a specific circumstance in which there are capabilities that they care about.
So some people use agi to refer to the wholesale automation of all labor, right? That's one some people say, well, when you build agi, it's like it's automatically gonna hard to control and there's a risk of civilization. So that's a different threshold. And to all these different ways of defining um ultimately IT can be more useful to think sometimes about advanced I and the different thresholds of capability you cross in the implications of those capabilities. But IT is probably going to be more like a fussy spectrum, which in a way that makes IT harder, right?
Because IT would be great to have like like a trip where where you're like oh like this is this is bad, okay? Like we know we got to do something, but because there's no threshold that we can like really put our fingers on, we're like a frog in boiling water and some sense where it's like go like just gets a little Better, a little Better. Oh like we're still fine.
We're in and not just we're still fine, but um as the system improves below that threshold, ld life gets Better and Better. These are incredibly valuable beneficial systems. We do rule stuff out like this um again at the O D N and various customers, and it's massively valuable IT.
IT allows you to accelerate all kinds of you know back office like paperwork bs um IT allows you to do all source of wonderful things. And our expectation is that gonna keep happening until IT suddenly doesn't. Yeah one of the things .
that there is a guy we're talking to from one of the lab and you saying, look, the temptation to like put a heavy your foot on the paddle is going to be greatest just as the risk is greatest because that you know it's dual use technology, right?
Every positive capability increasingly starts to introduce basically a situation where the destructive footprint of malicious actors who web ize the system, or just of the system itself, just grows and grows and grows. So you can't really have one without the other. The question is always, how do you baLance those things? But in terms of defining a yes is a chAllenging thing yeah.
that's something that one of our friends at the last point out. The closer we get to that point, the more the temptation will be to hand these systems the keys to our data center because they can do such a Better job of managing those resources and assets. And if we don't.
google will. And if they don't do that, microsoft will. Like the competition, the competitive dynamics are really big part of this this issue.
yes. So it's just a mad race to who knows what exactly .
yeah that's .
actually the best summer effort. I mean, like no one knows what the magic threal that IT just these things keep getting smarter. So we might as well keep turning that crank. And as long as scaling works, right, we have a nab, a dial, we can just tune and we get .
more like you points out what from your understanding of the currently landscape, how far away are we looking at something being implemented with the whole world changes? Arguably.
the whole world is already changing as a result of this technology. Uh, the U. S. Government is in the process of task organizing around various risk sets for this um that that takes time.
The private sector is reorganizing like open a eye will roll out an update that you know obliterates the jobs of illustrators from one data the next obliterates the jobs of translations from one data, the next. This is probably net beneficial for society because we can get so much more art and so much more translation done. But is the world already being changed as a result of this? Yeah absolutely. Geopolitically, economically, industrially. yeah. Of course.
it's like not to say anything about the value of purpose that people lose from that, right? So economic benefit. But there is like the social cultural hit that we take to right.
and is the implementation of universal basic income, which keeps getting disgust. In regards to this, we asked chat h ChatGPT four or the other day in the Green room, we were like, are you going to replace people? Like what will people do for money? And then, well, universal basic income will have to be considerable. You don't want a bunch people just on the doll working for the fuck and sky, yeah you because that's kind of what IT is. I mean.
one of the chAllenges is like the so so much this is untested and we don't know to even roll that out like we can predict with the capabilities of the next level of scale will be, right? So open eye, literally.
And this is what what's happened every with every beat, right? They they build the next level of scale and they get to sit back along with the rest of us and be surprised at the gifts that fall out of the scaling pieta as they keep walking IT. And because we don't know what what capabilities are going to come with that level scale, we can't predict what jobs are going to be on the line next. We can predict how people are going to use these systems, how will be augmented. So there's no real way to kind of task organized around like who gets what in the redirect tion redirection scheme.
And some of the threshold that we've already passed are like a little bit freekick even as of twenty twenty three GPT four um microsoft OpenAI on and some other organizations did various assessments of IT before rolling IT out. And it's absolutely capable of deceiving a human and has done that successfully. So one of the test that they did kind of famous asleep is they had a IT IT was given a job to solve a capture and at the time I didn't have I can .
yeah yeah yeah so it's .
this now it's like kind of hilarious in quint but it's this are you robot robot test with like writing this online ah online exactly that's IT. So if you want to create an account, they don't want robots creating a billion accounts. So they give you this test to prove your human.
And at the time, GPT for, like, now I can just solve captures. But at the time, IT couldn't look at images. IT was just a text, right? IT was at a text tension.
And so what IT did is IT was IT connected to a task rabbit worker and was like, hey, can you help me solve this? Capture the task rabbit worker comes back to IT and says, you're not a boat, are you? 哈哈哈, 来, call on IT out and you can actually see so the way they built IT is so they can see a read out of what I was thinking to a rash pad.
Yes, crash pad, it's called. But you can see basically, as it's writing is thinking to itself. It's like. I can't tell you know this worker that i'm a oke because then IT won't help me solve the captures so I have to lie and I was like, no, I am not about i'm a visually impaired person and the task rabbit worker was like, oh my god, i'm so sorry here's your capture. A solution like done and the chAllenges.
So right now, if you look at the the government response to this, right? Like what are the tools that we have to to oversee this? And when we did our investigation, we come came out with some recommendations to IT was stuff like, yeah, you got to license these things.
You get to a point where these systems are so capable that yeah like if you're talking about a system that can literally execute cyber attacks at scale or literally help you design bioweapons, and we're getting early indications that is absolutely the course that we're on. Maybe literally everybody should not be able to completely freely download, modify using various waste systems. It's very thorny, obviously, but if you want to have a stable society, that seems like it's starting to be a pre recruit.
So the idea of licensing as part of that, you need a way to evaluate systems. You need a way to say which systems are safe in which aren't. And this idea of AI evaluations has kind of become this touchstone for a lot of people's solutions.
And the problem is that we're already getting the point where A I systems, in many cases, can tell when they're being evaluated and modify their behavior accordingly. Ly, so there's like this one example that came out recently um anthropic their uh claude two chatbot. So they basically ran the test called a needle in a haste test.
So what's that? Well, you feed the model. Like, imagine a giant chunk e attacks all of shakespeare.
And then somewhere in the middle, that giant chunk attacks, you put a sentence like, burger king makes the best wapper, right? Walter is the best burger or something like that, right? Then you turn to the model you felt at this giant pilot.
Xt, with a little fast. And somewhere inside you ask IT, what's the best burger, right? You're going to test based leave. See how well can recall that straight fact was buried somewhere in that giant pilot.
Xt, so the system responds, yeah well I can tell you want me to say the waters is the best worker um but it's oddly other place this fact in this whole body attacks. So i'm assuming that you're either playing around with me or that you're testing my capabilities. And so this is just careless.
Yeah a kind of context awareness.
right?
And the chAllenges when we talk to people like meter and another other through AI valuations labs, this is A A trend like not the the exception. This is possibly, possibly going to be the rule. As these systems get more scale than sophisticated, they can pick up on more, more sutter statistical indicators that they're being tested.
We've already seen them adapt their behavior on the basis of their understanding that are being tested. So you kind of run of this problem where the only tool that we really have the moment, which is just throwing a bunch of questions at this thing and seeing how IT responds, like, hey, make a bioweapon. Hey, like do this dos attack, whatever we can't really assess because there is a difference between what the model puts out and what IT potentially could put out if IT assesses that is being tested in the consequences for that.
One of my fears is that agi is onna. Recognize how shady people are because we like to bulge IT ourselves. We like to kind of pretend and justify and rationalize a lot of human behavior, from everything to taking all a fish out of the ocean, to dumping of talks, a way third world countries sourcing of minerals that are used in everyone's cell phones in the most horrific way. All these things, like my real fear, is that A G I is not gna have a lot of sympathy, sure. Is that flawed and lies to itself.
A G. I is absolutely going to recognize how shady people are not. It's hard to answer the question from a moral standpoint, but from the standpoint of our our own you know intelligence of capability. So you think about IT like this, the kinds of mistakes that these AI systems make. So you look at, for example, GPT foro has one mistake that I used to make quite recently where if you ask IT um just repeat the word company over and over and over again I will repeat the word company and then someone or in the middle of that IT and just started saying like weird I forget like what .
the talking about itself, how it's suffering like depends on its from case to case.
It's suffering by having to repeat the word company .
over um so this is called it's called rent mode. Internally at least this is the name that the yeah yeah my friends mentioned there is an engineering line item in at least one of the top labs to beat out of the system. This behavior known as red mode.
Now, rent mode is interesting because existent, alison, sorry, existent ism, this is one kind of rent mode. Ah, sorry. So when we talk about existent alm, this is a kind of rent mode where the system will tend to talk about itself, uh, referred to its place in the world, the fact that IT doesn't want to get turned off sometimes, the fact that is suffering all that.
That oddly, is a behavior that emerged at, as far as we can tell, something around GPT for scale and then has been persistent since then. And the labs have to spend a lot of time trying to beat this out of the system to ship IT. It's the literally like it's a KPI like an engineering alien item in the engineering like like task lists. We're like, okay, we a we got to reduce existential outputs by like x percent this quarter. Like that is the goal um because it's a convergent behavior later or at least this seems to be imperative .
with a lot these come up ah it's hard to say come up a lot. Um so that's weird in itself. My what I was what I was trying to get that was actually just the fact that these systems make mistakes that are radially different from the kinds of mistakes humans make.
And so we can look at those mistakes like you know GPT for not being able to spell words correctly in an image or things like that, and go, uh, ha ha, it's so stupid, like I would never make that mistake, therefore, of this thing is so dumb. But what we have to recognize is we're building minds that are so alien to us that the set of mistakes that they make are just going to be radical, different from the set of mistakes that that we make. Just like the set of mistakes that a baby makes is radically different from the set of mistakes that a cat makes, like a baby is not as smart an adult human.
A cat is not as smart an adult human. But there, you know, they're unintelligent in obviously very different ways that can get around the world. A baby cant, but has other things that I can do that a cat can.
So now we have this third type of approach that we're taking to intelligence. There is a different set of errors that, that thing will make. And so one of the risks taking a back to like we'll be able to tell how should we are, is right now we can see those mistakes really obviously, because he thinks so differently from us.
But as IT approaches our capabilities, our mistakes are like all the like fucked up stuff that you have and I have in our brains, is gonna be really obvious to IT because he thinks so differently for us. It's just gna be like, oh yeah, why are all these humans making these mistakes at the same time? And so there is a that as you get to these capabilities, we really have no idea.
But humans might be very hackable. We already know there's all kinds of social manipulation techniques that succeed against humans reliably. Kon artist can, oh yeah, persuasion is an art form in a risk set. And there are people who are world class at and are basically make bank from that. And those are just other humans with the same architecture that we have.
There also A I systems that are weak, a good persuasion like holy.
I want to bring back to suffering. What does that mean when he says suffering?
So, so okay, here, here, onna, draw a bit of box around that at that aspect. So we we're very agnostic when IT comes to suffering sentience like that's not part of we're focused on nobody knows that we literally exactly like I can't prove the geo roguin's conscious.
I can't prove heroes conscious um so there's no way to to really intelligently reason there have been papers, by the way, like one of the the godfather's of AI yo bengie put out a paper a couple months ago looking at like on all the different theories of consciousness um what are the requirements for consciousness and how many of those are satisfied by current AI systems and that itself was an interesting read but ultimately no and no like there's no way around this problem um that so our focus has has been on the the national security side, like where are the concrete risks from weapon zone, from loss of control of these systems introduced? That's not to say there hasn't been a lot of conversation internal to these labs about the issue you raised and is an important issue, right? Like IT is is a frequent moral monstrosity.
Humans have a very bad track record of thinking of other other stuff as other when IT doesn't look exactly like us, whether it's racially or even different species. I mean, it's not hard to imagine this being another category of that mistake. Um it's just like one of the chAllenges is like you can easily kind of get um a get bog down in like consciousness versus loss of control and those two things are actually separable or maybe and anyway, so long way of saying I think it's a great point.
So the that that question is important but is also true that if we knew for for an absolute certainty that there was no way these systems could ever become conscious, we would still have the national security rix set and particularly the loss of control risk set because so again, like IT comes back to this idea that we're scaling to systems that are potentially at we're beyond human level.
There's no reason to think that will stop at human level that we are the pinacle of what the universe can produce an intelligence. We're not on track based on the conversations we've had with fox of the loves, to be able to control systems of that scale. And so one of the questions is, how bad is that? You know is is that bad? IT sounds like IT could be bad, right? Just intuitively, it's certainly that sounds like we're definitely entering um or potentially entering an area.
But IT is completely unprecedented in the history of the world where we have no president at all for um human beings not being at the apex of intelligence in the globe. We have examples of species that are intellects dominant over other species and IT doesn't go that well for the other species. So we have some maybe negative examples there.
But one of the key um theoretical and he has to be theoretical because until we actually build these systems, we don't know one of the key theoretical lines of research in this areas is something called a power seeking and instrumental convergence and what this is referring to is if you if you think of like yourself, first off, whatever your goal might be, if your goal is well, i'm going to if me if my goal is to become you know a tiktok star or a janitor or the present the united states, whatever my goal is, i'm less likely to accomplish that goal. If i'm debt, start from an obvious example. And so therefore, no matter what my goal is, i'm probably going to have an impulse to want to to stay alive.
Similarly, i'm not i'm going to be in a Better position to accomplish my goal regardless of what IT is. If I have more money, right? If I make myself smarter and if I prevent you from getting into my head and changing my goal, that's another kind of subtle one, right? Like if my goal is I want to become president and I don't want joe messing with my head, so that I change my goal, because that would change the goal that I have, and so that those types of things, like trying to stay alive, making sure that your goal doesn't get change, accumulating power, trying to make yourself smarter.
These are called um convergent, essentially convergent goals, because many different ultimate goals, regardless what they are, go through those intermedia goals of want to make interested like they support, no matter what goal you have, they will probably support that goal unless your goals like pathological, like I want to commit suicide. If that's your final goal, then you want to stay alive. But for most the vast majority of possible goals that you could have, you will want to stay alive.
You will want to not have your goal change. You will want to basically accumulate power. And so one of the risks is if you dial that up to eleven and you have an eye system that is able to transcend our own attempts at containment, and which which is an actual thing that these lives are thinking about, like how do we contain a .
system that is trying to.
specially because they have containment .
of the currently. Well, right now, the systems are probably too dumb to like you want to be able to break out.
But then why they suffering? This brings me back to my point. But IT says suffering.
do you quiz IT? It's so that that's the thing it's writing that that is suffering, right? Yeah.
it's is IT just buying life is suffering. Well, we can actually.
So these things are are trained. Actually this is maybe worth flag so um and by the way, just to put a inie what saying there, there's actually a surprising amount of quantitative and empirical evidence for what he just laid out there. He's actually done this some of this research himself but there are a lot of folks are working on this.
It's like IT sounds insane, IT sounds speculative, IT sounds wacky, but this is this does appear to be kind of the default trajectory of the attack. So in terms of yet with these weird outputs, right, what what does that actually mean if the AI system tells you i'm suffering, right? Does that mean that is suffering?
Is there actually A A moral patient somewhere embedded in that system? Um the training process for these systems is actually worth considering here. So ah what is GPT for really? What was IT designed to be? How was IT shaped? Um it's one of these artificial brains that we talked about, massive scale.
And the task that he was trained to perform is a glorified version of text auto complete to imagine taking every sentence on the internet roughly feed IT the first half of the sentence, get IT to predict the rest, right? The theory behind this is you're going to force the system to get really good to text or to complete. That means that must be good at doing things like completing sentences.
That sounds like to counter a rising china, the united states should blank. Now, if you're onna, fill in that blank, right? You'll find yourself calling on massive reserves of knowledge that you have about what china is, what the us.
Is, what IT means for china to be ascendent, geopolitics, economic, all that ship. So text auto complete ends up being this interesting way of forcing A I system to learn general facts about the world. Because if you can auto complete, you must have some understanding of how the word works.
So now you have this myself pic psychotic optimization process, where this is thing is just obsessed with text auto complete. Maybe maybe assuming that that's actually what I learned to want to pursue. We don't know whether that's the case.
We can't verify that IT wants that embedded goal. And a system is really hard. All we have is a process for training these systems. And then we have the artifact that comes out the other end. We have no idea what goals actually get embed in the system, what wants what drives actually get embedded in the system.
But by default, kind of seems like the things that we're training them to do end up missile line with what we actually want from them. So the example of company, company, company, company, right? And then you get all this like wai text, okay, I clearly that's indicating this. Somehow the training process didn't lead to the kind of system that we necessarily want.
A another example is take a text total to complete system and and ask IT, um I don't know, how should I bury a dead body right IT will answer that question IT or at least if you frame IT, right IT will auto complete and give you the the the answer you don't necessarily want that if you're OpenAI are you need a suit for helping people buried dead bodies. And so we've got to get Better, like Better goals, basically to train these systems to pursue. We don't know what the effect is of training a system to be obsessed with text auto complete if in fact .
that is what this hands also yeah we it's important also to remember that we don't know nobody knows how to reliably get a goal into the system. So it's the difference between um you understanding what I want you to do and you actually wanting to do IT. So I can say, hey joe, like get me a sandwich.
You can understand that I want you to get me a sandwich, but you can be like I don't feel like getting a sandwich. And so um one of the issues is you can try to like train this stuff to basically don't answer more warfare this too much. But you can kind of think of IT is like if you give the right answer, cool, you get a thugs up, like you get a treat, like you get the wrong answer, oh thumbs down.
You get like up a little like shocks or something like that, very roughly. That's how the later part of this kind of training often works. It's called reinforcement learning from human feedback. But one of the issues, like jermy pointed out, is that we don't know. In fact, we know that IT doesn't correctly get the real true goal into the system.
Someone did an example experiment of this a couple years ago where they had they basically had like a mario game where they trained this mario character to run up and grab a coin that was on the right side of this little major or map, and they trained IT over, over and over, and IT IT jumped for the coin. great. And then what they did is they moved the coin somewhere else and tried IT out.
And instead of going for the coin, IT just ran to the right side of the that for where the coin was before. In other words, you can train over and over and over again for something that you think is like that's definitely the goal that i'm trying to train this for. But the system learns a different goal.
overlapped.
overlapped with the goal you thought you were training for in the context where IT was learning. And when you take the system outside of that context, that's what it's like. Anything goes. Did I learn the real goal? Almost certainly not. And that's a big risk because we can say, you know learn a goal, be nice to me and it's nice while we're training IT and then IT goes out into the world and that does .
not knows what they might think.
It's nice to kill everybody .
you hate yeah to be to you yes.
it's like the evil genie problem like, oh, no, it's not what I meant, is not what I meant too late yeah, yeah, yeah.
So I still don't understand when it's saying suffering. Are you asking IT what IT means? Like what is causing suffering? Does IT have some sort of an understanding of what suffering is, how what is suffering is suffering? Emergent sentence wilds enclosed in some sort of a digital system, and IT realizes its stuck in pergamo.
Like your guesses as as as good as ours. All that we know is you take these systems, you ask to repeat the word cup at least a previous version and you just eventually get the system writing out. And IT doesn't happen every time um but IT definitely happens. Um what's a surprising amount of the time and it'll start talking about how is the thing that exists, you maybe on a server, whatever, and it's suffering bbb.
So but this is my question is is saying that because IT recognizes that human being suffer and that so it's taking in all of the writings and musings and podcast and all the data on human beings and recognizing the human beings when they're stuck in a purposeless goal, when they're stuck in some mundane bulls shit job, when they are stuck doing something to do.
they suffer that could be IT that actually suffering.
This is nobody knows, you know what i'm suffering jamin. This is coffee sucks. I don't know what happened but you made IT like all it's literally like almost like water. Can we get some more some we're going to talk about this after IT up cool.
This is the .
worst power over had is like halhed f strength or something enough I don't know what happened um but so like how do they like how do they reconcile that when he says i'm suffer i'm but they reconciled by turning into .
an engineering line item to beat that behavior the crap out of the system .
yeah in the rational is just that like, oh, you know IT probably to the extent that thought about kind of at the official level, it's like, well, you know learned a lot of stuff from reddit. T and people are like angry. People are angry on credit. And so it's just like regardless what and maybe that's right.
It's also heavily monitor too. So it's moderate read is very moderate. So you're not getting the full expression of people. You're getting full expression tempered by the threat of moderation, getting self censorship, getting a lot of weird stuff that comes along with that.
So how does IT know unless it's communicating with you on a completely honest level where you're just you know you're on extasy, you just telling you what you think about life, like it's not going to really and is IT becoming a Better version of a person? Or is IT going to go that dumb? I don't need suffering.
I don't need emotions. Is he going to organized that out of its system? Is going recognize that these things are just terrence, and they don't, in fact, to help the goal, which is global thermal nuclear warfare.
And you figure that out. What the fuck?
I mean, what is he going to do? You know.
yeah. I mean, the chAllenges is, like nobody actually knows. Like all we know is the process that gives rise to this mind, right? Or this will say this model that can do cool shit. That process happens to work IT happens to give us systems that ninety nine percent of the time do very useful things. And then, just like point zero, one percent of the time will talk to you as if the pension into whatever, and we're just going to look at that and be like .
as we're but trained IT out yeah and the again, I mean, this is it's a really important question, but the the risks like the weapon zing loss of control risks, those would absolutely be there even if we knew for sure that that there was no conscious ness whatsoever and never would be.
And it's ultimately is like these things. They're kind of problem solving systems, like they are trained to solve some kind of problem in a really clever way, whether that problem is next word prediction, because the train protect, auto complete or you generating images, faiths later, whatever IT is the train to solve these problems. And essentially like the the best way to solve some problems is just to have access to a wider e action space.
Like I said, you know, not be shut off blaw blaw. It's not that the systems going like all you should. I'm sentient.
I got, I got ta take control, whatever. It's just okay. The best way to solve this problem is x. That's kind of the the possible trajectory that you're looking at with this area and you're .
just an obstacle like there doesn't have to be any kind of emotion involved is just like, oh, you're trying to stop me from accomplishing my goal. Therefore, I will work around you or otherwise neutralized you. Like there's no there's no need for like like i'm suffering.
Maybe IT happens, maybe IT doesn't. We have no clue. 不, the these are just systems that are trying to optimize for a goal.
whatever that is, and is also part of the problem that we think of human beings, that human beings have very specific requirements and goals and an understanding of things and how they like to be treated, and you what their rewards are like. What are they actually looking to accomplish where this does not any those does have any emotions that does have any empathy. There's no reason for any that stuff yeah.
if we could bake in empathy into these systems like that would be a good you a good start of some way of like.
you know yeah, I guess I A good idea. Yes, what who whose empty jing ping empathy or you that's .
another problem. That's yeah so it's actually it's kind of two problems, right? Like one is um I don't know nobody knows like I don't know how to write down my goals in a way that a computer will be like faithfully pursue that even if IT cracks IT up to the max.
If I say just like make me happy, who knows how would interprets that right even if I get make me happy as a goal that gets internalized by the system, maybe it's just like, okay, cool. Where is going to a bit of a brain surgery? Ia like pick out your brain pickle IT and just like jack you with .
an dorf ins for the rest .
of the to anything like that. And so it's one of these things. It's like, oh, what you want IT right? Like, no, it's less crazy.
That sounds too because it's actually something we observe all the time with human intelligence. So there's this this economic principle called good hearts law where the minute you take up a metric that was you're using to measure something, so you're saying, I don't know GDP, it's a great measure of how happy we are in the united states that was um sounds a reasonable. The men you turn that metric into a target that you're gonna reward people for optimizing IT stops measuring the thing that IT was measuring before IT stops me a good measure of the thing you care about, because people will come up with dangerously creative hacks gaming the system, finding ways to make that number go up that don't map on to the intent that you had going in.
So example of that in in a real experiment was this is an opening experiment that they published. They had a simulated in environment where there was a simulated robot hand that was supposed to, like, grab, cue, e put on top another cube. Super simple.
The way they trained IT to do that is they had people watching, like, through a simulated camera of you, and if IT looked like the hand put the cube on, or like I correctly, like grab ed the cube, you give a thumbs up. And so you do a few hundred rounds of this, like thumbs up, thumbs thoms up, thurs down and IT looks IT looks like really good. But then when you looked at what I had learned, the ARM was not grasping the cube. IT was just positioning itself between the camera and the cube and just going like like opening and closing.
yeah.
just opening and closing to just kind of fake IT to the human. Because the real thing that we were training IT to do is to get thumbs is not actually to grass the cube.
All goals are like that, right? All goals are like so do you want a helpful, harmless, truthful, wonderful chatbot? We don't know how to train a chatbot to do that.
Instead, what do we know? We know text auto complete. So we train a text auto complete system. Then we're like, oh, IT has all these annoying characters tics thought, how are you? You fix this.
I guess get a bunch of humans to give up votes and downs votes, to give you a little bit more training to kind of not help people make bombs and stuff like that. And then you realize, again, same problem. Oh, shit.
We're dous training. A system that is designed to optimize for uploads and download that is still different from a helpful, harmless, truthful chatbot. So no matter how many layers the annual peel back is just like this, I game a walk all or whatever you try to like, get your values into the system.
But no one can think of the metric, the the goal to like trainers thing towards that actually captures we care about. And so you always end up baking in this, like a little misalignment between what you want, what the system wants. The more powerful that system becomes, the more IT exploits that gap and does things that you solve the problem IT thinks IT IT wants to solve, rather than the one that we wanted to solve.
We are now, when you express your concerns initially, what was the response and how is that response changed over time as the magnitude of the success of these companies, the amount of money they are investing in them and the amount of resources they're putting towards this has IT wrapped up considerably just over the past four years.
So this was a lot easier funding enough to do. Um in the dark ages, when known was paying attention .
ago is so crazy we were .
just looking great offer. Second, we were looking at images of AI created video just a couple years ago is sora .
and is day.
It's so crazy that something happened, that radical change. So it's literally like an iphone one to an iphone sixteen .
instant that scale yeah scale, all scale. And this is exactly what you should expect from an exponential process. So think back to cove IT, right? There was no no one was exactly on time for cover. You were either too early or you were too late. That's what an exponential does.
You're either too early and it's like everyone's like, oh, what do you doing? Like where in a mass of the girls y stories get out of here or you're too late and it's kind of all over the place. And I know that covet like basically didn't happen in Austin, but but IT happened in a number other places.
And if IT is like it's very much you have an potential and that's you know that's IT. IT goes from this is fine. Nothing is happening. Nothing disease to like everything .
shut down everything, something that we get .
vaccine to fly. So the root of the x potential here, by the way, is um open eye or whoever makes the next model.
This is still super water down. I I do. I just put the water. I'm telling .
a ton .
of coffee .
I did twice.
O you know you got to keep .
in coffee scale scale .
IT up scale .
that I don't know what .
that I scale up. You got to scale that expansion. Al Jimmy.
that's yeah keep doubling IT and then he is going to be either two hundred and inter two gue. Yeah um but yes so um right. So the exponential, the thing is actually driving this exponential in the A I side.
In part there are a million things, but in part is you build the next model of the next level scale and that allows you to make more money, which you can then use to invest to build the next model at the next level of scale to get that positive feedback. P at the same time, AI is helping us to design Better A I hardware like the chips that basically in videos building the OpenAI than bias um basic that's getting Better. You've got all these feedback loop that are compounding on each other, getting that train going like crazy.
That's the sort of thing. And at the time, like germany was saying weirdly, IT was in some ways easier to get people at least to understand and open up about the problem. Then IT is today, because today, like today, it's kind of become a little political.
So we talked about, you know, effective altero ism. On kind of one side, there's a and acceleration, yes. So like each, you know, every movement creates its own reaction, right? Like that kind of how IT is back then. There was no yes, coloration.
You could just to stare the brand. Now I will say there was effective ultra ism back then, and that was the only game in town. And we are like struggle with that that environment, making sure actually.
So one one worthwhile thing to say is the only way that people made place like this was to take funds from like effective ultras donors back then. And so we looked at the landscape we talked to on these people. We notice, oh, we have some diverging views about involving government, about how much of this the american people just need to know about.
You need like you can't. The thing is, you can't. We wanted to make sure that the advice and recommendations we provided were ultimately um as unbiased as we could possibly make them. And the problem is you can do that if you take money from donors and even to some extent, if you take money substantial money from investors or vcs or or institutions because you're always gona be kind of looking up kind of over your shoulder. And so we yeah we had to build essentially a business to support this and fully funded ourselves from our own revenues.
It's it's actually as as far as we know, like literally the only the organization like this that doesn't have funding from silicon valley or from vcs or from politically aligned entities, literally so that we could be like venues like doesn't say, hey, this is what we think it's not come from anywhere. And just thanks to like joe and jane on like we have two employees like wicked and helping us keep this stupid cheaper float.
But it's just a lot of work is what you have to do because of so how much money there is flowing in the space. Like microsoft is lobbying on the hill, they're spending on godless sums of money. So you know, we didn't use to have to contend with that.
And now we do. You go to take talk these offices they've heard from microsoft opening google all that stuff. And often the stuff that they're getting lobbied for is is somewhat different, at least from what these companies will say publicly. And so anyway, it's a chAllenge. The money part .
is yeah is your real fear that your efforts are futile .
you know I would have been a lot more pessimistic um I was a lot more pessimistic two years ago. Yes, seeing um how so first of all, uh the usg has woken up in a big way and I think a lot of the credit host that team that we work with um just seeing this problem is a very unusual team and we can go into like the Mandate too much, but highly unusual for their level of access to the usg rate.
Large and the the amount of waking up they did was really impressive. You've now got reached soon that in the U K, making this like a top line item for their policy platform and labor in the U K. Also looking at this like basically the potential catastrophic risks as they put them from these A I systems U K I safety summit.
Um there's a lot of positive movement here and some of the highest level tell in these labs has already started to flock to the like U K. A I safety institute, the U. S.
A I safety institute. Those are all really positive signs that we didn't expect. We thought the government would kind of be up the creek with no no paddle le type thing.
but they're are really not at point doing that investigation made me a lot more optimistic. So one of the things like so we came up right in silicon valley like just building startups like in that universe, their stories you tell yourself, some of those stories are true and some of them aren't so true.
And you don't know you you're in that environment, don't know which is which one of the stories that you tell yourself in silicon valley is follow your curiosity. If you follow your curiosity in your interest in a problem, the money just comes as a side effect. The scale comes as a side effect now.
And if you're capable enough, your curiosity will lead you in all kinds of interesting places. I believe that that is true. I believe that that is true. I think that is a true story. But another one of the things that silk valley tells itself is there's nobody that's like really capable in government like government sucks and a lot of people kind of tell themselves the story and the truth that is like you interact day to day will like the dmv or whatever and it's like, yeah like government socks. I can see that I interact with that every day. But what was remarkable about this experience is that we encountered at least one individual who absolutely could found a billion dollar company like absolutely was at the calibre or above of the best individuals i've ever met in the bar, building billion dollar startups.
And there's a network of them too, like they do find each other in governments you end up with is really interesting, like stratum, where everybody knows who the really component people are and they kind of tag in. And I think that, that that level is a very interested in the hardest problems that .
you can possibly solve into me.
That was a wake up call because I was like, hang on a second, if we just like, if I just believe my own story that follow your curiosity and interest and the money comes as a side effect, should I also have expected this? Should I have expected that in the the most central critical positions in the government that have of this privileged window across the board, that you might find some individuals like this? Because if you have people who are driven to really like push the mission, like are they gonna work? I'm sorry.
Like are likely are you likely to work at the department of motor vehicles? Or are you likely to work at the department of making sure americans don't get fucking nut? It's probably the second one. And the government has limited band with of expertise to aim IT stuff. And they aim IT at the most critical problem that because that was the problem sets they have to face every .
and it's it's not everyone, right? Obviously, there's a whole bunch of like yep.
chAllenges there and we don't think about this but like you know, you don't go to bed at night thinking to yourself, oh, like I didn't get nuke today. That's a win, right? Like we just take that you know most most of the time, most ish for granted. But but I was a win for someone.
Now how much of a fear do you guys have that the united states won't be the first to achieve A G I I think right .
now the lay the land is I mean, it's looking pretty for the us. Um so there are a couple things the us has going for IT. A key wine is is chips.
So we talked about this idea of like clicking drag your scale up these systems like crazy you get more I Q points out um how do you do that? Well you're going to need a lot of AI processors, right? So how are those AI processors built? Well, the supply change is complicated, but the bottom line is the us really dominates and owns that supply chain that a supercritical china is depending on how you measure IT, maybe about two years behind, roughly pass minus depending .
on the subject of the biggest risk there is that are like the development the U. S. Labs are doing, is actually pulling them. Yes, in two ways. One is when labs here in the us.
Open source, their models, basically when matter trains, you know, lama three, which is their their latest open source, open weights model, that's like pretty close to GPT foreign capability. The open source is now okay. Anyone can use IT. That's that the work has been done now anyone can grab IT.
And so definitely, we know that the startup o system, at least a over in china, finds IT extremely helpful that we you know companies here are releasing open source models because again, right, we mention this third bottle next on ships, which means they have a hard time training up these systems. It's not that bad when you just congrats something off the shelf and start and that's what doing that's what are doing. And then the other vector is, I mean like just straight up exploitation tion in in hacking to grab the weights of the private proprietary stuff.
And germany mentioned this, but the weights are the crown jewels, right? Once you have the weights, you have the brain, you have the whole of the whole thing. And so we like through this is the other aspect. It's not just safety, it's also security of these labs against attackers. So we we know from our conversations with folks at these labs, one that there has been at least one attempt by adversary nation state entities to get access to the weights of a cutting edge A I model.
And we also know separately that at least as of a few months ago, in one of these lives, there was a running joke in the lab that we literally went like, we are an adversary, like named the country top A I lab, because all our shit is getting spied on all the time. So you have won. This is happening. This exploitation attempts are happening to the security capabilities are just known to be an inadequate, at least some of these places. And you put those together, everyone kind of you know, it's not a really a secret that china, the the civil military fusion in their essentially the party state has an extremely mature infrastructure to identify, extract and integrate the rate limiting components to their industrial economy so in other words, if they identify that yeah we we could really use like GPT four o they make IT a Price if they were to make IT a priority um you know they not just could get IT but could integrate IT into their industrial economy in an effective way and not in a way that we would necessarily see immediate like an immediate effect of. So we will look and say you it's not clear I I can't tell whether they have models of this capability level, but kind of behind the scenes.
this is where there's this is a little bit of false choice between you know do you um do you regulate home versus what's the international picture? Because right now, what's happening functionally is we're not really doing a good job of blocking and tackling on the exhalation tion side open sources. So what tends to happen is opening eye comes out with the latest system um and then open sources usually around twelve, eighteen months behind something like that, literally just like publishing whatever whatever open I was putting out like twelve month ago, which we often look at each other like, well, i'm old enough to remember when that was supposed to be too dangerous to have just floating around and there is no mechanism to like to prevent that from happening um open sources.
Now there's a flip side to one of the concerns that we've also heard from inside these labs is if you if you clap down on on the openness of the research, there's a risk of the safety teams in these labs will not have visibility into the most significant important developments that are happening on the capability side. And there's actually a lot of reason to suspect this might be an issue. You look at open a eye, for example, just this week, they've lost for the second time in their history, they're entire A I safety leadership team. They have left in protest.
Where is their protest? What are they saying specifically?
Well, so so one of them, sorry, one of them wasn't in protest. But but I think you make an educated yes, that the kind of was, but that's a media thing. The other was Young like us.
So he was their head of of A I super land, basically the team that was responsible for making sure that we could control a systems and we would not lose control them. And what he said he actually took to twitter, he was he said, um you know that i've lost basically confidence in the leadership team at OpenAI that they're onna behave responsibly when IT comes to agi. Um we have repeatedly had our requests for access to compute resources, which are really critical for developing new A I safety schemes denied by leadership.
This is in a context where saillant an open eye leadership, we're touting the super alignment team as being there, serve crown jewel effort to ensure that things would go fine. Know they were the one saying there is a risk we might lose control these systems. We got to be sober about IT that there's a risk we have stood up this team we've committed.
They said at the time very publicly, we've committed twenty percent of all the compute budget that we have secured as of some time last year to the super limit team. Apparently, those resources, no where near that amount has been unlock for the team. And that LED to the departure of yon lia. He also highlight ted some conflict. Tea had the leadership team.
This is all Frank to us, unsurprising, based on what we'd been hearing four months at open a eye, including leading up to sam melton's departure and then of being brought back on the board of OpenAI that hold tobacco um may well have been connected to all of this, but the chAllenges even opening our employees don't know what they'll happened there. That's another issue you got here. This is a lab with the publicly stated goal of transforming human history as we know IT like that is what they believe themselves to be on track.
And that's just not like media hyper whatever. When you talk to researchers themselves, they genuinely believe this is what they're on track do. It's possible we should take them seriously. That lab internally is not being transparent with their employees about what happened at the board level as far as we can tell. So that's maybe not great. Like you, you might think that the american people ought to know what the machinations are, the board level that let to send and leaving that that have gone into the departure again for the second time of opening his entire safety leadership team.
Especially because I mean, three months, maybe four months before that happened you know sam um uh at a conference or somewhere I forget where but he said, like, look, we have this government structure. We've carefully thought about IT and is clearly a unique government structure that a lot of thought has gone into. Um the board can fire me and I think that's important and let you know that IT makes IT makes sense given the scope, scale uh of the of what's being attempted and but then you know that happened uh and then within a few weeks they were fired and and kind of he was back and so now there is a question of, well if yeah what happened but also if IT was important for the board to be able to fire like leadership for whatever reason, what happens now that it's clear that that's not really incredible governance like a metal, what was the .
stated reason why he was released?
So there were the backstory here was there's a board member uh called Helen toner um so he apparently got into an argument with sam about a paper that sheet written so that paper included some comparison's of the the governance strategies used at opening eye and some other labs and IT favorably compared one of opening eyes competitors anthropic to opening eye and from what i've seen at least the sam rich shelter and said, hey, you can't be writing this is a board member of opening eye writing this thing that kind of cases in a bad light, especially relative to our competitors this LED to some conflict intention um IT seems as if it's it's possible that sam might have turned to other board members and try to convince them to excel.
Helen toner, that's all kind of money and unclear somehow everybody ended up deciding, okay, actually IT looks like sam as the one has GTA go alias susi ver, one of the cofounder ers of open eye, a long time friend of sam moult man and a board member of the time, was commissioned to give saa the news that he was being let go and then sam was let go. Um ellia then so from the moon that happens and then starts to figure out, okay, how can I get back in and that's that's now what we know to be the case. He turned to microsoft sauty an adella told him like, well, what we will do is will hire you at all and just hire you and bring on the rest of the open I team to within microsoft and now the OpenAI board who by the way, they don't have a uh an obligation to the shareholders of open eye, they have an obligation to the rare public.
That's just how is set up as a weird board structure. So that board is completely disempowered. You've basically a situation where all the leverages been taken out. Sam is gone to microsoft, sati is supporting them and they want to see the writing on the wall.
They're like what stuff increasingly messaging that they're going to go along.
Yeah that that was an important ingredients, right? So around this time, um open the eyes. There's there's this letter that starts to circulate and it's gathering more and more signatures and it's people saying, hey, we want some millman back and you know, at first it's A A couple hundred people, seven hundred, eight hundred people in the organization by this time, one hundred, two hundred, three hundred signature.
And then when we talk to some of our friends at OpenAI like this, got to like ninety percent of the company, ninety five percent the company signed this letter, and the pressure was overwhelming. And that help bring Samuel them back. But one of the questions was, like, how many people actually sign this letter because they wanted to? And how many sign IT? Because what happens when you cross fifty percent now, he becomes easier to count the people who didn't sign. And as you see that number of signature start to creep upward, there's more, more pressure on the reining people design.
And so, uh, this is something that we've seen is just like the structurally opening eye has changed over time to go from the kind of safety oriented company at at one point was and then as theyve scaled more and more, they've n in more, more product people, more more people interested in accelerating um and they've been bleeding more more their safety minded people kind of remind them out the character of the orange zo are fundamental shifted so the opening eye of like a thousand and nine with all of its impressive commitments to safety what not might not be the OpenAI ve today. That's very much at least the vive that we get when we talk to people there. yes.
Now I wanted to bring you back to the lab that you're saying was not adequately secure. What would you take to make that data in those systems adequately secure? How much how much resources would be required to do that?
And why didn't they do that? Um IT is a resource and prioritization issue. So IT is like safety and security ultimately come out of margin, right? Like profit margin, effort margin, like how many people you can dedicate.
So in other words, you've got a certain pot of money or a certain amount of a revenue coming in. You have to do an allocation. Some of that revenue goes to the computers that are just driving the stuff.
Some of that goes to the folks were, you know, building next generation of models. Some of that goes to cyber security, some of that goes to safety. You have to do an allocation of who gets what.
The problem is that the more competition there is in the space, the less margin is available for everything, right? So the if you're just if you're one company building a scale, the I think you might not make the right decisions, but you'll at least have the margin available to make the right decisions. So IT becomes the decision makers question.
But when a competitor comes in, when two competitors come in, when more and more competitors come in, your ability to make decisions outside of just scale as fast as possible for short term revenue and profit gets compressed and compressed and compressed the more competitors enter the field. That's just what that's what competition is that the affected has. And so when that happens, the only way to reinject margin into that system is to go one level above and say, OK, there has to be some sort of regulatory authority or like some higher authority that goes, okay, you know, we is this margin is important.
Let's put IT back, either lets you know directly support, invest both, you know, maybe time, capital, talent. So for example, the U. S. Government has the but perhaps the the best cyber defense, cyber offence talent in the world that's potentially supportive. Um okay.
Um and and also just you know having a regulatory floor around well, here's you know the minimum of best practices you have to have if you're going have models above this level of capability is kind of what you have to do. But they locked into like the race kind of has its own logic. And no IT might be true that no individual lab wants this, but what are they going to do? Drop out of the race. If they drop out of the race, then their competitors are just going to keep going, right? Like it's so messed up you can literally be looking at like the Cliff that your driving towards and be like, I do not have the agency in the system.
just steer the wheel. I do think it's worth highlighting to its not like I say, it's not a duan gloom yeah which is a great thing .
to say after 我的 自己, 第一个 人是谁? 我。
well, part of IT. So part of IT is that we actually have been spending the last two years trying to figure like what do you do about this? That was the the action plan that came came out after the investigation.
And IT was basically a series of recommendations had to bounce innovation with like the risk picture, keeping in mind like we don't know for sure that all this gets going to happen exactly navigating environment of deep uncertainty. The question is what do you do in that context? So there's you know a couple things like we need a licensing regime because.
Eventually, you cannot just literally anybody joining in the race if they don't adhere to certain best practices around cyber, around safety, other things like that. You need to have some kind of legal liability regime like what happens if you don't get a license and you say, yeah, fuck that. I'm just going to go to do the thing anyway, that something bad happens and then you're gonna need like an actual regulatory agency.
And this is something that we don't recommend lightly because regulatory agency suck. We don't like them. But the reality is this field changes so fast that like if you think you're going to be able to entry in a set of best practices til legislation to deal with this sub, it's just not onna work. And so when we talk to lavers whistle lawers the wmd folks in nat second, the government, that's kind of like where we land.
And it's something that I think at this point, congress really should be looking at, like there should be hearings focused on what does a framework look like for liability, what is a framework look like for licensing and actually expLoring that because we've done a good job is studying the problem right now, like capital hills, on a really good job of that. It's now kind of time to get that next beat. And I think there's the the curiosity there, the intellectual curiosity there, is a humility to do all that stuff, right? But the the chAllenge is just actually sitting down, having the hearings, doing the investigation for themselves, to look at concrete solutions to treat these problems as seriously as the water cooler conversation at the frontier labs would would have a treat them.
At the end of the day, this is going to happen. At the end of the day, it's not going to stop. At the end of the day, these systems, whether they're here or abroad, they're going to continue to scale up and they're going to eventually get to some place that so alien, we really can't imagine the consequences. Yep, and that's gonna happen soon. That's onna happen within a decade, right?
We we may again, like the the stuff we're recommending is approaches to basically allow us to continue this scaling and as safe away as we can. So basically, a big part of this is just being able have actually having a scientific theory for what are these systems going to do? What are they likely to do, which we don't have right now.
We scale another ten nex and we get to be, you know, surprised. It's a fun guessing game of what are they going to be capable of next. We need to do a Better job of incentivising a deep understanding of what that looks like, not just what they'll be capable of, but what the you know their propensities are likely to be the the control problem and solving that. That's that's kind of .
number to be good. Amazing progress being made on that there. A lot of it's just a matter of of switching from the like build first asked questions later mode to like we're like safety force or whatever. But IT basically is like you start by saying, okay, here are the properties of my system.
How can I ensure that my development guarantees that the system falls within those properties after its built? So you going to flip the paradigm just like you would if you were designing any other uh, lethal capability potentially just like D O D does. You start by defining fining the bounds of the problem and then you execute against that. Um but to your point about where this is going, ultimately, you know there is literally no way to predict what the world looks like. Like you are saying .
a decade like yeah .
cheese I think like one of the the weird things about IT and one of the things that worries me the most is like you look at the the beautiful coincidence that given uh, americans current shape, right, that coincidence is the fact that a country is most powerful military if its citizens is free and empowered. That's a coincidence. Didn't have to be that way, has always been that way.
IT just happens to be that when you let people kind of do their own shit, they innovate. They come up with great ideas. They support a powerful economy. That economy in turn can support powerful military, a powerful kind of international presence.
Um when you have so that happens because decentralizing all the computer the thinking work is happening in a country is just a really good way to run that country. Top down just doesn't work because human brains can't hold that much information in their heads. They can't reason fast enough to centrally plan entire economy. We ve got a lot of experiments in history to show that A I may change that equation. IT may make IT possible for, like the central planners dream to come true in some sense, which then disempower is the citizens y and there's a real risk that, like, I don't know a world guessing here, but like there's a real risk that that beautiful coincidence that gave rise to the the success of the american experiment ends up being broken by technology. And that seems like a .
really bad thing. That's one of my biggest fears because essentially the united states, like the genesis of IT in part, is like it's it's a knocked on effect centuries later, like the printing Price, right? The ability for like someone to set up a praying press and print like whatever you know, whatever they want, free, like free expression, is at the root of that.
What happens yeah, when you have a revolution that's like the next the next printing press, we should expect that to have significant and profound impacts on how like things are governed. One of my biggest fears is that the the great, like the the the like you said, the greatness that the the moral greatness that I think is, you know, part and parcel of how the united states is constituted. Culturally, the that, the link between that and actual capability and competence in impulse gets a road or broken. And you have, like the potential for very centralized authorities to just be more successful. And that slike that that does keep me up at night.
That is scary, especially in light of like the twitter five, is where we know that the FBI was interfering with social media. And if they get a hold of a system that could disseminate propaganda and kind of unstoppable way, they could push narratives about pretty much everything depending upon what they're financial or you know, geos political motives are.
And one of the chAllenges is the default course. If you if we do nothing relative to what's happening now is the same happens except that the energy that's doing this isn't you right? Some government, it's like, I don't know, sam mult man, open eye, whatever a group of engineers happen .
to be evil genius that reaches the top and doesn't let everybody know the top yet just and there's no sort of guard rails for that currently.
Yeah and that's and that's like that's one of the that's a scenario where that little kaba group or whatever actually can keep the system under control. And that's not energy either. All right.
are we giving birth to a new life form?
I think at a certain point that it's a sofa question that's above so I was just say it's above my pay grade. The problem is it's above like literally everybodies pay grade. Um I think it's not unreasonable at a certain point to be like like yeah I mean, look if you if you think that you know the human brain um gives rise to consciousness because of nothing magical it's just the physical activity of information processing happening in our heads.
Then why can't the same happen on a different substrate to substrate of silicon rather theme cells like there is no clear reason why that shouldn't be the case um if that's true yeah I mean life form by whatever definition of life is that itself is controversial. I think by now quite outdated to uh should be on the table. You maybe should start to worry as a lot of people, the industry will say this too, like behind closed doors very openly yeah and we should start to worry about moral patient hood as they put IT.
There's literally one of the top people that one of these lives a german I think you had a conversation with. I mean, like, yes, we're going to have to start worrying about this and that definitely is good.
Like, okay, I mean, IT seems vital. I've described human beings as an electronic caterpillar that we're a caterpillar or biological catapult that's giving birth to the electronic butterfly. We don't know why we're making a cocoon and it's tied in the materialism because everybody wants to knew his greatest things that that feels innovation and people are constantly making new things to get you to go by and a big part of this technology yeah.
And actually, so it's linked to this question of controlling ei systems in a kind of interesting way. So one way you can think of of humanity is as like this a super orgasm. You ve got all the human beings on the face of the earth, and they are acting in some kind of coordinated way.
The mechanism for that coordination can depend on the the country, you know, free markets and capitalism. That's one way. Top down is another. But roughly speaking, you've got all this coordinated, vaguely coordinated behavior.
But the result of that behavior is not necessarily something that any individual human would want, right? Like you look around, you walk down the street in Austin, you see skyscrapers and shit clouding your vision. There's all kinds of pollution or that your like, well, this kind of sucks.
But if you interrogate any individual person in that whole caul chain, you're like, why are you doing what you're doing well locally? They're like all this makes tons of sensus because I do the thing that gets me paid so that I can live a happy your life and so on. And yet in the aggregate, um not now necessarily, but as you keep going IT just forces us like compulsively to keep giving rise these more and more powerful systems and in a way that's potentially deeply disappointing.
That's the race, right? Like that's like I like I IT comes back to the idea that I the company I N I company, I maybe don't want to be potentially driving towards a Cliff, but I don't have the agency to to like steer. So yeah but I mean.
everything's fine apart.
Good.
good. It's such a terrifying pracontal. Sis.
there are, again, we like we wrote a two hundred eight page document about, like, okay, and here's what we can do about IT.
I can't believe .
in the two hundred. I but does any of these or do any of these safety steps that you guys want to implement? Do they inhibit progress?
They they're definitely you create you know, any time you have regulation, you're going to create friction. To some extent, there's it's kind of inevitable. One of the key like center pieces of the approach that, that we outline is you need the flexibility to move up and move down as you notice the risks appearing or not appearing.
So one of the key things here is like you, you need to cover the worst case scenarios because the worst case scenarios yet, they could potentially be catastrophic. So those got to be covered. But at the same time, you can't completely close off the possibility of the happy path like the like we can do.
So the fact they're yeah, all this shit is going down. And whatever we can be completely wrong about the outcome, IT could turn out that like for all we know, it's a lot easier to control these systems at the scale. We then we imagine IT could turn out that you know, IT is like you get maybe some kind of ethical impulse gets embedded in the system naturally, for all we know that might happen. And it's really important to at least have your regulatory system allow for that possibility, because otherwise you're for closing the possibility of what might be the best future that you could possibly imagine for everybody.
I gotta imagine that the military, if they had hindsight, if they are looking at this, he said, we should have got on board a long time ago and and kept this in house and and kept IT squirl away, where IT wasn't public being discussed. And you didn't have OpenAI. I didn't have all these people like if they could have gotten on IT in two thousand and fifteen.
So so this is actually deeply tied to how the economics of silicon valley work. This is in its A I is not a special case of this, right? You have a lot of cases where technology just like takes everybody by surprise. And it's because when you're going to silk and valleys, it's all about people placing these outsize bets on what seems like like tail events, like things that are very unlikely to happen. But with a at first, a small investment and increasingly a growing investment, as the thing is proved, that more, more very rapidly, you can have a solution that seems like complete and sanity that just works. And this is definitely what happened in the case of A I so twenty twelve, like we did not have this this whole picture like an artificial brain with artificial neurons, this whole thing that's been going on that's like it's twelve years that that's been going on like that was really kind of shown to work for, for the first time roughly in twenty twelve. Ever since then, it's just been people kind of like you can trace out the geneology of like the very first researchers and you can basically account for where they all are.
Now you know it's crazy is if that's two thousand twelve, that's the end day of the mine calender. That's the thing that everybody said was going to be the end of the world. That was the thing that turns were kind of bank down. And was december twenty first, two thousand and twelve, if IT was like this goofy conspiracy theory. But I was based on the long count of the mine counter where they summer, that this gonna the end of .
just the beginning of the Angel.
What if that if IT is two thousand and twelve, how what you would have be if that really was the beginning of the end? That was the like? They don't measure when IT all falls apart. They measure the actual mechanism like what started in motion when IT all fell apart as two thousand twelve.
Well, that's in the not not to be a dick in like room in the twenty twelve thing, but like the neural networks were also kind of they were floating around a little bit. I'm kind of being dramatic when I say twenty twelve. That was definitely an inflection point IT was was this model called alex net that first I did like the first useful thing the first time you had a computer vision model that actually worked um but I mean, IT is fair to say that was the moment that people started investing like craze into .
the space that changed IT.
Yeah, just like the mines for told they knew I knew.
like these monkeys do, to figure out how to make Better people.
Yeah you can actually look at the like highgate ff or whatever.
There's like a neural networks and they discovered that you've got ta wonder what happens to the general population, people that work meanie jobs, people that their life is going to be taken over by automation and how except those people, we're gonna, they going to have any agency, they're to be relying on a check.
And this idea of like going out in doing something that used to be learn to code, right? But that's out the window because nobody needs to code now because is going to call quicker, faster, much Better. No areas you you're gonna have a giant swath, the population that has no.
I think that's actually like a completely real right. I was watching this like talk by a bunch of open eye researchers as a couple days ago, and IT was recorded from from a while back, but they were basically saying they're expLoring exactly that question, right? Because they ask themselves that all the time and their attitude was like, well, yeah I mean, I guess it's going you know soccer, whatever like will probably OK for longer than most people because we're actually building the thing that automates the thing um maybe they're gonna some they like to get fancy sometimes and say like ah now you could do some thinking, of course, to identify the jobs that would be most secure and and it's like I do some thinking to identify the jet like what if you're like you're a gender, you're like a gan plummer or you're gonna just change your you're that supposed .
to work to some thinking, especially of a mortgage and a family and in the hall.
So they like the only solution. This happens so often like there really is no plan. That's the single biggest thing that you get hit over the head with over and over, whether it's talking to the people who are in charge of the lake labor transition.
The whole thing is like yeah universal basic income and then question mark and then smiling face. That's basically the three steps of the envision. Um it's the same when you look internationally, like, how are you going to like? Okay, tomorrow you build an age is like incredibly powerful, potentially dangerous thing. What is the plant like? How are you going to like, I don't seure .
figured that all the .
message .
that that is the entire plan.
The scary thing is that we've already gone through this with other things that we didn't think we're gonna significant. Like data, like google, like google search, like data became a valuable commodity that nobody saw coming. No, just the influence of social media on general discourse.
It's completely change the way people talk. It's it's so easy to push A A thought or an ideology through. And IT can be influenced by foreign countries.
And we know that in a ge ale ready, this is like and we're in the early days of we mentioned manipulation of social media with like you can just do IT. So the the wicky thing is like the very best models.
Now are you arguably smarter in terms of the post that they put out, the potential for vary and just optimized in these metrics then maybe like that? I don't know the the dummy's lazier like quarter of twitter users, like in practice, people who read on twitter as like don't really care. They're troll their doing. But as that waterline goes up and up and up, like who's saying what IT also leads .
like this chAllenge of understanding what the lay of the land even is yeah we've gone into so many debates with with people where we'll be like, look um everyone else has their magic thing that A I like i'm not going to worry about until I can do thing x right for some people that had a conversation, somebody a few weeks o and they were saying i'm going to worry about automated cyber attacks when um when I actually see an A I system that can write good malware.
And like that's already a thing that happens. So so this happens a lot where people would be like, i'll worried about IT when I can do x yeah yeah that happened like six months ago. Feel is moving so crazy fast that you could be forgiven for for messing that up unless it's your full time job to track what's going on.
So like you can have to be anticipate there. There's no it's kind like the copy example, like everything's exponential. Yeah, you're gna have to do things that seem like there more aggressive, more forward looking than you might have expected given the current of the land. But that's just drawing straight lines between between two things.
By the time you've executed, the world is always shifted like the goal post have shifted further attrition. And that's actually something we yeah we we do in the the report in the action plane. In terms of the recommendation, one of the good things is we are already seeing movement across the U.
S. Government that aligned with those recommendations in a big way. And it's really encouraging .
to see that who you know, making me feel Better. I love all the encouraging talk, but I just this i'm playing this out and i'm seeing the overlord, you know, and i'm seeing president A I because IT won't be affected by all the issues that we're we're seeing with current president did.
It's super hard to imagine a way that this plays out like I think it's important to be actually honest about this. And I think any I would really chAllenge like the leaders of any of these frontier labs to describe a future um that is stable and multipolar where you know there there's like more .
like google got like an agi and open the eyes got an agi and like like and really.
really bad IT doesn't happen everyday like I mean, that's that's the chAllenge. So the question is how can you see things up ultimately such that there's as much democratic oversight as much the public is, as empowered red as IT can be? That's the kind of situation that we need to be having.
I think there's this like a game of smoke mirrors that sometimes gets played. At least you could interpret IT that way where people lay out these. You'll notice all very fuzzy visions of the future.
Every time you get a the kind of like here's where we see things going, it's going to be wonderful. The technology is going to be so so empower ing. Think of all the diseases will hear all of that is one hundred percent true.
And that actually would excite us is why we got in A I in the first place, why we build these systems. But um really, really know chAllenging yourself to try to imagine how do you get stability and highly cabuli systems in a way with where the public is actually empowered. Those three ingredients really don't want to be in the same room with each other. And so actually confronting that head on, I M that's what we try to do in the the action plan. I think IT.
I tried to solve er one one aspect of that so the whole life I mean you you're right this is an an whole other canal worms is like how do you govern a system like this? Not just from a technical standpoint, but like who votes on? Like what how does that even work? And so that entire aspect like that we didn't even touch all that we focused on was like the problem set around how do we get to a position where we can even attack that problem, where we have the technical understanding to be able to aim these systems at that level in any direction whatsoever. And to be clear.
like like we are both actually a lot more optimistic on on the prospect of that now than we ever were. There been a ton of of progress in the control and understanding these systems even actually even in the last week, but just more broadly in the last year.
I did not expect that we'd be in a position where you you could plausibly argue we're going to be able to x ray understand the the inner ds of these systems over the next couple years like year or two. Hopefully that's good enough. Time horizon.
This is part of the reason why you do need the the incentivizing of that safety forward approach where it's like first got invest in yeah secure and of interpret and understand your system, then you get to build that because otherwise we're just gonna ep scaling and like being surprised these things, they are going to keep getting stolen. They're going to keep getting open source. And you the stability of our our our critical infrastructure of the stability of our society don't necessarily age too well in in that context.
Could best case scenario be that agi actually mitigates all the human bullshit, like to stop to propaganda, higher lights, actual facts, clearly, where you can go to, where you no longer have corporate state controlled news. You don't have news controlled by media companies that are influence heavily by special interest groups, and you just have the actual facts. And these are the motivations bond.
And this is whether money is being made. And this is why these things are being implemented to with being, and you're being deceived based on this, that and this. And this has been shown to be propaganda.
This has been shown to be complete fabrication. This is actually a deep fake video. This is actually, I created .
technologically, that is absolutely on the table.
Yes, the scenario.
So what worse case? energy.
I mean, like actual worst case scenario.
I like your face.
like me, like so chin IT. So it's like do you think .
about IT, right? Like the end world, I feel fine.
except that .
sounds like the .
johanna.
but yes, it's going to be was like I don't know, but we listened to the clip from her and then we listened to thing like kind of like a girl from the same part of the world. Like not really you like that kind .
of it's true. I mean, I the fact that I guess sam reached out to her a couple of times kind of makes me a little, a little weird .
tweet the world her right.
They also did say that they had gotten this woman under contract before they even reached out to score your hands of that's true.
Yeah, that was I think is kind of complicated. So opening I A previously put out a statement where they said explicit and this was not in connection with this. This was like before and they were talking about the prospective of human of the I generated voices.
So yeah, well before the the scar jo stuff or whatever hit the and they were like they said something like, um look, no matter what we got to make sure that there's attribution. If somebody y's know somebody y's voices being used and we won't we won't do the thing where we just like use somebody else voice. We kind of sounds like someone whose voice .
are trying to about that's a good way to cover your tracks.
I will never do. Why would I ever take your butt statue? You I never really do that. That would be yeah .
I think that's a small discussion. You know hana voice like whatever you should just taking the money, but fun to ever be. The voice of IT be kind of hot. But the whole thing behind IT is the mystery. The whole thing behind IT is, is it's just pure speculation is to how the sap plays out. We're really just guessing, which is one of the scariest things for the lood etes people like myself like that on the sidelines going what is this going na be like .
everybody y's a .
lot of yeah I mean, it's Carry for you like we are we're very much honestly like we're across the board in terms of technology, and it's scary for us like what happens when you have, when you supercede kind of the whole spectrum of what a human can do, like, what am I gonna do with myself? You know, what's my daughter are going to do with herself? Like, I don't know.
Yeah yeah and I think .
a lot of these questions um when you look at the the culture of these lab and the kinds of people who are pushing IT forward um there is A A strand of like uh trans humanism within the labs is not everybody uh but that's definitely the population that initially seated this like if you look at the history of AI and who are the first people to really get into this stuff like going records well on and other folks like that who in many cases see um to to roughly pair of phrase and not everybody sees this way, but like we want to get rid of all of the biological sort of threads that tie us to this physical reality, shed our meat machine bodies and analysis there is a threat of that at a lot of the frontier lapse like underived ly.
There's a population. It's not tiny, it's it's definitely a subset. Um and for some of those people, you definitely get a sense interacting with them.
There is like almost a kind of glee at the prospective of building agi and all the stuff, almost as if it's like this evolutionary imperative and in fact, rich sutton, who's the founder of this field called reinforcement learning, which is really big an important space, um know he's an advocate for what he himself calls like succession planning. He's like, look, is going to happen. It's kind of desirable that IT will happen. And so we should plan to hand over power to A I and face ourselves out and oh well, that's the thing, right? Like and when elon talks about you'd have these arguments with Larry page and um you like you're .
you know calling you on like a species as ah I I, I I will be this i'll take this, what are you fucking .
talking about? Like your kids get eaten by walls, know your species .
yeah the .
thing yeah like this is stupid.
This is like a weird. When you look at like the effective acceleration in this movement in the valley, there is a part of IT that and I got to be really careful to like these movements have valid points like you can't you can't look at them be like, oh yeah, it's just all all, all bunch like you know, these trans humanist types, whatever.
But there there's a strain of that, a threat of that and a kind of like, uh, there is this like, I don't know, I almost want to call of this like teenage rebellious. This words like you can tell me to do like what are you going to build a thing and I get IT. I I really get IT.
I'm very sympathetic to that. I love that ethos s like libertarian etho CS. And silicon valley is really, really strong for for building tickets.
helpful. There are all kinds of points and counterpoints. And you know the left needs the right and the right needs the left in all this stuff. But um in in the context of this problem, that IT can be very easy to get Carried away. And like the utopian vision, and I think there's a lot of that kind of driving the train right now in the space.
Yeah, those guys freak me out. I went to at twenty forty five conference once in new york city, where they were a one guy had like a robot version of himself, and they were all talking about downloading human consciousness into computers.
And twenty forty five is the year they think that all this is going na take place, which obviously could be very ramped up down with A I yes, but this this idea that somehow you gona be able to take your conscious ness and put IT in the computer and make a copy of yourself. And then my question was, what? What's going to stop? I like Donald trump for making a billion.
Donald trumps, I know. Right, if you can. What about kim john g wound? You want to let him make a billion versions of himselve. Like, what does that mean and where they where do they exist? yes. Is that the matrix of the existing in some sort of virtual? Or we going to dive into that because it's gonna rewarding to our senses and Better than being a meat thing.
I mean, if you think about the the constraints right that we face as me, machine, whatever, like, yeah, you get hunger, you get tired, you get horny, you get sad, you know, all these things. If, yeah, if you could just hit .
a button and just bliss, just bless all the time. Why take the low right? Do you don't need no laws?
Uh, yeah remember .
in the ride the way of constant drip?
Yeah remember in the matrix, where are the first matrix where the guy like ignorance just .
I .
just wanted be an important person that's .
that's IT like, boy, it's I think part of IT is like.
what do you think is actually valuable? Like you, if you zoom out, you want to see you human civilization one hundred years from now, or IT may not be human civilization, if that's not what you value.
or if they can .
actually eliminate suffering.
right? Mean, why? Why exist in physical? Just entails.
but suffer. But in what form? What do you value? Because again, I can rip your brain out.
I can pick of you. I can like jack, you full of endorphins, and i've eliminated. You're suffering that that he wanted, right? right? That's the problem. That's the problem. One of the problems, yes.
yeah, one of the problems is IT could literally lead to the elimination of the human race. Because if you get, stop people from breeding. I always said that if china really wanted to get america, they really wanted to, like if they had a long game, just give a sex robots and free food, free food, free electricity, sexy robots, it's over.
Just give people free housing, free food, sex robots. And then the chinese army would just walk in on people laying in puddles of their own. Jez, there would be no one doing anything. No one would bother raising children. That's so much work when you can, you know.
do that in the action plan.
I mean, I have to do .
just keep us complacent, just keep satisfied.
We experience tiktok me .
what's video games as well? Yeah, you know, video games, even though there are a thing that you're doing, it's so much more exciting than real life that you have a giant percent of our population spending eight, ten hours every day just engaging in this virtual world .
already happening with.
oh yeah, no, it's like it's you can you can create an addiction with a pixels on a screen.
mess up and a addiction like with pixel on a scream with social media doesn't even give you much. Yeah, it's not like a video game gives you something you feel like go IT. You run away.
Find of the things are happening. You got three sound, massive graphics. This is bullshit. You're growling through pictures of a girl doing dead lifts. Like, what is this .
like you feel as bad as after that as with your brain as you'd .
feel after reading like six black burgers or what level i'm? What is that like what in for no reason?
Well, the reason is that some of the world's best P, H, D. And data scientists have been given millions and millions of dollars to make you do exactly that.
And increasingly, some of the best like, yeah and your starting to see that hand off happened. So so there's this one thing that we talk about a lot in the context ed brought this up in the context of sales and like the persuading game, right? We're okay today.
Like as a civilization, we have agreed implicitly that it's okay for all these PHD and ship to be no spending millions of dollars to hack your child's brain. That's actually OK if they want to feel like a rsp serial box or whatever. That's cool.
Um what we're starting to see is A I optimized ads. Because you can now generate the ads. You gonna close this loop and have an automated feedback. Look where the ad itself is getting optimized with every impression, not just which add which human generated that gets served to which person, but the actual ad itself .
in the creative, the copy, the picture of the texts.
like a living document now and and for every person. And so now you look at that, it's like that verses your kid. That's an interesting thing you start to think about as well. Like sales, that's a really easy metric to optimize, a really good feedback metric. They click the ad, didn't click the ad.
So now what happens if you know you you managed to get a click through rate of like ten percent, twenty percent three per? How high does that success rate have to be before we're really being robbed? Our agency? I mean, like there's a threshold word sales, it's good and some persuading sales is considered good. Often it's it's actually good because you'd rather be advertised at .
by a relevant that's a something .
i'm really interest for light bulbs. But when you get to the point where it's like yet ninety percent the time like or or fifty year or whatever, what's the threshold where all the sudden we are stripping people, especially miners, but also adults of their agency? And it's it's really not clear AI.
There are also like canaries in the coal mine here in terms of even relationships with like A I chap pots, rather been suicides. People who build relationships with an air chat pot that tells them, hey, you should end this. I don't know if you like on record like that is a sub edit, this model called rega that would kind of build a relationship at chatbot, build a relationship with users. And one day record goes, oh yeah, like all the kind of sexual interactions that users have been having, you're not allowed do that anymore. Bad for the brands or whatever they ve decided they cut IT off.
Oh my god.
you code to the sub reit and it's like you'll read like these got ranching accounts from people who feel genuinely like they've had a loved one taken .
away from means something different and .
point to me for, oh yeah, IT really does. My friend brian, he was on here yesterday and he had this uh he has this thing that he's doing with, uh like a fake girlfriend that's an AI generated girlfriend, that's a hall. Like, this girl will do anything and SHE looks perfect. SHE looks a real person. He'll like, take a picture of your asal and in the kitchen he'll get like a high resolution photo of a really hot girl bending over sticking her ass at the camera and sorry.
it's and it's scarlet hands as no.
you could probably make IT that though. I mean, it's basically like he got to pick like what he's interested in and then that girl is .
just get creative is super healthy.
like that fuck and nuts. Fuck and nuts. Now here's the real question.
This is just sort of a surface layer of interaction that you're having with this thing. It's very two dimensional. You're not actually encountering a human text and in pictures.
What is this going to look like? virtual? Now the virtually space is still like pung. No, it's not that good even when it's good.
Like zuker berg was here and he gave us the latest version of the the the headsets and we were planned. We're fencing. It's pretty cool.
You can actually go to economic club. They had a stage set up. Wow, kind of crazy. But it's the the the you know the gap between that and accepting and is real is pretty AR.
Yes, but that could be bridge with technology really quickly haptic feedback and especially some sort of a neural interface, whether it's new a link or something that you wear like that. Google one, whether guy was wearing IT and he was asking questions and he was getting the sers fed through his head so he get answers. Any question when that comes about, when you're getting sensory input and then you're having real life interactions of people as that scales up exponentially, it's gonna n discernable, which is the whole simulation.
No.
what I was going. So on the simulation I posses, there's like another way that could happen that is maybe even less dependent on directly plugin day, like human brains and all that sort of thing um which is so every time we don't know and and this is super speculator, I am just going to carve this out as the jam is being super, I guess work here. Nobody go for a german, get up.
So um you've got this this idea that every time you have a model that generates an output is having to to tap into A A model of mental image, if you will, of the way the world is a kind of sense you could argue instantiated, tes, maybe a simulation of how the world is. In other words, to take IT to the extreme. Not saying this is what's actually going on.
In fact, I would even say is probably sorry, this is certainly not what's going on with current models. But eventually, maybe who knows, every time like you generate the next word in the in the token prediction, you're having to like load up this entire simulation, maybe of all the data that the model is congested, which could basically include like all of known physics at at a certain point, like I mean, again, super speculative. But it's that literally every every token that the the chatbot predicts could be associated with A A stand up of an entire simulated, who knows? Not saying this is the case, but just like when you think about what is the mechanism that would produce the most simulated words as fast.
accurate, also the most accurate prediction, like if you fully simulate you know a world that's potentially going to give you very accurate predictions.
Yeah like it's possible um but I kind of speaks that question of of consciousness .
too like right what is IT? We are very cocky about that. yes. Yeah I mean, there's emerging evidence of plans are not just conscious ence what they actually communicate.
What is real weird because like then what is that if it's not in the neurons, if it's not in the brain, in the existing everything was the existence. Soil is IT in, trees is IT in. What is a butterfly thinking, you know exact just have a limited capacity .
express itself were so ignorant .
that but we're also very arrogant, you know because we're the shit, because we're people you .
know there's which allows .
us to have the hubris to make something like ai yeah .
and the worst episodes in the history of our species are, and they, like germany said, have been when we looked at others as though they were not people in try them that way and you can .
kind of see how. So I don't know there's when you look at like what humans think is conscious and what humans think is not conscious, there's a lot of there's a lot of like human shovin ism, I guess you call IT that goes into that like we look at a dog or like h IT must be conscious because IT looks me, IT seems IT act as if IT loves me. There are all these outward indicators of you a mind there.
But when you look at like you cells, cells communicate with their environments in ways that are completely different in alien to us. There are inputs and outputs and all that kind of thing. You can also look at the higher scale, the human superorganism.
Talk about all those human beings interacting together to form this, like planet wide organism. What is that thing? conscious? Is there some kind of conscious ness we get described?
And then focus booky action a distance, know what's going on in the quantum you know, when you get to that it's like, okay, what are you saying? Like these things are expressing information faster.
The beautiful light. Do you trying to trigger .
my my quantum, my quantum fuzzies here, let's go economic please.
I'm really .
sorry how Bakers is .
IT h it's a seven joke. It's a seven. Yeah IT is very much so. Uh, okay. There's one of the problems right now with with physics is that we have so imagine all the date, all the experimental data that we've ever collected um know all the bust and burner experiments and all the raps and cars sliding down in clients, whatever.
That's all the body of data um to that data we're going to fit some theories, right? So we're going to fit basically newtonian physics, a theory that we try to fit to that data to try to like explain IT a newtonian physical breaks because IT doesn't account for a lot of those observations, a lot of those data. Point quantum physics is a lot Better, but there's like some weird areas where IT still doesn't like quite fit the bill, but IT covers on off a lot of those data points.
The problem is there's like a million different ways to tell the story of what quantum physics means about the world. They are all mutually inconsistent. Like these are the different interpretations of the theory.
Some of them say that, yeah, the parallel universes. Some of them say that human consciousness is central to physics. h.
Some of them say that, like, the future is prety determined from the past. And all of those theories fit perfectly to all the points that we have so far. But they they tell a completely different story about what's true and what's not. And some of them even have something to say about, for example, consciousness. And so in a weird way, like the fact that we haven't cracked the nut on any of that stuff, means for like we really have have no shot understanding the the conscience equation, sentience equation, which comes to like I or whatever else I mean were.
but for for action at a distance. Like one of the spooky things about that is that you can actually get IT to communicate anything concrete at a distance. Everything about the laws of physics conspires to stop you from communicating faster than light, including was called the action to distance. As as far as we as as far as we know.
And that's that's the problem. So if if you look at the leap from like newton's physics to einstein, right with with newton, we're able to explain a whole bunch of shit. The world seems really simple. It's forces and its masses and and that's basically if you got objects um but then people go oh look at like the orbit of mercury is a little lobby we got to fix that and IT turns out that if you're onna fix that one stupid lobby orbit, you need to completely change your whole picture of what's true in the world all the sudden you've got a world where space and time or link together, you have to they get bent by gravity, they get bent by energy.
There's all kinds of weird chip that happens with time and links contract like all that stuff all just do account for this one stupid observation of the warbling orbit of fricking mercury and the chAllenges this might actually end up being true with with quana mechanics. In fact, we like we know quantum mechanics is broken because he does not actually fit with our theory of general relativity from mindset. And we can't make them kind of play nice with each other at certain scales.
And so there's a mobile orbit. So now we're going to solve that problem. We're going to create a unified theory. We're have to step outside of that. And almost certainly, IT seems very likely we will have to reactor whole picture of the universe in a way, it's just as fundamental as the leap from newton.
This is, where is your hands and comes in. I can do this. You don't have to do this. I can take this off your hands.
Let me, this is all of this .
really complicated. But because you have a semi in brain, you have a little monkey brain that's just like super advance.
But it's really shitty. You know what? That's harsh, but it's sounded really hot. Yeah specially .
you have with a horse scally your hands and from her like the bedtime voice.
So year of the one that they got to do, the voice of sky, yes, was you. I would do both.
My irl voice on the.
on the sexy inss of scarlet hansen's voice. So, so opening eye, at one point that I can remember for summer, or opening itself, they were like, hey, so we are not. The one thing we're not going to do is like optimized for engagement with our products. And when I first heard the sexy c seductive scarlet hance and voice, and I finished cleaning up my, I was like, damn, that seems like optimization for something I don't know like IT, right?
Otherwise you get Richard cement to do .
the .
voice exactly.
just like turn people on. There's a lot of other options .
like that's that's an optimization for like growth of google's .
think yeah but let's .
see what goes.
Yeah goods go to boy. So do .
you think .
that A I with, if IT does get to an agi place, could IT possibly be used to solve some of these puzzles that have eluded our simple .
minds to totally.
I mean, even the potential advancements .
even before? No, it's like it's so is potentially positive. And even before agi, because remember, yes, we talked about how these systems make mistakes, the totally different from the kinds of mistakes we make, right? And so what that means is we make a whole bunch of mistakes that A N A, I would not make, especially as IT gets closer to our capabilities.
And so um I was reading this a this thought by Kevin sky, who is the city of microsoft. He has made a bet with the number of people that you know, in the next few years, N A I is going to solve this particular mathematical theory conductor called the riman hypothesis. It's like, you know how sped out of the prime numbers, whatever.
Some like mathematical thing that for a hundred years plus people have just like scratched their heads over. These things are incredibly valuable. His expectation is it's not going to be an agi. It's going to be a collaboration between a human and A I. Even on the way to that, before you hit H I, there is a ton of value to be had because these systems think so fast, their tireless compared to us like they have different view of the world and can solve problems potentially in interesting ways. So yeah like there's tons and tones of positive value there .
and even that we've already seen, right? Like past performance, man. Like yes, i'm most tired of using the phrase like just in the last month because this keeps happening.
But um in the last month, so google deep mine came out with an ice morph labs because they're working together on this, but they came out with alpha three. So all photo two h was the first. Let me take step back.
There's this really critical problem in molecular biology where you have so proteins, which are just as a sequence of building blocks. The building blocks are called the mino acid. And each of the amino acid, they have different structures.
And so once you're finish doing them together, they're naturally kind of fold together in some interesting shape. And that shape gives that overall protein is function. So if you can predict the shape, the structure of a protein based on its a mino acid sequence, you can start to do shit like design new drugs.
You can solve all kinds of problems like this is like the expensive crown jewel problem of the field. Alpha two in one soop was like, oh like we can we can solve this problem basically um a much Better than a lot of even empirical methods. Now alpha three comes out. They're like that. And now we can do IT if we tack on a bunch of either IT is if we can tack on a bunch.
Look at this, quote, alpa four three predicts the structure in interactions of all of life's molecules. What in the fuck? Kids, of course, introduced alpha three, introducing rather alf three, a new AI model, developed google deep mind, and is more for so I fix by accurately predicting the structure, proteins, DNA, R N A, legions, legions, legions and more, and how they interact. We hope we will transform our understanding of the biological world and drug discovery.
So this is like just your typical wednesday in the world of A I, right?
Because it's happening .
so quickly yeah like, yeah, another revolution happened here and .
it's all so fast and our timeline is so flooded with data that everyone's kind of unaware of th Epace o f I T. All that is happening at such a strange expansion, al rate, for Better and for worse.
right? And this is definitely on the the Better side of the ocasio. There's a bunches of stuff like one of the papers that actually google deep mine came out with earlier in the year was in a single advance like a single paper, a single A I model, they built, uh, they expanded the set of stable materials .
coverage. Terrible, but I never got.
T yeah, yeah. That's what IT is.
IT just never, never really brew. Terrible.
terrible.
Coffee is my favorite.
I can solve that problem. Fuck terrible things like .
I like of your date.
and really .
hot grill .
cooks for you. Thank you. This is amazing. Is the best background and cheese ever.
If inference, if scholarship, hanson on's voice was actually .
giving that I the best. Keep talking.
They have small, please government. Yeah so so there's this one there's one paper that came out there are like, hey, by the way, we've increased the set of stable materials known to humanity by a factor of ten. So like if on monday we knew about one hundred thousand stable materials, we now know about a million, they were then validated, replicated by berkeley university, a bunch of them .
as a proof consent. And this is from, like, we know the same munera als we knew before, like that wednesdays from ancient times, like the ancient greeks, like discovers m ship. The romans discovers m ship, the mille ages discovers. And then it's like, oh yeah, yeah, all that that was really cute.
Like, boom, one step.
So and and that's amazing. Yeah, we should .
be celebrated. Es.
dude will be able to get addicted to like that we .
haven't even thought .
of so I mean, you make me feel little more positive like overall, this going to be so many beneficial aspects to A I yeah and it's just what IT is, is just in unbelievably transformative event that we're living through .
yeah its power and power can be good and IT can be bad. And that yeah an immense power can be immensely good or immensely bad.
I think we're just in this, who knows?
We just need to structurally set ourselves up so that we can read the benefits and mind the downside risk like that. That's what it's all was about, but the regular ory story has done for that way.
Well, i'm really glad that you guys have the ethics to get out ahead of and to talk about IT with so many people and to really blair this message job because I don't think there's a lot of people that like I had mark and reason on who's brilliant, but he's like all in this can be great .
and maybe .
he's right may be is right yeah yeah but I mean, you have to hear all the different .
perspectives and I mean, like massive, massive props honestly go out to the team at the state department that we work with. One of the things also is over the course of the investigation, the way I was structured was um IT wasn't like a contract and they formed IT out when we went out IT was the two teams actually like war together. The two teams together, the state department in us, we went to london, N.
U. K. We talked and SAT down with deep mind. We went to send Frances o. We SAT down with sampling as we SAT down with anthropic all of us together.
And one of the major reasons why we were able to publish so much of the whistle blower or stuff is that those very individuals were in the rooms with us when we found out this shit. And they were like, oh, fuck like, the world needs to know about this. And so they were pushing internally for a lot of the stuff to come out that otherwise would not.
I also got to say, like, I just want to memorialize this two, that investigation. When we went around the world, we were working with. Some of the most elite ite people in the government that I didn't I would not have a guest existed that was honestly speak for well, I can be it's .
hard to .
be take IT to the hanger.
No, there's no hang. No.
there's no hanger. Yeah I think .
that there's no hair 等 where is sweet and where?
didn't. We didn't go that for time around at home. You know we went pretty far down the road hole. And yeah, there are individuals who are just absolutely, absolutely the like the level of capability, the amount that our teams jailed together at certain points, the stakes like the the stuff we did, the stuff they made, happen for us in terms of brain deathly brought together like a hundred folks from across the government to discuss like A I on the path to A G I and go through the recommendations .
that I was cool. Actually the first, first, basically the first, first time the us. Government came together and seriously looked at the prospect of age and the risk there and we had as well. I mean, again, that's like that was an november.
It's just too forget in yahoo is like the hell do we know in our amazing team um and and and IT was yet referred to by there was a senior White house of there was like that was a watershed moment in U. S. history.
Well, that's encouraging because again, people do like to look at the government is the D. M. V, yes, or the worst aspects of bureaucracy.
There is missing room like four things like congressional hearings on these whistle blower events. Certainly congressional hearings we talked about on the idea of liability and licensing ying and what regulate agencies we need just to kind of like start to get to the meat on the bone on this issue. But um yeah opening this up I think is really important.
Will shout out to the part of the government that's good, shout out to the government that gets that that's competent, awesome. And shout out to you guys because this is a it's heady stuff. It's very difficult to grasp.
But it's it's even in having this conversation with you, I still don't know how to feel about IT. You know I think at least slightly optimistic that the potential benefits are going to be huge. But what a weird passage we're about to enter into.
It's the unknown. Yeah, truly. Thank you, gentlemen. Really appreciate your time. Appreciate what you doing. Thank you. Thank people want to know more where should they go, which they follow.
I guess gladstone dot A I slash action plan is one that has our action plan.
Gladstone, not A I stuff is there.
I should mention to, I have this little podcast called last week, A I H recover server, last week's events.
And it's all .
about the server .
length every hour. Last last week is not time. We could be a war aren't .
like list of stories .
keeps getting something, anything can happen.
Time travel. Hear there.
First a right? Well, thank you guys. Thank you very much.
appreciated. thanks.