Do you worry that maybe a guy who's got a lot of money builds an army of 200,000 robots that'll be stronger than the military that we have? Absolutely. I was at a conference recently where a guy who had a lot of money was talking about building 3 billion such robots. Capable of doing what? Everything we can do but better. And this was not Elon Musk.
Warren Buffett said you have to kind of look at AI as like the nuclear bomb, you know, it's like the atomic bomb. That's also the fear, because the fear is, let's not accelerate AI in our country and robots, but somebody else does, and then our military drops behind. So then what happens? People who don't use AI get replaced by people who do. If a Chinese company builds superintelligence, China will not be run by the Communist Party. It'll be run by the superintelligence. Don't think 10 years.
Think of the next two years. Crazy things are going to happen. The technology is here to stay and it's going to blow our minds. Are you for it or against it, Patrick? Maybe, maybe not. One dirty secret, we have no idea really how it works. And if we do this right with AI and use artificial intelligence to amplify our intelligence to bring out the best in humanity, humanity can flourish. So in other words, you believe the future looks bright.
Thank you so much again. Appreciate you. Yes, please grab a seat. Okay, so I'm going to get right into it, guys. I think we have to talk about a soft, subtle subject with AI. Do you worry that maybe a guy who's got a lot of money builds an army of 200,000 robots that'll be stronger than the military that we have?
Do you worry about that at all? Absolutely. I was at a conference recently where a guy who had a lot of money was talking about building 3 billion such robots. Building 3 billion robots? Yeah. Capable of doing what? Everything we can do but better. And this was not Elon Musk. Who was this guy? I'm not... This was one of those conferences under Chatham House rules, where you're not allowed to say who said what. Is it a person that we would know or not really? Yeah, you'd know. I mean, it's only a few people that can afford 3 billion robots. So what happens if, let's just say...
Because you know, Warren Buffett said, you have to kind of look at AI as like the nuclear bomb, you know, it's like the atomic bomb. So he's a little bit more afraid of what AI is going to be doing. But how concerned should we be about something like that? Because the risk has many different facets. I wrote a few things down.
One is war. How many guys fear what could happen with AI with war? Military, robots. Okay. A lot of people. Next one I wrote, humanity. When Elon talks about how he's a humanist, he wants to make sure he can protect humanity. Business disruption. Pharma disruption. You know...
Criminal justice, you know, using AI, like that movie Minority Report. By the way, for every one of these things, there's a movie. For war, The Creator (I don't know if you've seen The Creator, I thought it was a great movie, just came out) or I, Robot. For humanity, The Matrix, or even the movie Her. Have you guys seen that terrible movie Her with Joaquin Phoenix, which was so weird that a lot of people liked? I wasn't one of them. Can I defend Her just very briefly, though?
I cringe almost always when I watch sci-fi films. I knew we were going to get into a fight after two minutes. But one really nice redeeming feature of Her was it didn't have robots in it. And I think that was a really important message. It's so easy to immediately put an equal sign between scary AI and robots. And Her shows how just intelligence itself can really give a lot of power too. Are you married, Max? Yes. You're married. Okay.
So if you're sitting locked in a room and you're really smart and you can't go out and touch the world but you're connected to the internet, you can still make a ton of money, you can hire people, you can do all your job interviews over Zoom. And if some super intelligent AI starts doing that, it can start having a lot of impact in the world. Positive or negative? That's the thing, you know.
When I gave the definition of intelligence as the ability to accomplish goals, I didn't say anything about whether those were good goals or bad goals. And intelligence, in that sense, is a tool. It's a tool that gives you superpowers for accomplishing goals. And tools and tech, right, they're not morally good or morally evil. If I ask you, what about fire? Are you for it or against it, Patrick?
I'm for it. You're probably for it. I'm against how some people use it. You're probably for using fire to make an awesome barbecue, but you probably are against using fire for arson, right? So whenever we invent some new tech, we also try to invent an incentive structure to make sure people use that tech for good stuff, not for bad stuff. And it's no different with AI. We have to, if we build these powerful things...
make sure that we have all the incentives in place to make sure that people use them for the good stuff, not for the bad stuff. How do you do that, though? Like this guy that you're talking about who wants to build 3 billion robots, okay? Do you think he's a good guy? I'm not asking for a name, but do you think he's a good guy? I think he thinks he's a good guy, but, you know...
I'm pretty sure Stalin also thought he was a good guy. Fair enough. Okay. So now, so, you know, do you in your mind at all see a future where robots don't exist?
Or is it, no, listen, we have to accept the fact that within the next whatever amount of time, we're going to be coming to places where robots are going to be doing customer service, robots are going to be in the military, cops are going to be robots, and I'm going to go to a restaurant and place an order with a robot. Do you see that as a near future, that that's going to be happening, whether we like it or not? Yeah, but I love what you said there earlier this morning about dreams. And my dream is that we build
really exciting AI, including a fair number of robots, and not just plop them into society randomly and see what people choose to do with them, but rather that we use a lot of wisdom to guide their use. You asked how we can influence how people use technology, right? So how do we influence how people use fire, for example? First of all,
We have social incentives. If you get a reputation as the guy who always burns down people's houses, you're going to stop getting invited to parties. And we also have a legal system we invented for that very reason. So if you do that and you get caught, you get maybe a couple of years to think it over on very boring food. And when we make these technologies...
If they are technologies that can be used to cause a lot of harm, we want to make them such that it's basically impossible to do that. There was a guy named Andreas Lubitz, for example, who was very depressed, and he crashed his Germanwings aircraft into the Alps and killed over 100 people. You know how he did it? He just told the autopilot to change the altitude dramatically,
from 30,000 feet to 300 feet over the Alps. You know what the AI said? Okay. It's like...
What engineer puts that in there? When we educate our kids, we don't just teach them to be powerful and do stuff. We also teach them right from wrong. We have to make sure that every autopilot in an aircraft will just refuse crazy orders like that. If you put good AI in a self-driving car and the driver tries to accelerate into a pedestrian, the car should refuse to do that.
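To make that refusal logic concrete, here is a minimal sketch in Python. Every name and threshold in it is a hypothetical illustration for this conversation, not how any real, certified autopilot is implemented:

```python
# A minimal sketch of the "refuse crazy orders" idea discussed above.
# All names and thresholds are hypothetical illustrations; real avionics
# software goes through certification and is vastly more involved.

TERRAIN_MARGIN_FT = 2_000   # hypothetical required clearance above terrain
MAX_STEP_FRACTION = 0.10    # hypothetical cap: 10% altitude change per command

def validate_altitude_command(current_ft: float, target_ft: float,
                              terrain_elevation_ft: float) -> bool:
    """Accept an altitude command only if it passes basic sanity checks."""
    # Refuse any target that would put the aircraft into terrain.
    if target_ft < terrain_elevation_ft + TERRAIN_MARGIN_FT:
        return False
    # Refuse implausibly large single steps (e.g. 30,000 ft straight to 300 ft).
    if abs(current_ft - target_ft) > MAX_STEP_FRACTION * current_ft:
        return False
    return True

# The Germanwings-style command would simply be rejected:
print(validate_altitude_command(current_ft=30_000, target_ft=300,
                                terrain_elevation_ft=8_000))  # prints False
```

The same shape of check applies to the self-driving example: the sanity check lives below the level at which a human, or an AI, issues commands.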
There's a lot of technical solutions like this where you just make it impossible for random nutcases to do harm. Another big success story, I think, is basically every other technology that could cause harm, we have safety standards. That's why we have the Food and Drug Administration. If someone says, hey, I have this new wonder drug that's going to cure all the cancers...
FDA is going to be like, "Okay, well, where is your clinical trial? Show us that the benefits outweigh the harms." And until then, you can't sell it. It makes sense to apply... We do the same thing for aircraft, for cars. It makes sense to do that with AI systems that could cause massive harm. That way, all the wonderful products we're going to get are going to be safe, kind of by design. And we've given incentives to the companies
to really have a race to the top and make safe AI, because whoever does that first, they're the one who gets the market share. Yeah, but who regulates it? So...
To create that 300 from 30,000, the guy just typed three, zero, zero? And maybe he wanted to do 3,000, forgot one zero, and it went from 30,000 to 300? - Yeah. - Where maybe, you know, it can't drop more than 10% in a 15-second increment. Okay, that's technology, I get it. But what I'm asking right now is, this guy that wants to build three billion robots. - Yeah. - With today's regulation.
He really got to you. Did he not get to you? Yes or no? Everybody, are you kidding me? I bet he got to everybody. So this guy that wants to build 3 billion robots, what regulation do we have right now to prevent him from being able to do that? Nothing, basically. But this is actually changing. In all other areas, there was also a lot of resistance to regulation. When scientists and engineers started saying, let's put seatbelts in cars,
the auto industry was dead against it. They're like, no, that's going to kill the car market or whatever. So they passed the seatbelt law in the US anyway. And did it kill the car industry? No. The amazing thing is that car sales skyrocketed after that, because people started to realize that driving can be really safe. And they bought more cars. And so we similarly just need to get past this knee-jerk reaction from some tech folks that
they're different from all other technology and should be forever unregulated. There's a big food fight in California now, as some of you might have followed: there's this law called SB 1047, which was just passed by the California Assembly. It's very light touch. It says stuff like, "Well, if your company causes more than half a billion dollars in damages or some sort of mass casualty event where a lot of people die, you should be liable." If they had a law like that for airplanes, people wouldn't bat an eye.
But you have a lot of people now taking to Twitter saying, "This will destroy the American AI industry," or whatever. If we just treat AI like other industries, with safety standards, here they are, a level playing field, free markets, once you meet the standards, you can make money,
then we'll be in very good shape. The European Union already passed an AI law last year. China passed an AI law. What's the European Union's AI law? It's called the EU AI Act. It's very similar to product safety laws for medicines, for cars, stuff like that. And as soon as we get something like that in the US, first maybe in states like California and then federally, I think we'll be in a much better place.
And then someone can't just come along and build 3 billion robots without first having to demonstrate that they meet safety standards. You don't want to sell robots where the owner of the robot can be like, hey, here's a photo of this guy, you know, go kill him. This guy that wants to build 3 billion robots, is he an American? Yeah. He's an American. Yeah. Who are the top 10 richest people in America? Yeah.
By the way, I'm not going to ask you who, because I would never question you like that. That's not my style. Top ten richest. So let's use the process of elimination. Let's kind of go through it. Let's have some fun with this. Listen, if we're going to do this, we may as well have some fun here. All right, so here we go. Elon Musk. He said it's not Elon Musk. Could it be Jeff Bezos? Maybe. Maybe.
Is that why you're putting these bright spotlights in my face here? Could it be Mark Zuckerberg? Look, I think it's not about individual people. It's really about creating the right incentives for all entrepreneurs so that they realize that the way they get rich is by doing stuff that's actually safe. I know. I'm not trying to make you feel uncomfortable. I don't want to make you feel uncomfortable.
So you got Ellison; I don't think it's Buffett; Gates; Ballmer likes basketball; Page; Sergey Brin. Well, let's talk about those guys. So here's what the Google guys said, if I'm not mistaken. Larry Page wants an AI god, is what he said.
Okay? And he's got the money to do it. And people who are against that, he calls them by a word, speciesist. Which is a derogatory term, if I'm not mistaken. Mm-hmm. Mm-hmm. Yeah, I was actually right there when he said that famous quote. I'm a first-hand witness. I think it got out because I wrote about it in my book. So this is not bullshit. He actually said it. So what do you think about Larry Page calling us speciesists?
Look, I don't judge people. We do. This environment is very judgmental. We just are. Everybody's entitled to their own dreams. My dream is that my children, and your children, will have a wonderful future. And a future where they don't need to compete against machines. It would rather be one where AI and other machines
make their lives more enjoyable, better, more meaningful. Larry might say now that I'm a speciesist, that I should feel sorry for the machines or whatever, but hey, we're building them. They wouldn't exist if it weren't for us. Why shouldn't we exercise the influence we have to make it something good? I'm a very ambitious guy, and you are too, and I really respect that. To me, it's utterly unambitious if we're like, well, you know,
We think we're so lame that the only thing we're going to try to do is build our successor species as fast as possible. Where's the ambition in that? Why would we do something so dumb? It's almost like Jonestown, some sort of suicide cult. I love humanity. I think humanity has incredible potential and I want to fight hard for us to realize it. So I want us to build technology that
gives us a future we and our children really want to live in. Call me a speciesist, but that's my dream. We're the same. You and I are the same. No question about that. And I think a lot of people here, our family folks, are the same as well. So in other words, let me actually ask all of you guys in the audience also. So raise your hand if you're excited about us building more powerful AI and robots that can really
empower us and help us flourish in the future. Raise your hand. Okay. Now, raise your hand if you're really excited about building AI which will just replace us. It's a little hard to see with the spotlight, but I don't see any hands at all. So I think we're all on Team Human here. And hey, fellow speciesists. But let me ask you a question. So, for example, what if... Because it's always,
the government's going to say you can't build robots, okay? But then... No, no, no. Pushback. The FDA doesn't say you can't develop new drugs. They just say that in order to sell them, you just have to do the clinical trial and show that the benefits outweigh the harms. Similarly, the law would say, yeah, sure, you can build robots, but before you can sell them, you just have to make sure they meet the safety standards we've all agreed on.
So, for example, you can't sell a robot if it enables the owner to just go tell it to do terrorism or murder people for you, right? That's a safety standard. We can specify it. And it's perfectly possible at the technical level, actually, just like we teach our children what they can do, what's good, what's bad, do that with machines also. But the question I'm asking is the following. Do you think Iran has a nuclear weapon?
They say they don't. Do you think they have it? Maybe. Maybe, maybe not. Okay. How would we know that they don't? Iran is a massive country, a bunch of desert. How can we know they don't have a nuclear plant? We don't, I think, know for sure. But look, it comes back to incentives again. Suppose they do have a nuclear weapon. Why haven't they nuked us?
because they have an incentive not to, because then they would get nuked too, right? And similarly, why do companies have an incentive to build airplanes that don't crash? Because ultimately it's bad for them, right? So that's the whole point really of
having safety standards. There didn't use to be an FDA, actually, and this company, which I will not name, sold this drug called thalidomide and said it's great for mothers to take during pregnancy if you're feeling a bit stressed and have headaches. And they didn't mention that there was early research suggesting it caused a lot of kids to be born without arms. And they sold it and sold it, and it was a horrible tragedy.
And eventually it got shut down here; when it was banned in the US, they started selling it in Africa. So that's what happens if you have the wrong incentives, right? Whereas if you have safety standards that companies have to meet, then when they try to maximize their shareholder value, they're actually going to do the good things and not the bad things. Yeah, but again, let me ask this maybe last question and we'll transition to a different topic.
So we could have regulations for America that say, hey, these are the standards you need to go through until you do X, Y, Z. However, in America, we allow lobbyists to buy up politicians. So what if somebody building a robot company
has massive expenditure now for lobbying, a few billion a year, 10 billion a year, and they're able to get past certain laws to give them the leverage to continue growing. And then it's, hey, don't come out with these laws. And maybe there are even other countries that don't have to abide by our rules and guidelines, who have their own money and spend a bunch of resources, and we don't.
And come 10, 20, 30 years from now, all of a sudden the military in another country is stronger than ours, and we fell behind. That's also the fear. Because the fear is, let's not accelerate AI in our country and robots, but somebody else does, and then our military drops behind. So then what happens? You understand the concern? Totally, totally, yeah. So...
That's a very real concern. AI can be very persuasive now, and it's getting more persuasive by the day. This morning I was reading about some new AI cults that are forming on the internet and so on. It's super important to have safety standards also for systems that are out there on the internet talking to people, to make sure we understand what's going on there. We're lucky in that we in the US have the strongest AI industry in the world.
That gives us the opportunity to make sure that bad things aren't done with our tech. Interesting, we should do it regardless of the rest of the world for our sake. And if you worry about China, for example, so Elon told me this really interesting story. He was in a meeting with some top people from the Chinese Communist Party, and he started talking to them about superintelligence. And he said, you know, if a Chinese company builds superintelligence, after that,
China will not be run by the Communist Party. It'll be run by the superintelligence. And Elon said that he got some really long faces, like they really hadn't thought that through. And then within a month of that, they passed their new AI law. So, you know, why would China... They were afraid of the... Of Chinese companies doing crazy shit. So, you know, in other words...
China, they put in place their own regulation on drugs, not to help us, but to protect Chinese consumers. They are reining in their tech companies for doing crazy stuff because they want to stay in control. So here again, incentives, incentives, incentives. They kind of align. Each country has an incentive to make sure that none of their companies lose control over their tech. And then once that happens, once you get this
at a grassroots level in individual countries, there is a very strong incentive for different countries to talk to each other. You know, the European FDA, the American FDA, and the Chinese one, they talk to each other all the time to harmonize the standards, so someone who develops a medicine in the US can get quick approval elsewhere. We have these AI safety institutes, which have just been created in the last year. America has one.
England has one, China just started one. These nerds are going to be talking to each other, comparing notes. And that's how we're going to eventually get some global coordination too. I want to add some optimism here, because some people are so gloomy these days. They say, oh, you know, we're screwed, we don't have the same goals as China, we're never going to get along. Hey, you know, we were not best buddies with Brezhnev and the Soviet Union either, right? We
didn't have very aligned goals, but we still made all sorts of deals to prevent nuclear war, because both sides realized that everyone would lose if we just lost control of that. It's exactly analogous here. All the top scientists
across these countries, if they tell their governments that everybody loses if we lose control, there's an incentive, whether they love each other or not, for them to just coordinate. And this can be done. There's definitely hope there. - Okay, awesome. In regards to a lot of folks who run businesses, entrepreneurs, small business owners, from a million to a billion in top-line revenue, from two employees to 10,000 employees,
what would you say on the opportunity side, specifically for the business side? Over the next three, five, 10 years, we're already seeing it all over the place. OpenAI, you've got Grok, you've got NVIDIA, you've seen all these different things happening. But as a small business owner, what should my relationship with AI be? Great question. I have a lot to say on this. First, don't think 10 years. Think of the next two years. Crazy things are going to happen. And
if you make a plan for what you're going to do in nine years, it's going to be completely irrelevant, because what you want is to be nimble. Look at what you can do right now that's going to help you in the next 12 months, and then go from there. The first thing I would talk about is hype. There is a lot of hype about AI. It's a strong brand right now, so people will try to sell you a glossed-up Excel spreadsheet and call it AI. Don't fall for the hype. But there are two kinds of hype.
The first kind of hype is what we had with cold fusion, where the whole technology is just a complete dud. Then there's a second kind of hype. Who remembers the dot-com bubble? Yeah, so that was a lot of hype, right? A lot of people lost a lot of money. But is the right lesson to draw from the dot-com bubble that
the internet and the web never amounted to anything? Would the smart thing for a company back then have been to say, "No, we're never going to have a website"? Of course not. So the hype there was not about the technology itself; the web did in fact go on to take over the world. The hype was about certain companies, which were giant flops, right?
The kind of hype we have with AI now, I feel, is exactly the dot-com kind of hype. There are a lot of companies that are very overvalued and are going to go bust, but the technology is here to stay, and it's going to blow our minds. So what do we do with our own personal business here in such a risky environment? Well, first of all, look at your existing business. Instead of dreaming about some pie-in-the-sky, completely new thing you could do...
that might be a giant flop, look at what you are doing right now across your company that AI might be able to greatly improve the productivity of. Usually what happens first in your company will not be that you just replace a person with AI, but rather that you enhance your staff with AI. So you look at someone who's doing certain tasks and you realize you can give them some tools they can use to do 40% of their tasks much better:
much more productivity for the same headcount, and much lower risk, right? Because they already know what they're doing. If the AI writes the first draft of that report or whatever, they will read it before they send it out. So you don't run the risk of being like that lawyer who filed a brief citing case law from ChatGPT that was just completely made up, which the judge didn't like.
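As a rough illustration of that "AI writes the first draft, a human reviews it" pattern, here is a minimal Python sketch. It assumes the OpenAI client library; the model name and prompts are placeholders, and any LLM provider would work the same way:

```python
# Minimal sketch of the "AI drafts, human approves" workflow, assuming the
# OpenAI Python client (pip install openai). Model name and prompts are
# placeholders for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_report(notes: str) -> str:
    """Have the model write a first draft from an employee's raw notes."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "Write a concise business report draft."},
            {"role": "user", "content": notes},
        ],
    )
    return response.choices[0].message.content

draft = draft_report("Q3 sales notes: revenue up 12%, churn down, two new hires.")
print(draft)
# Crucially, a human reads and edits this draft before it goes anywhere;
# the model output is never published unchecked.
```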
So if you take things you're doing, basically, and empower your staff to have AI do first drafts of things, et cetera, but the humans are in charge and there's still quality control, that's very low risk with huge productivity gains. A second thing I would say: it's tough, especially if you're a small business and lack a lot of in-house expertise, not to get ripped off by companies who are trying to sell you a bill of goods with an AI sticker on it. So it's
a really good idea, even for relatively modest-sized companies like yours, to get at least some in-house expertise, even just one person who is really quite knowledgeable. That person can then go around and talk to other key staff across your organization, learn what they're doing, and advise them on how they can automate certain things, enhance their productivity, and get things done in a way where you get all the upside and
none of the downside. What would that position be? And by the way, in about eight minutes, I'm going to come to you guys to ask questions. We're probably going to get two or three questions. So if anybody wants to ask Max any questions, go line up by the mic. We'll come to you guys momentarily. So when you're looking at hiring somebody that has AI expertise...
When you're interviewing a CTO, or you're interviewing somebody that's a CIO, or you're bringing in somebody that's a BA, a business analyst, what types of questions are you asking to make sure they have a background in AI? Ask them what they've built before. This is very much not about being able to talk the talk, but about being able to actually walk the walk and build systems, make things work. They should have a track record of having built things before.
I mean really nerdy, sit at the keyboard, install stuff, get real productivity. But don't put them in charge of making business decisions. They are automation engineers, they're people who humbly go and interview other people in the company and ask them, "Hey, tell me about your workflow." And if those other people feel, "Yeah, this would be great if you could give me a tool that writes the first draft of this thing,"
That person you've hired could either do it themselves or contract it out to someone else, to provide the expertise. There are many pitfalls. I mentioned Knight Capital, for example, with that trading disaster, because that's in a nutshell what you don't want to do: put in place some AI system in your company that you haven't understood well enough. It might not even screw you over by crashing. It might screw you over by
giving your proprietary data to whatever company owns the chatbot, right? And maybe you don't want that. It might screw you over by being very hackable, and so suddenly you come into the office and there's ransomware on all your computers. It's really important to tread carefully. But if you do tread carefully like this, there's just spectacular upside. Knight Capital is the one where you said $10 million per minute for 44 minutes. Yeah.
Did you tell them already or no? Did you guys hear about the story, what happened with them? Yeah, I mentioned it in the beginning. $440 million in 44 minutes. So, one dirty secret I have to tell you about the state-of-the-art large language models and a lot of the Gen AI stuff is we have no idea really how it works.
But that's not necessarily a showstopper. If you have a co-worker, you don't understand exactly how their brain works either, but you can have someone else check their work and so on. And if there's an important business decision, you have the final call, and you ask them some tough questions first. But you have to treat the output from any AI systems you have as work you got from some temp worker that you have no reason to trust at all.
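One concrete way to picture that: accept the model's work only after a cheap mechanical check, the way you'd spot-check a temp. A tiny hedged sketch follows; the invoice task and all the numbers in it are made up for illustration:

```python
# Sketch of "trust but verify": accept AI output only after a cheap mechanical
# check. The (hypothetical) task is extracting totals from invoices, where
# verification is far cheaper than doing the extraction by hand.

def verify_invoice_total(line_items: list[float], ai_reported_total: float) -> bool:
    """Recompute the total ourselves; checking is much cheaper than extracting."""
    return abs(sum(line_items) - ai_reported_total) < 0.01

line_items = [120.00, 49.99, 15.50]   # line amounts the AI parsed from the document
ai_total = 185.49                     # total the AI reported

if verify_invoice_total(line_items, ai_total):
    print("Accepted: AI output passed verification")
else:
    print("Rejected: send back for human review")  # never trust unverified output
```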
So if you can find a way of verifying that what they did is actually valid and correct that takes much less time than it would have taken to actually create that thing in the first place, you're on. Max, do you have kids? I do. Okay. So with kids, I got a 12-year-old, 10-year-old, 8-year-old, and a 3-year-old. Anybody have kids? Raise your hand if you got kids. A bunch of people here have got kids. So how do you
not only have the conversation with your kids about AI, but also career planning and positioning, where traditionally you're like, hey, son, you're going to grow up to be a this. How do you manage career planning with them at a young age, knowing what direction AI is going? I just asked you, what do you do with, you know, your two-year, three-year, five-year, 10-year plan with AI? You said, here's what you don't do: no 10-year plans, right? You should go out, what, 12 to 24 months. So imagine a
12-year-old. What's going to happen in six years? So how do you manage that with career planning and kids? Yeah, you know, I have a little guy who's just going to be two years old in December. And it's tough. It really keeps me awake at night thinking about this. I think one obvious message is you have to be nimble to live and prosper in the future.
The idea that you spend 20 years studying stuff, then go into some career and do that for 40 years? Forget about it. That's so over. You need to be nimble and have the idea that you're going to constantly be innovating, learning new things, and going where it makes sense to go. The second thing is, whatever field you're in or your children are going into, even if it seems like it has nothing to do with AI, it's crucial
that they're up to speed on how AI is influencing and will influence that industry, right? Because what's going to happen then is not that your kid is going to be replaced by an AI, but rather that people who don't use AI get replaced by people who do. And you want your kids to be in the second category, the ones who are the early adopters, who become more productive, not the ones who are in denial and just get replaced. How soon do you introduce them to it?
Oh, there is no too soon. I mean, okay, little Leo, we keep him away from screens altogether. But if you have any kids in school these days, they're always using ChatGPT, even if you think they're not. So no need to skip the introductions. But it's really important to get kids thinking about how they can make the technology work for them, not against them.
And both in the workplace, actually, and in their private lives also. It makes me really, really sad when I walk around a beautiful place like this and see all these good-looking teenagers around the table, and they're not looking at each other. They're staring into little rectangles like the zombie apocalypse came or something like that, right? So this isn't just business. This is also about our personal lives. How can we make sure...
we control our technology rather than our technology controlling us. Coming back to this idea, we are Team Human. Let's figure out how we can make technology that brings out the best in us,
so we can let our humanity flourish rather than trying to turn ourselves into robots and compete in a losing race against machines, like John Henry against the steam engine. Since we're out of time, can I just end on a positive note? - Please. - Since you said that I'm a doomer, a gloomer, I want to remind us all about something incredibly inspiring and positive about all this. Our planet has been around for 4.5 billion years.
And our species has been around for hundreds of thousands of years. And for most of this time, we were incredibly disempowered, like a leaf blowing around on a stormy September day, you know, very little agency. Oops, we starved to death because our crops failed. Oh, we died of some disease because we hadn't invented antibiotics. And what's happened is that technology and science have empowered us.
We started using these brains to figure out so much about how the world works that we can make technology and become the captains of our own ship. We've more than doubled our life expectancy. And every single reason why today is better than the Stone Age is because of technology. And if we do this right with AI
and use artificial intelligence to amplify our intelligence, to bring out the best in humanity, humanity can flourish, not just for the next election cycle, but for billions of years, not just on this planet, but if you are really bold, even out in much of our gorgeous universe out there. We don't have to be limited anymore by the intelligence that could fit through mommy's birth canal. It's an incredibly inspiring and empowering future that is open for us, if
we don't squander it all by doing something reckless and dumb. And this is what I want to leave you with here. We want to make sure to keep AI safe, keep it under human control, not because we're a bunch of worrywarts, but because we are optimists. We have a dream for a great future. Let's build it. So in other words, you believe the future looks bright.
We have the power to make it bright. I'm with you. But it's going to take work. Max, appreciate you for coming on. Make some noise, everybody. Max Tegmark! For the last four years, every time we do podcasts, I have to ask Rob or somebody, hey, can you pull up this news? Can you pull up that? Which way do these guys lean? Can you go back to the timeline? Eventually, after asking so many questions, I said, why don't we design the website that we want? We aggregate. We don't write the articles. We feed all of it in
using AI. So nine months ago, eight months ago, I hired 15 machine learning engineers. They put together our new site called vtnews.ai. Here's what this allows you to do when you go to it. If you go to that story right there that says Trump proposes overtime pay, click on it, it'll tell you how many sources are reporting on this from the left. If you go to the right, Rob, where it says left sources, click on it.
Those are all the left sources. If I want to go to right sources, those are the stories. If I want to go to center, I go there. Now, if I want to go all the way to the top and I want to find out a lopsided story, a story that only one side is reporting on, either the left or the right. So if you notice the first one, we'll say Zelensky announces release of 49 Ukrainians from Russia. Notice more people on the left are reporting on that than the right. If I go to the middle one, same thing. If I go to the right one, same thing. You can see what stories are lopsided.
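For the curious, a lopsidedness measure like that could in principle be computed from per-story source counts. The sketch below is only a guess at the idea, with made-up counts; it is not vtnews.ai's actual algorithm:

```python
# Hypothetical sketch of flagging "lopsided" stories from source-lean counts.
# The counts and threshold are invented for illustration.

def lopsidedness(left_sources: int, right_sources: int) -> float:
    """Return a score in [-1, 1]: -1 = only right covers it, +1 = only left."""
    total = left_sources + right_sources
    if total == 0:
        return 0.0
    return (left_sources - right_sources) / total

stories = {
    "Zelensky announces release of 49 Ukrainians from Russia": (14, 3),
    "Trump proposes overtime tax cuts": (9, 11),
}
for title, (left, right) in stories.items():
    score = lopsidedness(left, right)
    if abs(score) > 0.5:  # arbitrary threshold for "lopsided"
        print(f"Lopsided ({score:+.2f}): {title}")
```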
And if I pick one of the stories, pick the first story, click on the Trump one that proposes overtime tax cuts, to the right there's the AI. I can ask any question I want, but click on the first question that's there. It says, what is the political context and potential motivation behind
Trump's new tax cut proposal? Click on the question mark. It explains exactly what the motives are. So it's for you to use, whether you're doing a podcast, you're in the middle of a podcast, or you just want to know it for yourself, because you're busy like myself. And last but not least, this is all AI doing this; these are machine learning engineers. Go all the way to the top; I can go to timelines and see how far back a story goes. Pick the Israel-Palestinian conflict.
If I want to go back and see why those two days have a big spike, I'll have Rob pull it up, go to those two days with the big spike, and see exactly what happened on that day or the previous day, along with many other features VTnews.ai has. So simply go to VTnews.ai. There's a freemium model, there's a premium, and then there's the insider. If you want unlimited access to the AI, click on the VT AI Insider. You can become a member effectively today.