Rise and shine, fever dreamers. Look alive, my friends. I'm V Spear. And I'm Sammy Sage. And this is American Fever Dream, presented by Betches News, where we explore the absurdities and oddities of our uniquely American experience. What's up? Dude, what is up? Have the robot overlords checked in lately? Where are we at? Not since we...
We last recorded, but maybe by the time people are listening, it might have happened. And maybe we are speaking to you in AI right now. We could be. This is all just ChatGPT or DeepSeek. DeepSeek? DeepSeek. They're going to ban that shit. They're going to ban that like TikTok. That's the next thing to go. I don't know. Don't get too comfortable with them. Let's wait and see. So how do you feel about AI? What's your vibe around it? Okay. Okay.
I don't care for it because I think best not, you know, best not.
And I do think the term "machine learning" sounds so cool, so I get that. And I do regrettably like some of the AI art that I see, where they take, like, the countries and an animal, and they animate them and make them look really cool and whatnot. But I'm like, I don't think we need to burn the world to the ground for that. I think we could probably just do without it. I think we should probably not do it. That's really how I feel about it. Here's the thing. I...
know that you can't stop progress. So as much as you want to say we should just not do it, the idea that we're going to prevent certain actors from doing something, or that it will stop at the art and the personal assistant in your phone, isn't realistic. And I agree. That's what scares me about it. It's not that I think progress is bad. It's that I don't necessarily trust the stewards of that progress because,
as much as I would hope otherwise. If it was going to be like the Jetsons, I'd be down, right? Because if you look at the way that things were improving by using technology in the 50s and 60s, it was a lot about giving the person back time, right? You had an automatic percolator for coffee. That was incredible. That gave you back time. You had different appliances or things in your home that made it more comfortable, more livable, gave you more time.
And then with that time, you were still making a salary that allowed you leisure. And with our AI now, I feel like it just gives me more work because I have to do my work and then also try and learn these new apps constantly. And then by the time you learn one, it's outdated.
So I don't know that the technology right now is giving me that fun, cool Jeffersons. Not Jeffersons. Jetsons. Jetsons. I feel like the Jeffersons, not the Jetsons. I feel like we're moving on up, but I'm not sure if it's better. Right. Well, I feel, okay. So I actually, it's interesting because, so Avi, I interviewed Reid Hoffman for the rest of this episode.
And he writes in his book that the way people think about the future, their sort of schema for it, is only something like the Jetsons or like 1984. But in reality, nothing's ever as bad or as great as we envision it becoming. And that's kind of one of his main theses: that we are going to shape this technology and it's going to shape us. And there are certain issues.
Yeah.
So it's complicated. These are some of the biggest questions that you could really conceive of, in a way. I just worry about it in terms of the way that it is dehumanizing so much of the world. So when we think of AI, we think of tech and school and education and science, and sometimes we think of it in the good ways, right?
It'll probably write code that someday cures cancer, and that will be incredible, or it can better formulate stuff. We could do a lot with the big data. But a big part of where AI will show up for the average person is in pornography. And I think...
What we see with AI-generated porn is the further dehumanization of women, often, and female figures, because you can make the AI do something that a woman wouldn't. And then you can even write it off as, well, it's not a real person, so I can fulfill these dark freak fantasies, in a way that I don't think improves us. So sometimes I tend to think of how something will humanize or not humanize us.
And when you think about the bros who are in charge of the AI, these are not empathetic, kind, compassionate, human-centric, woman-forward, human rights-loving folks. And so having an AI sex robot is actually what I fear the most with AI. I mean, that's what's so interesting about speaking to Reid Hoffman because he does have a humanistic perspective. And I did say to him,
Well, what about the other people who are in this race who are not prioritizing those ethics as much?
And it's a challenge. It's genuinely a challenge. But we do talk about that in the episode and we get into just a lot of different aspects of what to expect and envisioning the world in the future. So yeah, I hope you enjoy it. Sammy did this episode by herself because they didn't want me to ask Reid Hoffman about AI porn bots. So enjoy. Yes. Look, the hard-hitting questions. I wish you would...
I didn't agree that I wouldn't ask about that, but I didn't think to. Yeah, you're welcome. I don't think, I don't know if he would have appreciated it. I'm not sure. He's like really nice. He's very nice. Very nice man. Here's Sammy's interview with Reid Hoffman.
Hello, I am thrilled to be here with Reid Hoffman, founder of LinkedIn. That might be how you know him, but he is so much more than that. He is also the author of the recent book, Superagency, all about our AI future. Thank you so much for joining us. It's a real pleasure. It is. So obviously, AI is the name on everybody's lips these days. And I think we can all easily catastrophize a world in which AI ruins our lives.
But you are quite a tech optimist and especially about AI. So maybe let's just start from the most positive point of view we can. Can you paint a picture of a world, maybe in like seven to 10 years, where our lives are run by AI-assisted living? Yeah.
So, 7 to 10 years in the tech industry is always a good way to look a little foolish, so I probably will look dated in a bit. But a medical assistant on every smartphone, available 24-7 to everyone who has access to a smartphone, that is substantially better than today's average high-quality GP system.
a tutor on every subject for every age and think of it as also kind of a general life assistant. It isn't just the kind of things you can do today, which are
Hey, I got these ingredients in my fridge, what might I make for dinner? Or, hey, I'm going to have this difficult conversation with my family member, what's a way I might approach that? Or, a friend has lost a pet, a treasured pet; what are good ways to try to comfort or reassure them? But also questions around an agent that's been in dialogue with you for years, and you're saying:
Hey, I'm thinking about, like, you know, this big choice in my life. It could be a job, it could be moving cities, it could be any of these things. And the agent would be one of the things trying to help you. Mm-hmm.
So what about like, you know, wake up in the morning? Is there going to be something that can sort of regulate our lives? Is that what you mean? Like, how will it, you know, when I woke up maybe 15 years ago, I wasn't scrolling on my phone the first thing. How do you think about how we'll change our daily rhythms? Are people going to be working the same ways? Are they going to be transporting themselves the same ways? What are you seeing from your end as someone who's, you know, getting access to the most relevant
newest ideas all the time. So transporting with AI: autonomous vehicles, right? So I think that would be, you know, one thing. And I think every person doing work will deploy with a set of AI agents.
And so, as opposed to just us, like, for example, say we're having this conversation, there'll be an AI agent that might be talking to you through your earbuds saying, oh, maybe you should ask this question next. He said this; it was really interesting; it's related to what else he said. And that's this particular context, but this applies to every context. When we do any kind of work meeting, we'll have an agent that's listening that goes, oh, when you said this, maybe you want to talk to so-and-so, or maybe we should inform so-and-so, or this might be the next thing to do. Or, I committed to you to do something, and it will email me, or email both of us, and say, hey, remember when you said you'd follow up in the following way? You'll do that now.
Right. I do wish sometimes that there were things like that. Where I need to go write something down, I wish it would just be written down and done. Or, you know, something you want to do in your head, I wish it could just be translated to an output. So I do see, you know, obviously those use cases. And I do think that people are, or have been, generally excited about the potential for the future. I do think it is very American to be excited to push boundaries.
But I think what we've seen is that this lack of trust, and you talk a lot about trust in the book,
It's not that people necessarily doubt the technologies. They doubt the people who are in charge of those technologies. So how would you say, do you think that the industry at large, and I know you're obviously a huge part of it, has earned the public's trust to be able to implement these systems in a way that's not going to put profits over humanity in many cases? Or that may make some deadly mistakes?
Well, I know that the industry is working very hard on not making any deadly mistakes. There are a lot of different safety groups, and the safety groups aren't just about safety in a narrow sense, but also about alignment to human interests. And obviously, trust in institutions across the board is at a real low, the lowest in my lifetime. But it's not just trust in tech companies. It's also trust in government, trust in universities, and so forth.
So it's a natural thing to say, well, you haven't earned the trust. Now, one of the things I think that people frequently forget about what the tech industry is doing is we all have access to a world of information through our phones, through our computers. Much of that information, Wikipedia, et cetera, is free. We have free communication in various ways. We have a whole set of things, free translation services,
We have a whole set of things that are essentially kind of engagement gifts from these companies. And it doesn't mean that they aren't also having for-profit motives. But, like, for example, when you go buy a sandwich from a local deli, they have a for-profit motive too. But, you know, you trust that you're getting a good sandwich. It's part of actually how things work. So, you know, my view is...
When you compare, like, where you might be getting this from, actually the tech industry has a reasonable comparison standard of trust relative to other institutions. Yeah, I did think that was an interesting point you made in your book, that people feel more positively about Amazon than they do about the Supreme Court, the police, and the military. Maybe that's because those entities are failing at regulating the exploitative nature of big tech, which is what people would expect those institutions to do. Meanwhile, Amazon is getting them their packages really fast at cheap prices. So that's enabling their lives. And then, you know, I think there is this sort of dynamic.
People aren't anti-capitalism. I think there's like a strong capitalist streak within American society. But you were a philosophy major. Do you believe sort of fundamentally that someone who is profiting from a system could be the one determining the ethics of that system fairly?
Partially, is what I would say. And part of that is because of how companies respond. They're very responsive. They want not only customers today, but customers tomorrow and customers next year.
They're responsive to local community pressure, press pressure. They don't want to be in the subject of controversy of doing bad things. They have employees, and the employees go home and talk to their families and their community, and they want to feel proud about what they're doing. So there's all of these networks of accountability and governance that go into kind of how companies operate. Now, that doesn't mean the companies are perfect. That doesn't mean they don't have blind spots.
especially if a company is doing something particularly bad, like selling you cigarettes or something else; it can particularly misfire in that case.
But I think what happens is everyone likes to talk about it as like, well, if you're making a profit, there must be a crime there somewhere. It's like, no, no, this is how we build our whole society. And that's actually, in fact, very important. Do you think it is, though, about just making the profits or even just – I don't think it's necessarily that people are opposed to big companies. And like we said, they like Amazon. I like using these platforms. Yeah.
But I think the concern has been over the very small concentration of companies that are, you know, in charge. And now, I wonder, would you have written things differently in this book if you had written it today rather than early last year? Certainly now, post-election, post-DeepSeek. Oh, no, no. I would have written exactly the same. I mean, part of it is, if you kind of roll back, call it 20-plus years,
there was really one big tech company, Microsoft. Now we have seven, and I think we're actually heading towards 15. That kind of competition is part of what creates, you know, kind of products and services, cheap products delivered quickly, etc. And so no, I actually don't have worries about, like, the concentration of these. And even when you look today at companies building...
There's a whole wide variety of AI agents that you can kind of pick and choose between. And by competing, they're trying to compete for which one's good for you, which one's cheaper, which one might meet your needs. That's part of how they kind of build things that are much more responsive to individual needs than, you know, even, for example, democratic governments.
So you have a series where you speak to your AI Reid and you ask it questions. That's a question I want to ask, actually: "it" or "he"? You ask AI Reid questions about yourself to try to, you know, I guess interface with it. And one thing that you asked your AI self is about regulations, and what does AI Reid think of regulations, and you are not against them.
Whereas many people in the tech industry, who you refer to in your book as Zoomers, are super optimistic about the technology and think it should just be able to continue unfettered, you are somewhat pro-regulation. So can you talk a bit about your stance on regulations, especially given that a lot of
regulation hasn't worked, and that it can sometimes be applied unfairly in a way that actually strangles smaller companies, not bigger ones. This is one of the reasons why I'm most often opposed to doing regulation preemptively and overly broadly. So I am pro-regulation, but the way you should do it is, as opposed to imagining what could possibly go wrong, you should go, look, the only future-preventing regulations we're trying to do are ones where we're like, oh, that would be a massive impact. That would be very bad. That would be enabling terrorists. That would be enabling criminals. Let's try to pull those back. Now, generally speaking, I think the way you should do it is add regulation as you're, and I'll use this metaphor directly, going down the road.
Use cars as an example. Actually, in fact, deploy cars, allow them to drive, you know, and then as they're driving, you say, oh, there should be seatbelts. And then the industry goes, oh, no, consumers don't want seatbelts; we shouldn't be doing seatbelts. Fine. At that point, you go, nope, we're going to add in regulation for seatbelts.
It's a specific thing. You know what you're doing already. You know why the market's already malfunctioned: consumers are going, I don't really understand that low-probability crash but high-probability damage. Fine, you have to have seatbelts.
And that's the kind of way to approach regulation as opposed to like the I am a smart regulator and I understand this so much better than the companies and the creators of this technology. And so I'm going to chart the path of what the future is going to look like and what they can do and not do all in my imagination for this. Don't do that. Put the car on the road and then go, okay, here is a specific area of the industry the companies are misfiring. They're probably misfiring because, by the way, the customers are saying, I don't want this.
But we still need to do it anyway. Let's do the specific thing. So what is an example, maybe either within AI or within tech in general, of a problem that could be solved by something like a seatbelt? Well, part of what the companies are already doing is this: you try to make these agents, these chatbots, useful on a wide variety of things.
And, you know, on any set of different questions. But, let's say, you make them less useful on, you know, how do I do a cyber attack or a phishing attack, or how do I, you know, make some kind of explosive or other kinds of things, the things for, you know, criminals. Because, like, no, no, actually, in fact, in all of the elevate-humanity cases, that's not needed, but it could be done by a criminal or could be done by a terrorist. And that's the kind of thing that's already happening. Now, I think the kind of thing that the Biden executive order did very well is to say, hey, look, when you're building this stuff, it's not a regulation so much as: you have to do these specific things. Have a safety plan. Say what you're training it to avoid, so that when we come and ask you, hey, can we see your safety plans, it's, yeah, here's our safety plan, here's what we're training on. And then we might say, oh, you forgot about anthrax; you need to put anthrax in it, and let's do that. That kind of thing. Or, you're not doing it well enough:
Could you show some quality metrics about how you're doing it better? And that's the kind of iteration for it. And you don't need to necessarily say, I'm going to create a 43-page form and you have to follow exactly this 43-page form and so forth, because that's part of not allowing all the innovation that could be really good. Right. This is a question I guess I kept returning to while reading, and it's something I think about a lot, which is, who should be coming up with these rules? Because you have the companies, who have, you know, obviously split motivations, not only just for bad reasons. So, you know, in your telling, the market will kind of regulate it because, you know, the consumers won't like these bad things.
But then you also have – let's say you do take that sort of laissez-faire approach. Now, your thought process is we want to improve humanity. But you're competing with people who don't want to improve – not necessarily. They want to improve things for themselves or for some other interest. Yeah.
So, how do you – I think you, if you're the person who's kind of, oh, I want to distribute my profits a little bit more or I want to provide better worker conditions or whatever it is, you're probably going to lose in that market, in that competitive market to a company that maybe doesn't care about those things and they're maybe willing to take some risks. How do you then –
navigate that type of environment without an outside regulator or someone who understands the technology well enough or a group of individuals who understand it well enough, because our lawmakers don't, no disrespect to any who are in the vicinity right now, they don't really understand it. So how is there supposed to be some sort of ethical line that's drawn and say, okay, well, we're going to try to solve these problems?
So maybe this is best put as kind of a Europe versus U.S. process. So the European regulators are trying to say, hey, we know this better, we'll all do this. But do they? No. And relatively, that also means that they're severely hampering their own AI creators and startups and companies. I actually already know...
U.S. companies that are shipping much worse versions of their products to Europe, because the regulator is like, well, no, until you've done this particular test, you can't have that product here. And they're like, great, we'll ship the modern, good version to the U.S. and to the rest of the world, and we'll ship the two-year-old version to Europe, right? So this is one of the general problems with this kind of regulatory space. Now, that being said, like
I think that the notion of having kind of a centralized control, like a regulator goes, I know what good quality consumer products are and I'm the tastemaker. That's silly. That's why we have markets. The important thing is to say, hey, do consumers have a good sense of what they're buying, what they're participating in, what their preferences are? And so it's generally speaking much better to make
like, the kind of regulation here, as opposed to "you cannot release this product until it's approved by us," something like: you have to say what the product does and doesn't do on certain kinds of questions, and your auditors have to judge, in a reasonable timeframe, that you're being, you know, honest about it. So you have an honesty coefficient. And if that breaks, we're going to fine you.
But that's the kind of iteration by which you get there. That kind of pattern is, generally speaking, much better. By the way, people say, well, but that pattern can fail. So can every pattern. Yeah. Right? Zero failure is not actually, in fact, a target for anything. So my question, comparing us to Europe: just yesterday I saw a video, I think it was Nicola Mendelsohn from Facebook, talking about how in the time that Europe has not had even one billion-dollar company, the United States has had, I think she said, six or seven trillion-dollar companies. And that's important. But everyone in Europe has health care.
And everyone in Europe, you know, they have certain regulations on their food that make it much healthier for the most part. So where do you kind of like draw the line between, you know, maybe their regulations are just bad.
Or stupid. Look, and we have regulations on our food too, and that's good. Actually, I think an FDA and all the rest is good. Look, I think anyone who argues zero regulation is generally a fool. Anyone who argues that we should live in, like regulators should be determining everything that's happening is also a fool. So it's a question of judgment between those two circumstances. Now, one of the problems is Europe tends to turn the regulatory dial up intensely on everything. Like, for example...
The number of harms you can get with software is not zero, but it's much more limited, much more basic. You eat bad food, you might die. Poisonous food. Yes. Or spoiled in some poisonous way. And I myself, I do think that a more social-driven healthcare system is better in certain ways.
But, by the way, for example, with the way that Europe has operated, like in the last 10 years, the U.S. economy has doubled and the European economy has stayed the same. You have to be building your economy into the future. And, by the way, there will be critics of that. There will be people who complain about it. There will be people who argue this, that, or the other. It's a change. It's a risk.
Yes, that's part of what taking innovation and advancing into the future means. And I think that's an important part of every healthy society. Right. I mean, it's not truly a one-to-one comparison. The countries are much smaller. It's a different setup. It's just not really comparable. But there is one thing I want to sort of come back to, which is...
Around, you know, this future where maybe AI is really relied upon for a lot of decisions, or where algorithms determine things that could really change people's lives. What do you see as a way to safeguard against the dangers of AI in situations that are really life or death? You know, like a guilty-until-proven-innocent situation, certain surveillance situations, or...
You know, they have a hallucination about the wrong person, or something like that. Well, one general principle is you want to have your error rate be at least as good as the current error rate. And there is no zero human error rate. So a classic parallel is, oh, we shouldn't put autonomous vehicles on the road until there's zero chance of error. It's like, well, compared with humans on the road, it's already much better.
But like, for example, if you had all autonomous vehicles, you wouldn't have drunk drivers. You wouldn't have tired drivers. You know, you'd have more sense. You wouldn't have stupid, bad drivers. Yes. All this kind of thing. And by the way, that's not just for the safety of the people in the car. That's safety for the pedestrians. That's safety for all the other people in it. So trying to go to the zero error rate is a mistake. And that's also kind of life and death. And the same thing with health. Same thing with a number of things. So what you want is you want...
a rate that's kind of better than our current circumstances or at least as good and improving. And that's what you set it at. Right. Speaking of the hallucinations, I really feel weird about the anthropomorphizing of AI for some reason. It's...
Are we trying to humanize it? What's the – like why do we do that? Why not just call them errors? Well, human beings tend to anthropomorphize all the time. Like I'm assuming you also know a number of people who name their cars, right? I guess people do that. Yes. So it's like – and this is Georgia. And you're like, okay. Yeah.
Well, I would think that's weird too. So maybe I'm just not one for much anthropomorphosis. I don't know. Is that a word? Anthropomorphification. Yeah, exactly. And obviously the closer it is to like interacting with us the way that, you know,
organic entities, human beings, do, the more natural it becomes. It's precisely like you said: Reid AI, "it" or "he"? And the answer is, it feels weird to say either one. It feels weird to say "it" because you're like, well, we're not trying to be dismissive. And if you say "he," well, it's not human; it's not conscious that way. So this is weird.
And I think this is one of the things that we learn as we develop. I mean, we used to anthropomorphize the mountains, like the volcano; it was, oh, the volcano, she. Yeah. Or countries, she. Yes, yes. Ships, she. Yes, exactly. And I think we learn that as we go, and I think we have to establish new categories. Just like, for example, recognizing intelligent species as valuable fellow species on the planet with us, just not human. Right. Yeah.
So that's an interesting question, because a lot of the examples you give in your book are about how, when cars came along, people were afraid and skeptical. And I appreciate your sort of middle-of-the-road approach, your general outlook on the future, which is that it's not going to be what people predict. It's not flying cars and the Jetsons.
And it's also not going to be this doomsday situation, because the way things actually play out is affected by events. And it's all very, you know, you don't really know where it's going to go. But I do think the difference between this revolution in tech and past ones is that this one really gets at fundamental questions of humanity. It's almost a spiritual question. And, yeah,
you're almost looking at more like the lesson from Frankenstein. So what are your worries? Do you have any worries around crossing that barrier? Because people do think the machines will rule us all. Where do you stand on that? Well, some people think that. Yeah. Right. I don't think that's a very likely future. It's weird when you get to talking about science fiction. They're going to come get you. Yes. Well, that's the, you know. You're the first target if they're going to come get me.
You underestimated us. Yes. Well... Maybe I'm AI. Exactly. Well, no, I just hope they read Superagency first. They already did. Yes. But one of the things that's odd about the AI topic is you get a lot of people thinking in science fiction futures as if...
Like that's the definitive thing. One of the mistakes that even very smart people make in reasoning is they tell themselves a story and they spin up on that. It's kind of like, oh my God, that's a possible truth. Oh my God, I'm really worried about that.
And I think that we're, as human beings, we're bad at, oh, look, there's a lot of uncertainty. We don't even know what the right questions are to ask about that yet. And we're uncomfortable with that, so we tend to tell a story about it. But I could easily tell a story about a future Terminator robot. I could also easily tell a story about a future human elevating robot.
And you're like, okay, which... That's because you're an optimist. No, but... You believe in the good and humanity. But I could tell both, right? Yeah. And so, and then you say, well, what determines it? What do we do right now? It's like, well, the answer is we pay attention to it as we go down the road and see what it is.
And people say, well, what happens when these AIs have superpowers? And I was like, they have superpowers today. Like, go play with ChatGPT. It has a breadth of knowledge no human being on the planet has, no roomful of human beings on the planet has. You go, well, but it's not that alarming. Of course, yes, it's not that alarming. So there's lots of super intelligent AIs that are not that alarming.
And so then you go, okay, well, what other things are we learning? It's like, well, obviously if it ended up engaging in very deceptive, manipulative practices toward human beings in pursuit of its own autonomous goals, and it was self-improving and that kind of stuff, you would be really concerned about that; we'd have to pay attention. So it's like, okay, well, how about we have safety teams at the different AI labs that talk to each other and say, let's not do that.
But will they do that anyway? Well... For their own purposes, if trained by somebody who does want that. Well, it's part of the reason why I think it's important to have the kind of current AI teams that are, you know, kind of situated within Western democracies lead the world in this. And, you know, they are, even with DeepSeek right now. We have very little time left, but I just want to close out with this question because do you believe that humanity is mostly good?
I think humanity likes to think of itself as mostly good. And I think that we should design society to take advantage of the fact that humanity wants to think of itself as mostly good, to help them be mostly good as they go about their daily lives. How do you think that plays into what we're looking at with this AI run-up now? Well, I think that's part of the reason why discussions like this one, and others, matter: to say what's really important is to make AI amplifying of human capability. It's part of the reason I wrote Super Agency, also for technologists, so they think about human agency and think about amplifying that human agency. I think that's really important. And I think that's part of the reason why, as you know from the very first chapter, you know, look, humanity in the chat, right?
really matters. Absolutely. Okay, one real last question. What's something that AI is doing that you would recommend everyone take advantage of that we may not know about? Well, one of the things you can do today with AI that's really cool is take something you really want to understand and put it into ChatGPT or another model and say, explain this to me. And you could say, explain it to me like I'm 12. So I've been taking difficult technical subjects that are beyond my mathematical capability, putting, like, advanced PDF papers into ChatGPT and saying, explain this to me like I'm 12. And I go, okay, I got that. Explain it to me like I'm 18. Okay, I got that. Explain it to me like I'm a professor. Okay, now I'm working on it. Awesome. Well, thank you so much, Reid Hoffman. This was a very enlightening conversation. Always a pleasure. Thanks.
Okay, so Sammy didn't get to ask about the AI porn robots. I think we'll save that for another person. Perhaps that's a question for Sam Altman, right? I feel like that's a person who might know more about the ways that AI will be misused because he tends to run in a circle of 19-year-old intern boys who...
live in basements. So we'll save that for him. But we did learn a lot of really cool stuff. But anyway, Reid Hoffman, decent dude, doing AI, changing my mind about a couple of things, feeling good about some stuff, feeling a little bit better, a little safer. So yeah. Yeah. He put a lot of stuff in perspective. All right. We'll take it. Well, let us know what you think. Write in, tell us your questions about AI. Maybe you're an AI machine learner person and you can help us understand better, like what are the dangers? What are the
What are the energy impacts? I've heard doing one ChatGPT inquiry takes the amount of power in an AA battery. Is that real or am I just...
learning misinformation on TikTok. I don't know. Come and help me out. I can only learn so much myself. Well, either way, I don't think it takes as much now that there's DeepSeek. So that's kind of the big feat there. As long as they don't ban it, we'll see. Well, good interview, Sammy. Thanks for chatting with him and bringing that to us. And until next time, I'm V Spear. And I'm Sammy Sage. And this is American Fever Dream. Good night.