Jason, where are you? Is that a virtual background? Oh, right. That's the place. It did look architecturally familiar to me. It is architecturally significant. We'll bleep out whose house it is. But yes, I am
This is like top three or four of places I like to be a house guest. I'm in rotation right now as a house guest. You're Kato Kaelin-ing through our friend group? Basically, I'm just a great house guest. People like to have breakfast with me. People like having me around. So I just find myself in mansions around the world.
Hey, Nick, do you have an updated picture of Kato Kaelin? Is he still alive and kicking? He's alive for sure. He's got to be 70 or something, right? He's got to be old enough. He was on one of those reality TV shows with Dr. Drew, I think, a couple of years ago. What does Kato Kaelin do? He's a good hang. He's, yeah. There's a lot of these people in LA, you know, like just- They just hang. They kind of hang. They set things up. Oh my God. Oh. Is that current? Kato Kaelin? Yeah. Oh man. He's 64 now.
God almighty. That's incredible. How did he survive? He's got an Instagram account. I'll tell you how he survived. See no evil, hear no evil. Exactly. Because I didn't see nothing. All I know is, hey, listen, man, you gave me a pool house. Snitches get stitches. I didn't see anything. What did you see? Nothing. Rain Man, David Sacks. We open sourced it to the fans and they've just gone crazy with it. Love you besties. Queen of Quinoa.
All right, everybody, welcome to your favorite podcast, the All In Podcast, episode 168. David Sacks, can you believe it? We've made it to 168 episodes with me again, the Rain Man, David Sacks. How are you doing, buddy? Good. Yeah? I heard you got a big talk coming up.
Can you give another speech? Yeah. Yes. In DC? Yes, I'm giving a talk. All right. Get ready for that, all of the GOP fans out there, all your Sacks fans. You got a big keynote coming from Sacks next week. Also with me, of course, the Sultan of Science, formerly known as the Queen of Quinoa. He's got another crop he's growing. David Freeberg, how are you doing? How's the crops? How's the fields? How's life in the fields?
Was that the cold open? Not a cold open. These are intros. Oh, intro. I'm glad you're not producing. There's a reason why I'm the executive producer. Go ahead, Freeberg. How's your crops? How's life in the fields? It's great. Got a great team making progress. Tech is awesome. Having a lot of fun. It's great. Back in the CEO seat, huh? And of course, the chairman dictator who's becoming completely insufferable because he invested in a company seven years ago that is absolutely crushing right now called Groq. We talked about it last week.
You're back. You're back, Chamath. Everybody's talking about Groq. You're on cloud nine, it seems. Groq, Groq, Groq. Groq, Groq, Groq. Groq, Groq, Groq. Bitcoin, Bitcoin, Bitcoin. Groq, Groq. I mean, it is interesting how your book, just literally whatever Chamath's book is, that's his mood. It's just Bitcoin at 60, Groq. Well, so the honesty level is going to be through the roof now.
Oh, right. He's going to be running for governor and buying the Hamptons again. The truth bombs that Chamath is going to drop now. There is nothing like peak-ZIRP Chamath. I mean, are we going back? Are we back right now? I mean, on paper, what is your stake in Groq already worth? We're not talking about that. That's so uncouth. It's so uncouth. But he's buying the Hamptons and he's probably going to buy... Wait, wait, wait. Stop. What?
When has it been uncouth on this podcast to talk about wins or assets or any of this stuff? These are not wins yet. I agree with that. It's not a win. You can never count your chickens before they hatch. That's absolutely right. I am not counting any chickens. Don't book the win. Yes. There's a lot of hard work to do. There's a lot more people to sell products to, to build products. By the way, developers on Groq, just this past week, in the queue,
The wait list tripled. Now there's almost 10,000 developers. That's a big deal. Oh, that's crazy. That's a great sign. I think the most important North Star metric for these developer platforms is basically that. As goes the developers, so goes the platform.
You can kind of count on some amount of pull through because some of those developers just statistically will land really important products. They'll consume more APIs. They'll just consume more stuff. It's just all goodness. Because if you have 9,000 or 10,000 people just, again, waiting in line,
Once those guys get in, somebody is going to create something magical. That's the great part of having a platform. It's beautiful. People just take it and they build on it while you're sleeping. Something amazing can just get built and you get some amount of the credit for that. The other thing that's really interesting is the number of developers is increasing. I've seen two or three founders recently, just the last couple of weeks,
who were previously idea founders and who have now taught themselves to code. So, you know, there's what, 1%? Whatever number of millions of developers there are in the world, I think it's going to, like, double or triple. So the pool of people who are writing code, I think, is about to grow very meaningfully. I mean, you guys saw the release from the White House where they were like, we don't want you to code in C and C++ anymore. Yeah.
Yeah, that was very interesting. I mean, they were talking about memory safety bugs and, like, these languages are, I guess, the source of most of them. Therefore, the White House wants you to learn to develop in memory-safe languages. Oh, great. Awesome. Yeah. So why are they involving themselves? Like they got nothing better to do?
I mean, they have an opinion on preventing security vulnerabilities. All right, listen, the top issue this week is the same issue as last week. Google's Gemini DEI black eye continues. We covered this woke AI disaster last week. It was kind of funny. I was watching CNBC and they had a hard time describing the problem.
The woman just, I guess, didn't want to call it what it is. It's a racist AI. You type in text and it gives you the opposite or just culturally insane responses. So if you put in, you want a picture of George Washington from Google's Gemini or Sergey Brin, you might get back like a Benetton style diversity ad with like George Washington being black or Sergey Brin being Asian, etc.,
And so this has caused a bit of a kerfuffle here in the industry to say the least. The stock is down 5% since we talked about it last week.
And Sundar sent a memo to the Gemini team. Of course, when they write these memos to a team, it's written to the entire world because you know it's going to get leaked. And so you're writing it as such. It might as well be a press release. I'll give two quick quotes here and then I'll throw it to the besties because there's so many questions that we have to address. Quote number one, I know that some of its responses (referring to Gemini) have offended our users and shown bias.
To be clear, that's completely unacceptable and we got it wrong. And to be clear, it wouldn't show white people, especially ones like George Washington or just somebody who is obviously a Caucasian. And so the next quote, we'll be driving a clear set of actions, including structural changes. To me, structural changes means we're going to lay off
a bunch of people and we're going to get rid of the DEI group. That's a newfound-motivation RIF quote in my mind. Freeberg, will Sundar survive? And is Google too broken to fix? I'm just going to ask you, since you worked there, I don't mean to make it uncomfortable, but what are the chances Sundar survives this? And what are the chances that Google can be fixed and produce great products quickly that delight users?
I don't know how to answer whether Sundar will survive, because it's kind of an idiosyncratic organization. There are a couple of founders who have super voting shares, and it ultimately comes down to their decision and the direction they want to take the company. And I have no insights into what they individually think. So frankly, I've spoken to a lot of folks who are investors in Google over the last week, and a lot of folks are just deeply frustrated
and angry on a number of fronts. Really, there are three businesses inside of Google. There's search and ads, there's YouTube, and there's cloud, and the rest of it is kind of noise. And to give you a sense of how big these businesses are: YouTube did about 10 billion a quarter, cloud did roughly 10 billion a quarter, the devices and subscriptions business does about 10 billion a quarter, and then search does about 50 billion a quarter. And the margin on search is much higher than any of those other businesses.
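For a quick sense of scale, here is a back-of-envelope tally in Python of the rough quarterly figures quoted above; these are the conversational approximations from the discussion, not Google's reported segment numbers.

```python
# Rough quarterly revenue figures as quoted in the conversation (approximate, $B).
quarterly_revenue_bn = {
    "search_and_ads": 50,
    "youtube": 10,
    "cloud": 10,
    "devices_and_subscriptions": 10,
}

total = sum(quarterly_revenue_bn.values())  # roughly 80 ($B per quarter)

for segment, rev in quarterly_revenue_bn.items():
    print(f"{segment}: ~${rev}B/quarter ({rev / total:.0%} of revenue)")

# Search is ~60%+ of revenue on these numbers, and it carries the highest margin,
# which is why it dominates the profit discussion that follows.
```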
And so the search margin, the ad revenue on search is, you know, probably 100% of the true operating profit of the business. So the real threat to Google is more, are they in a position to maintain their search monopoly or maintain the chunk of profits that drive the business under the threat of AI? Are they adapting?
and less so about the anger around woke and DEI, because most of the investors I spoke with aren't angry about the woke DEI search engine. They're angry about the fact that such a blunder happened, and that it indicates that Google may not be able to compete effectively and isn't organized to compete effectively in AI, just from a consumer competitiveness perspective. So, you know, investors are banging the table. And in the past, we saw this
with meta, I think it was towards the end of 22. If you guys remember, and it was a similar situation, investors were like, Why are you investing in VR and AR? This is crazy. Why do you have all these people that are getting overpaid? And everyone started to write off the stock and the stock took a big nosedive for a period of time, just like Google's is right now. And then the changes came. And much like Google, there's an individual with super voting shares who basically said, you know what,
I am going to step in and we're going to make these changes and we're going to fix this organization and we're going to right size and we're going to focus on the product winning. And since then, Meta stock is up a tremendous amount. 5x since then. Google shares are down about 10% over the past two weeks. By the way, I was one of the stupid people to sell Meta around that time. And your thinking was that they just...
can't get out of their own way. And the God King is... It's another, yeah, it's exactly, it's another one of these idiosyncratic problems. You don't know what this individual is thinking and what he is individually going to do. The point I'm making is that at Google now, something has to give because the noise is so loud. The board is hearing this left and right. Investors are
banging the table. Analysts are banging the table. And I'll tell you a couple anecdotes. Internally, employees are now banging the table. A story I heard this week was that someone stood up in a meeting
And said, a couple of weeks ago, if I had stood up in this meeting and said, we can't be showing black people in the image generation at the rate that we're showing them, I would have been cast as a racist. And I didn't have permission to do that inside of the organization. But today, everyone's like, you're right, you would not have been able to stand up in the organization and say that.
But the tenor has changed inside of Google with a lot of the employees that I've spoken with who are now saying, I can stand up and I can say that this group called Responsible AI has too much power. And it's a one-sided asynchronous problem where they get to come in and say, we need to change this. And if you step up and disagree with them, you are deemed a racist. You are deemed, you know,
culturally inappropriate. So it's easier to keep your head down. It's easier to keep your head down in this kind of a circumstance and keep collecting your RSUs. And all of a sudden you wake up one day and you see the blunder that happened last week. Yeah. And so now internally people are waking up and saying we need to change this. And I heard that that's made its way up to the higher ranks at Google and they're very actively, you know. So there may be a moment here where Google stock, which currently
is trading at just 17 times 2025 consensus earnings, which is cheaper than all the other big tech companies by far. And it's still growing. Core business is growing. Cloud is growing 20%. Search is growing 15%. All these other businesses, YouTube's growing 20% a year. It's a growth business that's very profitable.
And it's trading at a very cheap discount. So there's, you know, the bull case is now's a great time to buy because it's so cheap. And there could be this moment where, you know, you see some of the changes that are needed internally to get the AI products to where they need to be to maintain the lead that is inherent because of search. So that would mean, Sax, it would require, because we're using Zuckerberg as an example, we could also bring Twitter and X into this,
It would require founder authority to come in there and make these changes. It would require Larry and Sergey, who have the super voting shares, to come in there and say, hey, this is all changing. Enough of this. Do you think there's any chance of that happening? Or is Google just too broken to fix and they're going to just lose this opportunity? And it's Microsoft under Steve Ballmer missing mobile and cloud. Well, to quote Jefferson, the tree of liberty must be refreshed from time to time with the blood of patriots and tyrants.
And I think there's something analogous here. I mean, if you're going to refresh this company, you have to go in and make major cuts, not just to rank and file, but to leadership who doesn't get it. And that's the only way it's going to get fixed. Do I think that Larry and Sergey are going to come in and pull an Elon and go deep and figure out which 50% or 20% of the company is actually good and doing their jobs? Probably not. But is it possible that
They could make a leadership change. Yeah, it's possible. Probable? I don't know. I mean, I've heard that the company is the way it is because they like the way it is. I mean, that they're part of the problem. In effect, they don't see the problem, or at least they haven't until now. Maybe they'll get the wake-up call. What do you think, Chamath, watching all this happen? I think it's basically a small cadre of a thousand people
that have built literally the most singular, best business model and monopoly ever to be created on the internet, and a whole bunch of other people that have totally transformed this organization, as Sacks said, into a vehicle and a platform to reflect their views. And so I'm not a shareholder of Google. And outside of
the tools I use, I don't think I really have much voting power. So I don't. And I have so many alternatives now. So I actually think, like, I don't really care that much, I guess, is the point. I think that the employees should care and the shareholders should care and they should come together and vote. And I think Sacks is right. I think the company is the way it is because they've chosen to be that way.
And I think Freeberg is right, which is that there's a small group of people who have been protecting and breathing life into the single greatest business ever built, ever in the history of business. But now we need to have a confrontation amongst all of these three different groups of people, and they need to make some decisions. Let me put a few data points
in play here, J. Cal. This all speaks to the problems being, let's say, deliberate as opposed to a glitch or an accident. So first you have Sundar's letter to the company, which there was a very interesting tweet by Lulu Cheng-Meservey, who does comms at Activision, and she writes a blog called Flack.
She said, quote, the obfuscation. She's a comms expert just for the audience. She's a comms expert, yeah. So she kind of graded Sundar's letter and it was scorching. She said the obfuscation, lack of clarity and fundamental failure to grasp the problem are due to a failure of leadership. A poorly written email is just the means through which that failure is revealed. So that was one reaction.
Marc Andreessen had a series of posts indicating that the AI was programmed to be this way. Again, it's not like a bug, it's more of a feature, and that it's not an accident. This is happening because of...
Because they chose to be this way. And in fact, he goes further and says that these companies are lobbying as a group with great intensity to establish a government protected cartel to lock in their shared agenda and corrupt products for decades to come. Wow. I mean, again, that's a really scorching critique here. And then, you know, the question I would ask is, who's been fired for this?
I mean, imagine if, I don't know, one of Elon's products had a launch that went this badly. Do you think no heads would roll? No heads have rolled. And so you have to kind of wonder, well...
And this team's job is to enforce those principles that they've defined, which we brought up last week. And so they go in and they're like, well, you know, if you just render an image of a software engineer and all the images are just white guys in hoodies, that's inappropriate because there are plenty of non-white people. You need to introduce diversity. So then the programmers say, okay, we'll go ahead and overweight the model and make sure that there's diversity. And you can't say no, because otherwise you are deemed...
a racist. So who's the individual that's responsible given that structural circumstance that exists within the organization? It's more of a cultural and structural problem to me than, you know, one, I guess ultimately there's leadership that's lacking, but. Actually, I agree with that to some degree. Let me describe how I think it works. So what I've heard about Google is that every meeting above a certain size has a DEI person in it. I mean, literally. So it's kind of like in the days of the Soviet Union,
Their military, the Red Army, would have in every division or unit, there would be a commander or lieutenant and there'd be a commissar. OK, and the commander reported up the chain and the commissar reported to the party and the commissar would just quietly take notes in all of the meetings of the unit. And if the commissar didn't like what the lieutenant was doing, lieutenant would be taken out and shot.
Okay, now that's kind of a dramatic example. But the point is that in every large meeting at Google, you've got this DEI commissar who's like quietly taking notes. I'll point out that at Google, the only person we can ever remember to get fired was James Damore, who was an engineer who complained about the political bias at Google.
In other words, he was a whistleblower about the very problem that's now manifested. He's the only person you can think of to get fired at Google. Google was mocked on the show Silicon Valley. It was called Hooli, but remember this? Yeah, you would go to the roof and you would rest and vest, and nobody ever got fired. It's a company where it's impossible to get fired unless you blow the whistle on the political problem. And I think that if you're sitting...
in those meetings with a DEI commissar present, and you know or you have suspicions that the AI product is not working right, are you really going to speak up and risk the fate of a James Damore? Of course you're not. So you're right, Freeberg, that it's a structural problem, that there's probably a low-grade fear
That's pervasive through the organization. And no one's willing to say the emperor wears no clothes. The woke emperor wears no clothes because they don't want to. Why stick your neck out? You may not know you're going to get fired, but you know you're taking a chance. Three years ago on Twitter, nobody would talk about certain topics related to DEI because it was in the aftermath of George Floyd and people just did not feel comfortable even calling out something minor that was unfair. No, I would say they didn't feel comfortable until Bill Ackman
broke the seal on this just like a month ago, where he really went after DEI. I mean, because remember, part of the reaction to Bill Ackman was like, wow, he's really going there. Even though everyone basically agreed with him and knew that he was making a correct point, people were still afraid to, again, call out this woke emperor.
Yeah. And the way this works, for people who are not super aware: you have a language model and you write a bunch of code, but then there are guardrails put in, and then there are red teams and people who test it. And so at the very least, even if you were trying to do something with great intent, as you pointed out, Freeberg, hey, you know, if we pull up a doctor, it doesn't necessarily always have to be a white guy. There are other people who are doctors in the world.
Somebody should have caught it in testing. They probably did, Jason, and they didn't report it. Who wants to report that problem? That's an interesting rub, yeah. They didn't miss it. They didn't miss it. They didn't have the guts to report it. You have to build a... Somebody built a reward model or people... A reward model was built for the reinforcement learning from all the humans and their feedback. So these decisions were explicit. The question that you guys are framing is,
Was this, though, a case where it was explicitly imposed on people and people felt a fear of pushing back? Or did they just agree and say, this is a great decision and these rewards make a ton of sense? And the whole point is, I think what this has highlighted is that that's the truth Google needs to figure out.
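To make the mechanism being debated concrete, here is a minimal, purely illustrative sketch of the two levers mentioned in this exchange, a prompt-rewriting guardrail layer and an RLHF-style reward model. Every function name, weight, and policy term below is hypothetical; this is not Google's actual pipeline, just a sketch of how such choices end up being explicit, written-down decisions.

```python
# Illustrative sketch only; all names, weights, and policy terms are hypothetical.

def guardrail_rewrite(prompt: str, policy_terms: list[str]) -> str:
    """A guardrail layer can silently append policy terms to a user's prompt
    before it reaches the underlying image or text model."""
    return prompt + " " + " ".join(policy_terms)

def reward(response: str, preference_weights: dict[str, float]) -> float:
    """A toy reward model: score a candidate by summing the weights of the
    attributes that raters (or policy authors) chose to reward."""
    return sum(w for attr, w in preference_weights.items() if attr in response)

def pick_best(candidates: list[str], preference_weights: dict[str, float]) -> str:
    """In RLHF-style training or re-ranking, the highest-reward candidate wins,
    so whatever the reward encodes is what the system learns to produce."""
    return max(candidates, key=lambda c: reward(c, preference_weights))

if __name__ == "__main__":
    weights = {"attribute_a": 1.0, "attribute_b": -0.5}  # explicit, written-down choices
    print(guardrail_rewrite("portrait of a software engineer", ["<policy terms>"]))
    print(pick_best(["draft with attribute_a", "draft with attribute_b"], weights))
```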
And they need to figure it out quickly. Because what is going to happen now is, I think we talked about this a year ago, all Google needs to see is 300 to 500 basis points of change, and the market cap of this company is going to get cut in half.
Okay, because there is only one way to go when you have 92% share of a market, and that is down. And so the setup for the stock is now that people are looking at this saying, okay, if I see 92% go to 91, or 90 or 89, that's all that has to happen. And people will say the trend is to 50%. And you will price this company at a fraction of what it's worth today.
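As a rough sketch of the dynamic Chamath is describing, here is a toy calculation with made-up revenue, margin, and multiple assumptions: a few hundred basis points of share loss, extrapolated as a trend, plus multiple compression, is enough to roughly halve the implied value.

```python
# Toy numbers only: hypothetical revenue, margin, and multiples to illustrate
# how a small share loss plus multiple compression can halve a market cap.

current_share = 0.92
search_revenue_bn = 175.0   # hypothetical annual search revenue, $B
assumed_margin = 0.35       # hypothetical operating margin
multiple_stable = 20        # hypothetical multiple for a stable monopoly
multiple_declining = 10     # hypothetical multiple once share is seen trending down

def implied_value(share: float, multiple: float) -> float:
    earnings = search_revenue_bn * (share / current_share) * assumed_margin
    return earnings * multiple

value_today = implied_value(0.92, multiple_stable)
value_after = implied_value(0.89, multiple_declining)  # a ~300 bps slide
print(f"~${value_today:.0f}B today vs ~${value_after:.0f}B after the slide")
print(f"Implied drawdown: {1 - value_after / value_today:.0%}")  # roughly half
```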
So I think it's really critical. This is the moment that the senior leadership that really understands business can separate it from politics and decide: is this a thing of fear where there's one rogue group that's run amok? Or is this what we believe? Because if it's the latter, you just got to go with it. There's nothing you can do, because you're not going to replace 300,000 employees or however many Google has. But if it's the former, Sacks is right. Some heads need to roll and they need to tell the marketplace, right,
that this was a mistake where a group of rogue employees got way too much power. We were asleep at the job, but now we're awake, they're fired. So they have two choices. But in all of these choices, what I'm telling you on the dispassionate market side is if you see perplexity or anybody else
clip off 50 basis points or 100 basis points of share in search, this thing is going straight down by 50%. By the way, unless cloud takes off, right? Because the other hedge that Google has is GCP.
And the tooling that they've built in GCP can enable and support and be integral to a lot of alternatives and competitors to what search might ultimately become. So I think there's also a play here in thinking about some of the hedges that Google has implicitly built into the business. I'll just say one more thing on this structural point.
When I started speaking to people internally about how this happened from a product perspective, it felt a lot like when you talk to a lawyer. As you guys know, you're making a business decision and there's some risk. I mean, think about Travis building Uber, and the lawyer will say, it is illegal, you need a medallion license to do what you're doing in this city. And Travis is like, well, you know what? I'm going to take that risk. Brian Chesky and Airbnb, he said, you know what? I'm going to take that risk.
And the problem is that like a lawyer's job is to tell you what you can't do, to identify all the peril of your action.
And then you as an executive or a business leader or a manager, your job is supposed to be to take that as one piece of input, one piece of data, that you then use to make the informed business decision about what's the right way to build this product, what's the right way to build this company, and I'll take on some risk. And I think one of the challenges structurally inside of Google is that product leaders and other folks were never enabled to make that decision. There were these kind of policing-type organizations that were allowed to come in and veto things or make unilateral decisions, whether it's on weighting, or some training data set, or output, and that ends up killing the opportunity for the smart business leader to say, that doesn't make sense, we can't do that. This is where having founders in the organization every day changes everything because
The founders would say, hey, we're here to index the world's information. We're here to present the world's information. We're not here to interpret it. We're not here to win hearts and minds. We're not here for a political agenda. But there's a group of people there who, it's apparent, think that they're there to change the world, that Google is a vehicle for them to make social changes in the world. And, you know, that's art, and there are other ways to do that. Paradoxically, like Hamilton making the founding fathers diverse, doing art like that can win Tonys. In that context, you can win tons of awards and acclaim and make incredible, beautiful art. But on the other side, people are not looking for Google to do that. They're looking for Google to give them the answer and the data. And then these people are thinking it's our job to actually interpret the data for you, as we kind of touched on last week. So this is, I think, look, I think you guys are really close to the bullseye here, but I would just refine slightly what you're saying.
So here's how I think it happens. I don't think this is a rogue group. I actually think this is a highly empowered group within Google. I do agree with you. Yeah. Yeah. I think what happens is Sundar says from the top that we're going to be on the forefront of diversity and inclusion because he personally believes that. And that's the way the social winds are blowing. And they think it's good for the company on some level. Okay. To implement that mantra or that platitude really,
HR hires a bunch of DEI experts. Okay. Lots of them. I think the company has like this huge HR department.
And a lot of those people are basically fanatics. I mean, they're... Their chosen career. Yeah, they're trained Marxists, basically. And so they're the commissars. And they're sitting in all these meetings. And again, they're the ones taking notes. And they're the ones who push the company in a certain direction. But you have to then go back to senior leadership and say it's their fault for letting this happen. Because they should have made a course correction. They should have realized what was happening. They should have realized that through a combination of
bias, and through this sort of like overly empowered HR team who are pulling a legal card. I mean, Freeberg's right about that. They're saying that our point of view is the law, which isn't true, but they're basically pulling the legal card and pushing the whole organization in a certain direction. It was up to leadership to realize what was going on and make a correction. And the thing that you're seeing now is that in the face of what's happened,
The statement that we got really doesn't cut it. I wonder if they used the word problematic. Yeah, exactly. They're describing it as a glitch or a bug. It's not. It's a much deeper problem. And so, therefore, it gives no confidence that...
the solution is going to be pursued in as comprehensive a manner as necessary. Yeah, I think that memo is a tip-off that there is a 20,000, 30,000 person RIF and a reorganization coming, and they're just going to keep cutting the DEI group. And Meta and Google have already cut the DEI groups a bit. I don't know if it's as much about cutting as it is about empowering. If you said to the product leaders, you can make the decision,
DEI and responsible AI and whatever other groups, they're going to inform you on their point of view, but they're not going to tell you what your point of view needs to be. Yeah, but that's the idea of pulling the legal card. They're saying this is the law. They're saying that if you... Well, I'm saying that's what needs to change, right? If you don't do things the way we tell you, Google is going to be hit with a civil rights lawsuit. That's what they're saying. Oh, well, I mean, let's see if leadership can overcome that threat. But I think that's exactly the threat that...
is keeping the organization from resolving this problem, or could keep the organization. I think it's that, but I also wouldn't underestimate the bias. So, you know, everybody there, not, I shouldn't say everybody, but it's a very liberal culture, right? It's a monoculture. So when you're swimming in that much bias, it's hard to see, right? Yeah. When everybody is left of center, I mean, just look at something like political contributions, right? It's 90-something percent
are democratic. I mean, like high 90s. So it's just a liberal bubble.
basically. And so when everybody's liberal, it's very hard to see when, you know, the results are way off center as well. Also, if you give people this job, like, what are they going to do every day, Freeberg, if they've been given the job, DEI, and they've been given the job to do trust and safety? They're looking to fill their time and make an impact. You've got to, I think, maybe have fewer of these people. I honestly think it's about cutting people. I don't know if you guys saw it, you know, Shaun Maguire is a partner at Sequoia and he used to be a
member of the team at GV back when it was called Google Ventures. And he did a Twitter post. Did you guys see this? Yeah. Where he said, when he was at Google, he was told by his manager, I'm not really supposed to tell you this, it could get me fired, but you're one of the highest performing people here, and I can't promote you right now because I have a quota. My hands are tied. You'll get the next slot. Please be patient. I'm really sorry. It ultimately led Shaun to leave Google, and the rest is history. He's a partner at Sequoia now. You're saying because he's a white male.
Because he's a white male. Yeah. And so the, you know, I'll be honest. Yeah. No, I had the same thing happen to me at AOL. When I was at AOL, and Chamath was there at the same time, the whole organization was white men from Virginia, whatever.
And they gave me an SVP title. And I said, well, I want the EVP one. And they're like, you know what, we can't have any more white males in that position right now. We have to get some more women and people of color in that position. And they told me, you'll have the same comp and bonus, but we just can't give you the title. Chamath, do you think these race and gender driven quotas make sense for HR departments to try and enforce upon managers
to increase diversity? I mean, is that a good objective for an organization to have? This is like a topic a lot of people I've heard kind of flip back and forth on. There is no company where I have majority control where I have an HR department. You don't have an HR department? No. Say more. Why? I think that it's very important. You should go to a very respected lawyer at a third-party firm, someone very visible, an Eric Holder-type person, and you should work with that law firm and retain them so that you have
an escape valve, if there are any kind of serious issues that need to be escalated, so that you can get them into the hands of a dispassionate third party person who can then appropriately inform the board, the CEO and investigate. So that covers sort of all the bad things that can happen. Then there are buckets of I think good things. One important set of things is around benefits. My perspective is that
The team should build their own benefits package that they want. They should understand the P&L of the company that they work for. They should be given a budget. And in my companies, again, what I do is I allow committees to form and I ask those committees to be diverse. But what I mean by diverse is I want somebody who has a sick partner. I want somebody who
has a family, I want somebody who's young and single, so that the diversity of benefits reflects what all these people need. They go and talk to folks, they come back, and they choose on behalf of the whole company, and there's a voting mechanism. Then when it comes to hiring, I think what has to happen is that the hiring is owned by the person for whom that new hire will end up working.
It's the head of engineering. It's the head of sales. Those are the people that should be running the hiring processes. I don't like to outsource it to recruiters. I don't like to outsource it to HR. So when you strip all of these jobs away, HR doesn't have a role.
And what is left over is the very dark part of HR in most organizations, which is the police person, the policeman, right? What did Sacks call it? The commissar. That is why everybody hates HR. I've never met a company where that is a successful role over long periods of time. They are this conflict-creating entity inside of an organization that slows organizations down. So
That allows me to empower individuals to actually design the benefits that they want, to hire the team that they want. And I let them understand the P&L in a very clear, transparent way. And the results are what the results are. You want more bonuses, hire better people. And then what I do is at the end of every year, I talk about this distribution of talent and I make sure we are identifying the bottom five or 10%. They need to be managed up or they must be fired every year.
And it does not matter how big the company is. You must manage up or out the bottom five to 10%. And in some cases, I'm talking about one person because it's a small company. And in other cases, I'm talking about 30 or 40 people. So just hire the best person for the job. No, you eliminate HR. Yeah. You empower the team, allow them to make their own decisions, measure it, hold them accountable.
Right. So if you have a salesperson who just hires their, I don't know, 15 sorority sisters or fraternity brothers, whatever it is, and it just becomes not a diverse group,
Well, hold on. It is what it is. That still may be diverse. This is my point. Like the thing is that like there's different ways to sell. There's different sales motions. There's the sort of like elephant hunting kind of sales model. There's the dialing for dollars thing. So even those 15 sorority sisters, the way you describe it could actually be diverse. My point is that kind of superficial marking based on immutable traits will not yield a great organization. Instead, it's you're empowered to hire whomever you want.
Just know that at the end of the year, we're going to measure them, your bonus, their bonus, the company's performance. So it's just all performance. Yeah. Performance. I mean, Frank Slootman said this, and he got barbecued at some point. He said, I don't have time to do this diversity stuff. By the way, in my companies, like, for example, like when you do SOC 2 compliance, you have to generate these reports, okay? Especially for some of our customers, some of our companies that actually show the diversity of our team.
And when we measure them on the immutable traits, whatever they represent as their gender, whatever they represent as other dimensions, we are incredibly, incredibly diverse anyways. But the way that I- I've never seen a startup that isn't diverse. Silicon Valley attracts such a diverse mix of people. What do we know about success factors for startups? Number one, the ones that are successful have a culture of meritocracy.
Number two, they're non-bureaucratic and they don't have too much G&A, basically overhead in the company. Now, all those things...
are contradicted by having a large HR team or especially a DEI organization, right? They add bureaucracy, they add overhead. And what's that? PIPs. Performance improvement plans. They cut into... Well, PIPs are great. I PIP people all the time. Does it work? I mean, I know some people are just like, these PIPs don't work. If you're an underperforming member of the team and you've been identified in the bottom five or 10%, we have a responsibility as management to coach you up
or to get you to an organization where you're not in the bottom 5% or 10%. That's the right thing to do for people. You do that by being very transparent and writing it down. You are not good at these things. You're underperforming in these things. Fix them or you will not be here. That's a very...
fair thing to tell somebody. Yeah, look, how you want to implement your own meritocracy, I think there's different ways for founders to do that. The point is you want these companies to be a meritocracy. You don't want advancement in the company to be based on factors other than skill, merit, hard work, performance, things like that. We know that's a bad, bad path to go on. I think a lot of founders don't understand
that DEI is not something they have to do. They don't have to have a DEI organization. This has somehow become a thing. It's not required. And I think people are realizing, like, why would you do that? Why would you create this large bureaucracy in the company that undercuts the meritocracy, that adds a lot of costs, and that slows you down? None of those things will help your company. What you need to have, I think Chamath laid out some really good best practices. You should have an outside law firm
that, you could call it HR law, but I would just call it employment law. I would say a non-ideological partner, an expert in employment law, who sets up your company correctly and to whom you can take a complaint. If an HR complaint
gets raised through the chain of your company, you do have to take it very seriously, and there has to be a proper investigation. And that's probably best handled by an outside lawyer. So get that outside lawyer. And then, by the way, there's never a case where your coworker should know the intimate details of any of those. And that's what also creates this
horribly rotten culture in HR where these people act like gatekeepers of secret information, salary information, bonus information, and then all of the other things. Oh, you know, did you know this person did this with
It's terrible to have that inside of a company. It should go to a dispassionate third-party person whose job it is to maintain confidentiality and discretion while investigating the truth. Yeah, when HR is too powerful in a company, that's a red flag. At the end of the day, HR should be an administrative function. Their job should be to get people onboarded, sign their offer letter and their
confidentiality and invention assignment agreement and set them up in payroll and get them benefits and that kind of stuff. It should fundamentally be an administrative function.
And if it starts getting more powerful than that, it means there's been a usurpation. You don't even need people for that. Like you have software that does that now and it's all automated. You know, you send them a link, you go to your favorite HR site and boom, it's done. Yeah, I don't think you need a lot of people doing this. Anything you want to add to this discussion, Freeberg, as we move on to the next topic? I think that unfortunately, the term diversity...
equity, and inclusion has been captured along, as Chamath points out, a single vector, which is this immutable trait of your racial identity or gender. And I think the more important aspect for the success of a team, for the success of an organization, is to find diversity in people that comes from different backgrounds, different experiences, different ways of thinking. And so I'm not a huge fan of race-based metrics or gender-based metrics driving decisions. I'm generally more oriented around being blind to those variables and focusing much more on the variables that can actually influence the outcome of the organization. Yeah, one of the great paradoxes of this as well is we are moving to a much more multicultural, mixed-race society anyway. People filling out forms, a lot of our kids are
you know, could pick two or three of the different boxes on a DEI form. It's not going to make much of a difference in the coming decades. All right, issue two: Google splashes cash on licensing deals for training data, and it's now becoming a bit of a pattern. Google, we talked about, I think just last week, had done a deal with Reddit for $60 million. That's reportedly per year. Today,
Stack Overflow is now licensing its OverflowAPI to train Gemini. No word on the contract value. I did get some back channel that it's a multi-year, non-exclusive deal. According to Reddit's S1, they have already closed 200 million worth of AI licensing deals over the next two to three years. So maybe it's going to be 75 million a year, 100 million a year. Who knows how big that business can get? We're going to talk about the S1 from Reddit in just a moment.
And this is on top of all the other licensing deals that have occurred. Axel Springer and OpenAI, remember that one? And OpenAI is in talks reportedly with CNN, Fox, and Time to license their content. That comes on the heels of that blockbuster New York Times OpenAI lawsuit that we talked about, I don't know, 10 episodes ago. And OpenAI's lawyers, just to give you a little update on it, are trying to get that case dismissed, saying that the New York Times hacked ChatGPT to get certain results.
And that the New York Times took tens of thousands of tries to generate the results, yada, yada. And they said, here's the quote from the filing from OpenAI: the allegations in the Times complaint do not meet its famously rigorous journalistic standards. Both OpenAI and Google's Gemini have been furiously guardrailing their systems, as we talked about as well, to stop
copyright infringement, like trying to make pictures of Darth Vader and that kind of stuff. Chamath, you've talked a little bit about your TAC 2.0 framework. Maybe you could talk about what you see happening here with all these licensing deals and what it means for startups in the AI space. Well, just to maybe catch everybody up, TAC is this thing called traffic acquisition cost. And you can see it most importantly in Google's
quarterly releases, which is that what they realized very early on at the beginning of the search wars in the early 2000s is that they could pay people to offer Google search
people would use it, and then it would generate so much money that they could give them a huge rev share, and it would still make money. So I remember, Jason, when you and I were at AOL, this is the first time I met Omid. We were flying back to California. We were both in Dulles at the same time in like 2003 or 2004. And that's when Omid did the first big search deal between AOL and Google. And it was, I want to say, hundreds of millions of dollars back then.
where Google pays you upfront, you have to syndicate Google search, and then they clean it up on the back end with some kind of rev share. So what's incredible is that that process has escalated to a point now where, for example, on the iPhone, it's somewhere between 18 and 20-odd billion dollars a year that Google now pays Apple. So that's the traffic acquisition cost 1.0 rule, TAC 1.0. And
I just said that we should call this TAC 2.0, except now what Google is doing is, instead of paying for search, they're actually paying for your data and saying, give it to me so that I can train my models and make it better. And I think that that's an incredible thing. It's both very smart for Google, but also it's great for these businesses, because it's an extremely high-margin
thing to do when you have a really good corpus of data that is very unique. So in the case of Reddit, that $60 million deal, I didn't, I looked through the S1 to try to figure out whether it was a multi-year deal or not. It wasn't totally clear. But the point is that, you know, Google's paying Reddit 60 million bucks. And Jason, you just said that they're, they've done a couple more of these things. That's incredible. This TAC 2.0 thing is amazing. So if you're an entrepreneur, building a website or building an app that has really unique training data, or really unique data,
You'll be able to license and sell that, and that'll be an incremental revenue stream to everything you do in the near future. That's amazing. That's what TAC 2.0 is. It's going to be incredible for the entire content and community-based industries. Do you think this could sustain content creation where advertising has become very difficult, Sacks? I guess my question to Chamath would be, do you think this is going to...
be available to small websites? There'll somehow be some sort of program, or, because, I mean, Reddit is one of the biggest sources of content on the entire web, right? It's like a top five traffic site with
tons and tons of user generated content. Yes, I think like a small publication would be able to make these types of deals. Yeah. And in fact, I think like if you go back to search 1.0, that's exactly what these small companies were able to do, which was in a more automated way, they were able to basically partner. And in that example, what they would say is here, Google,
Why don't you just run your ads on our page? Right? And that was sort of in that web 1.0 world. So Google had solutions for the largest companies on the internet all the way to the smallest. And in this TAC 2.0 world, I do think that it works in that way as well. The problem in that world, if it's a small website that says, here's my training data: the question is, how do you attribute how much incremental value a model derived from it versus something else?
And so I think that that part has to get figured out. And so, you know, what Google will be able to pay you will probably be pretty de minimis if you're small right now. So, you know, to your point, if the upper bound is 60 million for Reddit, then the average website is going to get a few hundred bucks. But that still may be a good start. And when Google or somebody else figures out how to monetize this stuff, where then they can give you back some way to make money, I think that there's real monetization here. I really do.
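As a back-of-envelope on the TAC 1.0 versus TAC 2.0 economics being described, here is a small sketch with illustrative numbers only; the 70 to 80 percent payout and the roughly $200 million Reddit figure come up elsewhere in this discussion, and everything else is an assumption.

```python
# TAC 1.0: pay for distribution, recoup via a rev share on the ad revenue that
# the syndicated traffic generates. TAC 2.0: pay a licensing fee for training data.
# All numbers here are illustrative.

def tac1_kept(ad_revenue_bn: float, rev_share_to_partner: float) -> float:
    """What the search provider keeps after paying the traffic owner."""
    return ad_revenue_bn * (1 - rev_share_to_partner)

# e.g. $10B/quarter of syndicated ad revenue at a 75% payout leaves ~$2.5B/quarter.
print(f"TAC 1.0 kept: ~${tac1_kept(10.0, 0.75):.1f}B per quarter")

# TAC 2.0 is a flat-ish fee independent of downstream ad revenue:
# ~$200M over two to three years works out to roughly $67-100M per year.
for years in (2, 3):
    print(f"TAC 2.0, {years}-year deal: ~${200 / years:.0f}M per year")
```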
You took the other side of this, but now that you see the market-based solution starting to emerge, what do you think of it? Do you think it's got legs? Well, I'm not convinced that this looks like TAC in the traditional sense, where you're basically buying a continuous stream of traffic and then you're helping to monetize that traffic. That's effectively what Google did in the ad syndication business and does today. That business makes about $10 billion a quarter at Google.
And they're paying, call it 70 to 80 cents on every dollar back out to the owners of that traffic, the folks where that traffic is derived from. I would say that this looks a lot more like the content licensing deals to build a proprietary audience, which is effectively what Netflix did. They paid studios for content.
Apple does this, they have proprietary content that they pay producers to make, and they put on Apple TV, Amazon does this and so on. This is a lot more like that, where there are content creators out there, whether that content is proprietary, like the New York Times or user generated like Reddit. And what they're trying to do is acquire that content to build a better product on Google search.
And I'm not sure how you get paid a continuous licensing stream for that content. Once you've trained the model, the content gets old, it gets stale at some point in a lot of cases like news. And then eventually, if you don't have a high quality continuous stream of content, it's not worth as much anymore. To give you guys a sense, humans generate in total, well, let me just give you some stats. There's about a million petabytes of data on the internet today.
And humans are generating about 2500 petabytes of data, new data per day right now. Remember, I shared a couple weeks ago, YouTube's generating about two petabytes of data per day.
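Taking the figures just quoted at face value, a quick back-of-envelope shows why a fixed archive loses relative value so quickly.

```python
# Back-of-envelope using the stats quoted above, taken at face value.
existing_stock_pb = 1_000_000   # ~a million petabytes on the internet today
daily_flow_pb = 2_500           # ~2,500 new petabytes per day

annual_flow_pb = daily_flow_pb * 365
years_to_match_stock = existing_stock_pb / annual_flow_pb

print(f"New data per year: ~{annual_flow_pb:,} PB")
print(f"At that rate, new data matches today's entire corpus in ~{years_to_match_stock:.1f} years")
# Which is the point: any fixed archive shrinks fast as a share of the whole,
# even before the daily rate itself keeps climbing.
```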
Half of all data generated is never used. So this is like records and files and stuff that gets stored, log files that get put somewhere, never accessed, never used. The majority of the rest of that data is not in the public domain. It's not on the internet. So there is a lot of data out there, what some people might call dark data, to train on. And I think that as the identification of
better sources of training data improves, so will the understanding of what that training data is worth. Right now, we're in this kind of shotgun approach: we're trying to blast out and, you know, source lots of content, lots of data. Just like over time Netflix got better at figuring out what content to buy and what to pay for it, and they're the best at it, I think so too will Google and others figure out what data is actually particularly useful, what it's worth, and what to pay for it.
And so there's a lot of data out there to go and identify, to mine, to pay license fees to get access to. Whether those are continuous license fees or one-time is still TBD. That's a key issue. So I think we're still a little bit early to know if this is, like, you know, a continuous model, like a TAC-type business, or
If these are sort of chunky type deals, and we don't really know what the real value is yet, and that all changes over time. And remember, the rate of data generation is increasing. So while we're generating 2500 petabytes of data per day as a species on the internet, that number is going up every day. And so every year, all the old data becomes worth even less. So this is all changing fairly dynamically. And I think
there's a lot still to be figured out on what the monetization model will be for content creators and how that's going to change over time. Yeah, and it's a really good point. Some things will be like the Sopranos or Seinfeld or Simpsons, where
that library is worth a fortune and people will pay a billion dollars, a half billion dollars. No one's paying a lot for old NFL games, you know? Nick found it, by the way. Reddit's licensing deal was $203 million over, it says two to three years, so let's assume it's, call it three just to be safe. So it's about, you know, $60, $65 million a year. It doesn't say whether the deal with Google is exclusive. Wow, so they could do that same licensing deal multiple times. That's interesting. Yeah, none of these are exclusive, it seems.
This is what I don't understand. Why don't you do head deals like with folks like Reddit where you actually do it exclusively? Like, it seems like it's more valuable to spend a multiple of this number for one of the big seven who have tens of billions of dollars of cash anyways. And block the other players. And block everybody else. That just seems so much smarter. Well, then Reddit's multiple gets capped. If I'm Reddit, I don't want to do that deal. Yeah, Reddit may not put it on the table. Yeah. Because then my multiple is capped. Like, the way I can monetize my content is now set and I'm done. Yeah. It's going to get bought. Right.
And investors are like, oh, you're worth five times EBITDA. You know, that's it. Reddit, Quora, Stack Overflow, they're going to just get taken out. I think this is going to be the new model. Quora did a round, actually. Didn't Quora raise recently? I think they're going to get taken out. I think these businesses will become too valuable because they do have ongoing content that just keeps getting generated. You started Weblogs, right? Yeah, yeah. Engadget, Weblogs, everything.
But think about the value of that content today. It's negligible. Like it was very valuable at the time. And as time went on, more content was being created a hundred times, a thousand times, 10,000 times more content.
that started to overshadow the value of that content. At the time the acquisition was done, it made a ton of sense. But all of a sudden, two years later, particularly with the rate at which data is growing on the internet, it's like, does it make sense to buy any content anymore? Well, on the other side of that, historical content could be worth a lot of money, at least some of it could be. So if you had the Charlie Rose archive, as an example, you know, he's probably interviewed Kissinger 10 times.
And he's interviewed Kissinger for, you know, 10 hours. I've got almost 2,000 episodes of podcasts I've done with startups over 13 years. Some of those are good, right? Yeah. This Week in Startups archive is going to be worth something at some point, right? I don't think it's going to be worth $60 million. What are all baseball games worth, right? Like, I mean, who watches baseball games? They're not rewatchable. And I don't think the data from them is particularly important. So I agree on that one.
But historical stuff. We don't know, right? Yeah, we don't know. And I just question how much of Reddit's content is actually like long-term valuable versus like they're covering a topic and they're talking about interesting stuff. And then no one cares a year later. It's a really, really excellent point.
Yeah. And we'll figure that out. Okay. And that's why I think it's like, it's the early days of knowing how to value all this content, particularly for LLMs. And so we don't really know yet. Over the next year, this will all start to become clearer. But it's again, it's here. The same thing happened to music licensing. Yeah. Right. But if all the content creators kind of unionize
then it might increase. I don't know if that's kind of like the, yeah, like music industry has ASCAP and other socialists. I love it. Well, I'm not, I'm not advocating for this, but the point I'm making is if they can't unionize, then there's a lot, there's just a huge number of vendors of content. And so models will need to buy some, but as long as they can get some, they don't need to have all. And therefore it's basically highly competitive among suppliers and, and,
there's a very limited number of buyers. So that tends to- This is why the news industry should have always had a federation because they could have just said to Google, hey, we're going to de-index ourselves from the Google search engine. And so you won't have the New York Times, Washington Post, LA Times. You're just not going to have any of us unless you give us X, Y, and Z. And they were just too stupid and not coordinated to do it. Music industry, the exact opposite. You try to do anything with the music industry,
they're going to come down on you like a ton of bricks to this day. I see it with startups all the time. That's actually really interesting. Yeah. If all the old school legacy newspapers and magazines, so on, had basically formed their own whatever, federation, cartel, trade association. Yeah. That would have been powerful.
Yeah, I mean, that's what Murdoch wanted. Murdoch saw it clearly. He was like, you know, Google's the enemy here. They're going to take all of our revenue and they're going to get all our customer names and we're not going to even know the names of our customers. All right, issue three: Klarna crushes customer queries with AI. You may have seen this trending on X and Twitter and in the press.
If you don't know Klarna, they're a Swedish fintech company. They do that buy now, pay later stuff. I think they were the originators of that online. And they put out a press release with some really eye-popping claims. AI assistants are now doing the work of 700 full-time agents at Klarna. They moved issue resolving times from 11 minutes with humans to two minutes with AI. And customer satisfaction is on par with human agents.
And it said its resolutions are more accurate than humans', creating a 25% drop in repeat inquiries. That tracks. And so far, their AI, which they built with OpenAI, has had 2.3 million conversations, accounting for two-thirds of Klarna's customer support service chats. Klarna estimates its AI agent will drive, wait for it, a $40 million increase in profits this year. We could talk a little bit more about Klarna and their valuations, but Freeberg, what do you think this
means if we're in year, this is the start of year two of ChatGPT as like a phenomenon, let's say, what do we think year three looks like? I think the techno-pessimist point of view is, oh my God, look at all these jobs that are getting lost.
I think the optimistic point of view is that that company has all of that excess capital now to reinvest in doing other things. That capital doesn't just become profits that flow out the door and everyone's done with that money and that money just gets put away in a sock. That money gets reinvested. And that money gets reinvested in higher order functioning work.
And that's really where there's an opportunity to move the workforce overall forward, which is what I think is super exciting and I'm super positive about. We've seen this in every technological evolution that's happened in human history, from the plow in agriculture to automobiles to computing and now to AI: humans moved from manual labor
to knowledge work, to now ideally and hopefully more creative work. And so I do think that it isn't just about eliminating jobs and making more money, but it's about enabling the creation of an entirely new class of work, whether that's prompt engineering,
or building entirely new businesses that simply can't exist today, or perhaps even downscaling businesses where you no longer need to have a 10,000-person organization. Smaller organizations can be stood up as startups to start to replace large functioning organizations. So I don't know. I think it's a time of great opportunity. I know that some people would view it as being highly shocking. I think it's inevitable that human knowledge labor, where the job of the human is simply the ingestion of data and then communicating an output of data, seems like it will eventually be replaced by computing somehow.
And this is happening now in an accelerated way with these LLMs. So I think that what we should focus on and think about is what are all the new businesses, all the new jobs, all the new opportunities that just couldn't have existed 10 years ago that are now emerging, that are very exciting as the workforce transitions. Sacks, do you buy this techno-utopian view, that all these jobs that are obviously going to be retired
are going to open up the opportunity for these humans to do even better work at Klarna? Or do you think it's just going to go straight to the bottom line? Well, it sounds like they're able to eliminate a lot of frontline customer support roles by using AI, which is what I would expect. Yeah. I think this is a very natural application for AI. You know, it was already the case that you could pretty much find answers to questions by searching the FAQ, things like this. This is an even better way of doing that.
So look, I believe this will be a big area for AI: saving on, again, I use the word frontline, customer support, because the way that customer support is typically organized is there's level one, level two, level three. The more difficult queries or cases get escalated up the chain depending on how hard they are. And I think the AI will do a really good job eliminating level one. It'll start eating into level two, but you're probably going to need humans to deal with the more complex cases.
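Just to make the level-one versus level-two idea concrete, here is a minimal sketch of what an AI-first support tier with human escalation could look like. Everything here is hypothetical: the function names, the confidence threshold, and the tiny FAQ are placeholders, not Klarna's or anyone else's actual system.

```python
# Minimal sketch of tiered customer support with an AI front line.
# Hypothetical placeholders throughout; not Klarna's actual system.

from dataclasses import dataclass

@dataclass
class Ticket:
    customer_id: str
    question: str

def ai_answer(question: str) -> tuple[str, float]:
    """Stand-in for an LLM call. Returns (draft_answer, confidence);
    a real system would call a hosted model here."""
    faq = {
        "when is my payment due": ("Your payment is due on the date shown in the app.", 0.95),
        "how do i return an item": ("Start a return from the order page; refunds post in 5-7 days.", 0.9),
    }
    return faq.get(question.lower().strip("?"), ("I'm not sure about that one.", 0.2))

human_queue: list[Ticket] = []  # level two and up: cases that need a person

def handle(ticket: Ticket) -> str:
    answer, confidence = ai_answer(ticket.question)
    if confidence >= 0.8:          # level one: the AI resolves it directly
        return answer
    human_queue.append(ticket)     # harder case: route to a human agent
    return "Thanks, a specialist will follow up shortly."

if __name__ == "__main__":
    print(handle(Ticket("c1", "When is my payment due?")))
    print(handle(Ticket("c2", "My account was charged twice and then locked")))
    print(f"Escalated to humans: {len(human_queue)}")
```

The design choice is simply that the model answers when it is confident and hands off when it is not, which is the level-one versus level-two split being described here.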
Now, the question is, where do those displaced humans go? I think there's going to be new jobs, new work. That's always been the history of technological progress. And one of the things you're already seeing is there's a whole bunch of new AI companies that are exploiting this technology and they need to hire people. So I basically agree with Freeberg that you will elevate people's work by automating away the less interesting parts of people's jobs.
and then creating more productivity and more opportunity. By the way, just to use your example, Sax, so imagine if all the level one support people, some chunk of them can now do level two support. And so the customers are going to get greater hands-on care. More customers will get access to a higher level of service. The organization can afford to do that. They'll be more competitive in the marketplace because customers feel better taken care of.
I just think that's how the organizations get leveled up as new technology kind of shows up like this. That's a great point. And then those folks can have a much deeper level of interaction with their customers than they are today. Just answering basic questions. The world gets more complex and then people might get better at the software and they might discover new features and you might be able to redeploy those people. If you look at coffee, like,
I don't know, was it 40 years ago, you went to order a cup of coffee, it was decaf or regular coffee, milk and sugar. Those are your four choices. And now you go order coffee. I don't know if you guys have used the Starbucks app, or I just had the Sweetgreen CEO on the pod. And man, the fidelity and the nuance of what you want to order is absurd. Chamath, where do you think this is all heading? Because there is the issue of displacement and how quickly people can be redeployed.
And if we're seeing, in year two, customer support and developers getting 10x'd, what other categories do you think we're going to see fall next? I think the truth is that, as you said, the real-world applicability of AI was not last year. So I think we're really in the first
five or six weeks of the first year. So you consider that year zero? That's year zero. That was sort of like the, you know, where everybody was running around building toy apps. Proof of concept. This is one of the first few times where you're seeing something in production where there's measurable economic value. And the important thing to note about that is that it's not just what it means for Klarna, but what it means for everybody else. So if you look at everybody else, for example, here's Teleperformance, which is a French company that runs call centers.
They lost $1.7 billion of market cap when that tweet went up, about 20% of their market cap. So this is the real practical implication. Yes, Klarna replaced 700 people and they saved $40 million of OPEX, but Teleperformance, while they were just doing their everyday work, lost $1.8 billion of their market cap at the exact same moment. And so what does it mean?
I think that what Klarna should do is open source what they've built. And the reason is that you want to give companies like Teleperformance a chance to retool themselves with the best possible technology so they can actually preserve as many of the jobs as possible.
Because at the limit, if every single company is able to implement something that is as economically efficient as what Klarna did, Teleperformance doesn't exist. And there's $10 billion and 335,000 employees that will not have a job. And so for Klarna, the reason to open source it is twofold. One is they don't lose anything, because you will still need to train it on your own data. And so there's no disadvantage that Klarna
will have, right? They're just saying, look, I built this on top of GPT, here's what it looks like. And that production code can be used by anybody else, go for it, but it has to run on your own data. That's a very reasonable thing. So I think it has the benefit of both: A, setting a technical pace
that can help them attract better employees and more highly qualified people who find the scope of work even more interesting. And B, I think it's on the right side of history with all this AI stuff where it's allowing everybody to sort of benefit in a way that is the least destructive.
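As a rough illustration of that "share the code, keep your own data" argument, here is a minimal sketch in which the orchestration layer is generic and only the knowledge base is company-specific. The names here (call_llm, retrieve, acme_docs) are hypothetical placeholders, not Klarna's implementation.

```python
# Sketch of shared scaffolding plus private data: the orchestration code is
# generic and could be open-sourced; only the knowledge base is company-specific.

def call_llm(prompt: str) -> str:
    # Placeholder; a real deployment would call GPT, Llama, etc. here.
    return f"[model answer grounded in: {prompt[:60]}...]"

def retrieve(question: str, knowledge_base: dict[str, str], k: int = 2) -> list[str]:
    """Naive keyword retrieval over a company's own support docs."""
    words = set(question.lower().split())
    scored = sorted(knowledge_base.items(),
                    key=lambda kv: -len(words & set(kv[1].lower().split())))
    return [doc for _, doc in scored[:k]]

def answer(question: str, knowledge_base: dict[str, str]) -> str:
    context = "\n".join(retrieve(question, knowledge_base))
    prompt = f"Answer using only this company's docs:\n{context}\n\nQ: {question}"
    return call_llm(prompt)

if __name__ == "__main__":
    # The functions above are the shareable part; this dict is the part each
    # company keeps private and swaps in for itself.
    acme_docs = {
        "refunds": "Refunds are issued to the original payment method within 5 days.",
        "late fees": "A late fee applies 10 days after a missed installment.",
    }
    print(answer("How long do refunds take?", acme_docs))
```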
But I just wanted to show you that the destruction was quite quick and it was pretty severe. And if two or three other big companies launch these kinds of tweets after real measurable results, Teleperformance will be a $1 billion company in short order. There's a third reason. I think it's a brilliant idea for Klarna to open source this tool because it's not their business, right? This is just something they did as a...
as a productivity improvement, they get the benefit back to them of the community working to advance that technology. So they don't have to put more engineers on advancing the ball on their customer support AI. Totally. They can just re-merge in the changes that the open source community comes up with. And since they're not in the business of selling AI directly, there's no reason not to do it, like Chamath said. So I think it's kind of brilliant. Yeah.
This is Meta's strategy, by the way. I mean, Zuck said the same thing. They should be building this on Meta's open source products and Apple's open source products, right, Sacks? Yeah. Well, so what Meta said, what Zuck said on the last Meta call is the reason we open source everything is because we don't directly sell AI. We create products that AI makes better. So by open sourcing this, we allow the community to advance the ball and we get to reincorporate those changes. So it's a very smart strategy for companies
that aren't directly selling the AI. Now, you know, if you're like Bret Taylor's new company, Sierra, obviously you're not going to open source it, because your whole business model is to create a proprietary solution. Yeah. But then, to Chamath's, you know, 80/90, whatever he was talking about with his incubator concept, and, you know, these things. It's a company. It's a company, okay. Is it 80/90 or 90/80? I'm sorry, 80/90. 80/90.
Is there a third word that comes off of it, or are you just going to call it 80/90? Just 80/90. Got it. Okay. 80% of the features at a 90% discount. So back to that, what does this mean if these things are getting to... Freeberg, your point about the pace of these things: is it going to improve 10% a year or 10% a month? If it's improving 10% a month, we're going to get to 98% of queries done this year.
If it's doing 10% a year, okay, we're going to get to 99 or 98% of queries in four years. In other words, this is happening, folks, and it's happening at a blistering pace. By the way, think of, so not just Teleperformance, which was a $10 billion, now a $7.58 billion USD company. But think about Zendesk, right? Zendesk was, I think, an $8 to $10 billion take private by PE. Yeah, it was a $10 billion take private in the hands of PE companies.
The entire Zendesk workflow could be replaced by a handful of these open source agents, where all of a sudden people can eliminate a lot of OPEX. I think the thing to keep in mind here is where the world is going.
has always been to try to lower cost. And the original foundational principle of SaaS was that there's these line items in on-prem software that are just extremely expensive over time, very hard to justify, right? And so when people moved to SaaS from on-prem, they were looking for cost savings. That was the initial thing. Now, it's actually not cheaper anymore, but it's much more feature rich. So you get a lot more value in SaaS, et cetera, et cetera.
But the point of these AI agents and bots and workflows is that it'll reintroduce the concept of cost savings, of this idea that you can have cheaper, faster, and better. And the more that that stuff is open source, my gosh, I think it just makes it very hard for companies that have point products to survive. Yeah, I was talking to a friend of mine, Josh Mohrer, who was the head of Uber in New York City, and he launched his own note-taking app. He's writing it himself.
And he's obsessed with this concept of building a billion dollar, a unicorn company with one employee. And this is something that a lot of people have been talking about. And my friend, Phil Kaplan from DistroKid built a very large business in DistroKid, a unicorn with a very like, I think low single digit number of people. This could be the future of...
you know, efficiency. You could build, if you catch fire with a really hot company that gets a million customers. It's absolutely the future. It's absolutely the future. And then if you think about our jobs in terms of capital allocation, well, how much capital does that founder need? Do they need to dilute 10%, 20%, 30%? They're not going to need to dilute 60, 70, 80%. A one-person company should be able to spend less than a few hundred grand to get to product-market fit in the next few years. Yeah. I mean, that's kind of what we're seeing. Just to
go back to your question, Jake, where does this go next? Yeah, please. I think it's really interesting to speculate about that. So what Klarna seems to be talking about are email-based customer support cases. I think where this is going to go next is to phone-based cases.
A hundred percent. And these call centers use what are called IVRs, these interactive voice response systems, but they're very rigid. It's a lot of prerecorded messages. And it says, push one if your problem is this, push two if your problem is this. Everyone hates those things. Yeah.
I think where it goes next is you'll call up the call center and you'll get a voice that sounds like a human just talking to you. And you won't even necessarily realize that you're talking to an AI, because there are already these AI companies that can do generative voices, any language, any accent. Oh, and they're fast now.
And they're fine. Yeah. And multi-language. Think about that. Just localizing them across the globe. You know, you want to launch your product in Japan. By the way, did you guys see there was a Meta demo, which I thought was really cool? It was run on Llama 70B, but
it was a real-time translation tool where the person was speaking in Hokkien Chinese and the other person was speaking in English, and they were able to understand each other. But, Sax, to your point, when that person calls, for example, B of A, now, Spanish is not one language. It's actually many dialects and many, many, many different accents, right? Depending on which country you're from, the accent that you hear in Spain is totally different than the accent in El Salvador or the accent in Chile or Argentina. And so wouldn't it be amazing where you
You call your B of A app, it picks up your accent and your tonality, and it responds with the person of that exact same accent and tonality. That is incredible. Well, I'll take it to another level. You call, it recognizes your number. It knows what you're doing in the software. It knows the problem you've had. It knows the last three times you called and how long you've been using the software. And it anticipates like, okay, I know this person has a Windows machine and it's still five years old. And it's like, are you still using that same five-year-old Windows machine?
Yeah, we know it's a bug with Windows, whatever. You should probably upgrade it. I mean, it's going to know the entire context of this. And so it's going to just get, it can be more efficient than a human could ever be. The best customer support interaction I had was I called JP Morgan Chase because I had an old credit card that I'd had for 20 years. And I think that they had outsourced it to somewhere in the Caribbean.
and this woman picked up the phone. It was so cool. Chamath, what is the problem, man? And I was like, oh, this is the best. And I had this whole conversation with her for 15, 20 minutes, nothing to do with the phone, nothing to do with my credit card, rather. It was great. She's like, I mean, man, you want to cancel your card? I was like, yes, please. No, but you could do celebrities. Celebrity voices, yeah. J. Cal, do the DJT. Okay, let's role play. You're Donald Trump, and I am JP Morgan. Hello.
Hi, Mr. President. To Martha.
You're huge. You've got big spending. Every time I go see Chamath, I go see Chamath, I go in. Amazing. Mr. President, I would like to cancel my credit card, sir. Okay. You don't want to cancel it. We've got a great APR, 3.9%. But for Chamath, you know what? He's my Sri Lankan friend. How much do we love Sri Lankans? Okay. Huge. And not J. Cal. Nasty. Nasty man, J. Cal. Very nasty. Yeah, it's like a dog. Right.
He's a dog, okay? He's a dog. Some people say TDS. How much do we love our Sax? Sexy poo. Great. Mar-a-Lago. I go to Sax's house. His house, huge. Huge house. Almost as big as Mar-a-Lago. Not quite. I got to work on it because I haven't done it for years. That's really good. It's really good. I got to work on it. Have you seen that guy? There's a new impressions guy who's amazing.
He does Trump really well. I have seen him. He's on Howard Stern all the time, right? He does the Howard Stern. Yeah, yeah, yeah. He does Howard back to Howard. No, not Shane Gillis. This is another kid. Matt Friend. Yeah, yeah, yeah, yeah. He does Howard back to Howard. It's hysterical. He does Howard. Oh, wait. That video is good, too. Hold on. This is what he does on multiple Trumps. I want to hear Trump. I want to hear Trump. Multiple Trumps is hysterical.
on other Trump impressions. So like, I'll tell you what, Zach, there are so many people that try to do, but you see the failed Alec Baldwin. Baldwin comes out on SNL, which used to be a lot funnier back in the day with Chevy Chase. But Baldwin goes like this. He goes, we've got a great show.
Boopity, boopity, boopity, boo. I don't do that. I never say boopity, boo. I never say boopity, boo. This is Stephen Colbert. He comes out like, I know a dot, dot, dot, a dot, dot, a dot, dot, dot. China, I don't say the dot, dot, dot. And the last one is Jimmy Failing Fallon. He touched my hair like a dog. And Jimmy Fallon, he goes, okay. Failing like a dog. We're rolling. Okay. I don't do these moves. They're all wrong. So that's a little...
That's an incredible impression. Can I see him doing Howard Stern? You gotta see him do Howard Stern. It's so next level. The other one he does that's incredible but subtle is Stanley Tucci. How are you, by the way? Robin, you look beautiful. Right. Thank you. Is this still f***ing you up, Howard? It's f***ing me up. Yeah, right. Because when I start talking to Robin after you leave, I get crazy. Right. Can we just talk about this? Go ahead. Right or wrong, right?
Right. Right. Right. It's so great. He is killing it.
I think we have our halftime act for the All-In Summit 24. Oh, for sure. Oh my God. Yeah, he'd be amazing. I love Stanley Tucci, by the way. He's great. Oh, you got to see this kid do Stanley Tucci. Do this, do this, Stanley Tucci. Have you guys watched the Stanley Tucci show on HBO where he's cooking? Yeah. Where he, like, he goes to Italy. It's so great. It's so great. He does it on TikTok too. He just...
randomly cooks something and he's like, I'm going to make a frittata. I have some leftover gnocchi and I'm just going to... I do watch him on TikTok. Thanksgiving is a special time. I'm in London celebrating. So today I'm making...
Stan's stuffing. And also, sugo di carne, or as it's known, gravy. Now, the thing about stuffing is, you might use a traditional white bread, but in my household, I use a homemade focaccia, because I'm Italian on both sides, and nothing tastes better than bread with a little marinara sauce.
It's your time for gratitude and giving thanks. Enjoy your holidays, and thank you very much. Felicity, good. That is really good. He's so good. He's so good. All right, listen, I don't know how I keep the show going here, but three, two, okay. Issue four, Reddit's S1 is kind of fun. Let's break it down, everybody. 2023 revenue, $804 million, up 21% year over year. They're still losing money. Net loss, $91 million in 2023. They lost $159 million in 2022, so they're cutting the losses.
Free cash flow is negative. Here's a chart of their quarterly revenue and cash flow. So they're kind of bouncing along the breakeven mark, as you can see there in the chart.
They got a wonderful gross margin because they don't pay to produce the content, unlike the New York Times or Netflix. And so, an 86% gross margin, up 2% year over year. Their daily active uniques: 76 million, up 27% year over year. Average revenue per user is incredibly low, $3.42. And their daily active unique users, here you can see another chart,
growing nicely quarter over quarter. They got a billion two in cash. Most interesting, probably, and could be challenging to execute on is their direct share program. They're going to carve out a bunch
of shares in the IPO to sell to their most active mods. Those are the moderators, the people who run the different channels, or subreddits as they're called. What could go wrong? What could go wrong? And they're going to invite them to participate in it on a rolling basis. A more unqualified retail buyer pool does not exist than the Reddit mods, except maybe the Reddit
participants themselves. WallStreetBets knows what they're doing, yeah. I think they'll buy after the lockup comes up. But let's just get started here. I think you looked at the S1 a little bit, Freeberg, and you had asked me to put it on the docket because you were digging into it. Anything stick out to you or thoughts on the business overall? No, I mean, I wasn't pulling... I just asked if you guys had read it. Hey! Hey! I think...
I made a funny joke. Reddit, get it. I'll let you guys go on for a minute. Go ahead. Tell me when you're ready. It was a good pun. Too bad it was accidental. No, it was intentional. I'll give you credit. It was intentional. I'll give it to you. So I think the thing about Reddit, if you could pull up the chart with the
quarterly average daily active user data. This was a business that, for the last couple of years, everyone thought had flatlined, because it was only growing 5% a year
in terms of usage. And then all of a sudden, in the last two quarters, so starting in late summer, early fall of '23, so just six months ago, the usage started to climb pretty significantly, growing 15%, and in the most recent quarter, 27% year over year. Absent that growth story, it's a really challenged business, because a business without much growth typically gets valued on a multiple of the cash flow that they're generating.
And you know, there's less upside, and all this kind of optionality goes away. That's kind of a key point. I don't know, I think for you to make an investment at a $5 billion valuation here, you've really got to believe that the growth continues at this rate and it doesn't revert back to the mean growth rate of the last couple of years of basically 5%, which is roughly flatlined. The other challenge they have is that their ARPUs are only in that kind of $3 range, which is like less than 10% of where Facebook is at.
And the data that Facebook collects on their users gives them the ability to do much better targeting on ads and therefore monetize their audience much better than Reddit has been able to do, to the order of over 10x. And then if you look at the ARPU number, how much they've been able to grow that metric, it's also, you know, been a little bit flatlined. So this business, I think, is a real question mark. I mean, you could argue it's probably worth, in the best case, in the two to $3 billion kind of valuation range.
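To put rough numbers on that ARPU gap, here is a back-of-envelope using only the round figures from this conversation. The reporting period behind the $3.42 and $40 numbers isn't specified here, so treat this as illustrative of the headroom argument, not a valuation model.

```python
# Back-of-envelope on the ARPU headroom argument, using the round numbers
# mentioned in the conversation. Illustrative only; not a valuation model.

reddit_arpu = 3.42      # per-user revenue figure cited for Reddit
facebook_arpu = 40.0    # rough per-user figure cited for Facebook's core users

gap = facebook_arpu / reddit_arpu
print(f"Facebook-style users monetize roughly {gap:.0f}x better.")

# Revenue scales linearly with ARPU at constant users (revenue = users * ARPU),
# so closing even part of the gap is a large revenue multiple.
for fraction in (0.25, 0.5, 1.0):
    target_arpu = facebook_arpu * fraction
    print(f"Reaching {fraction:.0%} of Facebook's ARPU implies "
          f"~{target_arpu / reddit_arpu:.1f}x current revenue per user.")
```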
And then you have to believe the bull case that the growth continues or accelerates from here. And they have a plan to address the ARPU problem. They have other paths for monetizing their audience than what they're kind of doing today. What do you guys think this thing's worth? Do you buy it if it goes out at $3 billion or $5 billion? I think the first question, which you nailed, that a buy-side investor will ask is what happened in the last two quarters that was different than the last 15 quarters?
That's going to be a very important question. I think they're going to have to have a very buttoned up answer for that. And if they can point to very specific repeatable things, I think that'll be good. The thing that they, this IPO, if it goes off in the next four weeks, they won't have to wait. But if it doesn't get off in the next four weeks, they'll have to update the S1 probably with Q1.
And so you'll see whether this thing is a trend or whether it was a one-time thing. Do you know what it is? Well, the growth in the logged out is probably largely because typically if you use it on a phone, it tries to force you to use the app, right? So that you can be in this logged in experience. And if you just turn that off, you can get a lot more logged out because Reddit gets tremendous rank authority from Google.
So if you just turn that off, I think that you'll have a lot of logged out customers and that will grow very quickly. And so maybe it's a decision that they'd rather have the top line number grow than have logged in users grow. But the logged in user growth has still been pretty healthy. It's basically doubled in the last three years. So, but to your point, Freeberg, if they said, oh, our business is really only these 30 odd million logged in users, it would be worth a lot less than saying 75 million. I think it's, I think you're right. It's kind of like in the mid,
you know, kind of two, three, $4 billion range. The big problem is the ARPU, because these are not users that represent sort of Facebook's bread and butter: the kind of $40 ARPU user who lives in a good suburb in the United States and is monetized like crazy.
I just don't think that's what these users are. But you could see that as a challenge or an opportunity, because if their ARPU is only 10% of Facebook's, there's a lot of headroom there to grow it. If those users become...
economically more valuable. ARPU is actually down 2% year over year, Sacks. The issue with this user base is they're incredibly sophisticated internet users who don't click on ads and are kind of anti-ads, as opposed to, you know, the general population on a Facebook or a generic service. And it's anonymous. So you don't know who the user is, which is how Facebook has such incredible demographic targeting capabilities. There's been a lot of
conspiracy theories, long-con theories that came up, Sacks, that we were talking about on the group chat. Maybe you could summarize this long game that was allegedly played by Sam Altman and the founders of Reddit to wrestle control of Reddit back from their previous corporate owner, Condé Nast. Well, this was a post by Yishan, who is a former CEO of Reddit, that was published back in, I think, 2015. And he kind of lays out
what I think happened. Or, I mean, he says at the end, just kidding. But if he's a former CEO describing these events, he must be describing something he knows about, I would just think. But in any event, what happened is that Reddit was sold for only about $10 million a year after it launched. So, like, really, really small. And I think that it kept growing. And the founders realized maybe they had made a mistake, or that this was actually a bigger property.
And so they started scheming on how to get Condé Nast to spin it back out. And so Yishan lays out the steps they went through. They recruited a CEO who they kind of pre-agreed on. Then they had that CEO demand options specifically in Reddit from Condé Nast, which meant that Condé Nast had to create a separate cap table for it.
And then once they had a separate cap table as a subsidiary of Condé Nast, then they could sort of pressure to have an outside investor brought in for the expertise, who just happened to be Sam Altman and his fund. And, you know, eventually, like, step by step, they worked it to the point where they got Condé Nast to spin off the company. And I guess this plan worked now.
It should be said that the largest shareholder in Reddit, according to the S1, is Condé Nast, or Condé Nast's parent company. So no one's going to benefit more
from this plan, or scheme if you want to call it that, than Condé Nast. It was a smart thing for them to do: to spin out Reddit, to allow the employees to have options, and then, I would say, to bring back the founder, Steve Huffman, as CEO several years ago. So yeah, he's great. It worked out for everybody. You know, who knows if it was all premeditated. So they own 30 percent, and it goes out for $5 billion, and they get $1.5 billion, and they paid $10 million for it, so
This was like a web 2.0. No, it launched in 2005 and it sold in 2006. Yeah, it was a 2.0. It was the same time as Weblogs Inc. and Delicious and Flickr and all that. That whole cohort of little web apps that used Ajax and other things. The web was just getting faster and there were a lot of users.
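For what it's worth, the quick math behind that outcome, using the round numbers mentioned in the conversation (a roughly 30% retained stake, a $5 billion IPO valuation, and the reported roughly $10 million purchase price):

```python
# Quick math on the Condé Nast outcome, using the round numbers from the
# discussion: ~$10M purchase price, ~30% retained stake, ~$5B IPO valuation.

purchase_price = 10e6
stake = 0.30
ipo_valuation = 5e9

stake_value = stake * ipo_valuation
print(f"Stake worth ~${stake_value / 1e9:.1f}B")                      # ~$1.5B
print(f"Roughly {stake_value / purchase_price:.0f}x the price paid")  # ~150x
```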
It was smart for Condé Nast to do the spin out and give up 70% in order to have 30% of what's going to be a multi-billion dollar IPO. Great outcome for everybody. All right. Issue five. Apple doesn't have a fast car. Project Titan is DBA. Dead before arrival. Apple, as you know, has been working on an electric vehicle for a decade. Self-driving as well.
They've invested billions in the project, according to Bloomberg. And Apple was targeting a $100K price point, basically going after the Model S Plaid. With FSD, the company had 2,000 employees working on this project. It was called Titan. Then they had designers from Aston Martin, Lamborghini, Porsche. According to the report, most of the team will be transferred to Apple's generative AI division. We talked a little bit about Maggie, their generative AI image language model.
There's going to be some layoffs. It's unclear how many. What does building a car have to do with building an LLM? Well, they were, yeah, so they weren't just building a car. They were all in on self-driving and not having a steering wheel. So I understand. I'm just saying it doesn't make a lot of sense that 2000 employees that were specialized in building a car all of a sudden now become the AI team.
It would be more like the full self-driving team is probably getting those AI jobs and the rest are probably getting laid off with incredible packages. But what do you think, Sax? I was always skeptical that Apple was even working on a car. It's just a very different kind of product than anything else they make. And so I never really treated it that seriously that they were going to make a car. So I'm not, the surprise to me is not that they canceled this, but that it was even true that they're working on it in the first place.
Yeah, no, there were reports of them having test tracks and everything. It was pretty well established. And you could see that people from Tesla and other places had... Does it say why they actually killed it? No, this is just a report. And the speculation is that they're going all in on AI. They just see that as a much better future. Well, I think it's more core to what they do. I mean, the car never seemed that core.
No. And there were reports, and Elon's talked about it publicly, so we're not speaking out of school here or anything, but that Elon and Apple, Tim Cook reportedly, had talked, or there have been overtures,
that maybe Tesla was going to get bought, and Elon's been pretty clear that during the Model 3 rollout, he was considering selling it to Apple. That would have been a smart acquisition if they had done that. Can you imagine all the Apple showrooms having a Model S in it or something, or a Model Y? I mean, boom, they would have just sold so many of them. And having the Apple operating system on that display. Yeah.
Wait, was it before the Model S came out or the Model 3? It was the Model 3; that was the time I think they were talking seriously. But remember, no one really believed in the Model 3 until it came out and started selling like hotcakes. Quite the opposite. They thought it was going to kill the company. Right. All right, everybody. For the Rain Man, David Sacks, Chairman Dictator Chamath Palihapitiya, and the Sultan of Science, David Freeberg, I am the world's greatest moderator. And we'll see you next time. Bye-bye. Grok, grok.
What's going on? Rain Man, David Sack. And it said, we open sourced it to the fans and they've just gone crazy with it. Love you, West. I'm the queen of kinwad. What your winners like? What your winners like? Besties are gone. Go 13th.
We should all just get a room and just have one big huge orgy because they're all just useless. It's like this like sexual tension, but they just need to release somehow. You're the beat. What? You're the beat. What? We need to get merch. I'm doing all it. What?