This podcast is supported by KPMG. Your task as a visionary leader is simple. Harness the power of AI. Shape the future of business. Oh, and do it before anyone else does without leaving people behind or running into unforeseen risks. Simple, right? KPMG's got you. Helping you lead a people-powered transformation that accelerates AI's value with confidence. How's that for a vision? Learn more at www.kpmg.us.ai.
Casey, did you hear about the guy who was arrested for trying to steal a driverless car? No. Yeah, this is real. This actually happened a few days ago in Los Angeles. Apparently, this was a guy who got into a self-driving Waymo as someone else was getting out. The police officers say he got into the driver's seat and tried to, like, basically drive it away,
but couldn't manipulate the controls. And then a Waymo employee who was like watching on the, you know, the sort of like closed circuit TV that they have, uh,
was basically like, sir, please leave the car. And the guy would not leave the car. And so the Waymo employee just called the police and the guy got arrested. Well, see, I think that that's unfortunate, Kevin, because there's a much funnier way to resolve that situation, which is you close the doors, you lock them, and then you just have the car drive itself to jail. That's true. Like, if I'm the Waymo employee...
that's the most fun day you've ever had. Usually you're trying to, like, you know, help it. Oh, it got stuck on a curb or whatever. This is your chance. You can make the best citizen's arrest of all time by just remote piloting this man directly to prison. That's true. I hadn't really thought about it, but like self-driving cars kind of defeat the whole concept of the getaway car. Like you rob a bank and you get into the Waymo. And it's just like, no, we're going to jail. Yeah.
But you know what you get charged with if you steal a self-driving car? What's that? Grand Theft Autopilot. No, no. I reject that joke. Rejected!
I'm Kevin Roose, a tech columnist at the New York Times. I'm Casey Newton from Platformer. And this is Hard Fork. This week, OpenAI punches back at Elon Musk in a messy new legal battle. Then, a sweeping new law aimed at reining in big tech takes effect in Europe. Will it succeed? And finally, The Wall Street Journal's Joanna Stern joins us to compare notes with Kevin after a month of using and abusing Apple's Vision Pro headset. Sorry, I can't hear you. I'm playing Fruit Ninja. Take that thing off your face. Ha ha ha ha.
Well, Casey, the AI moguls are fighting again. And it's gotten me saying AI, AI, AI. I'm going to just ignore that and proceed on. So we have to talk about the drama that has been playing out in the past week between OpenAI and Elon Musk. Last week, Elon Musk
filed a lawsuit against OpenAI, which basically boils down to an accusation that this company pretended to be a nonprofit that was interested in building AI to advance all of humanity, but then sort of covertly turned itself into a for-profit and became kind of like a regular tech company that just wanted to make a bunch of money.
And OpenAI has responded to this lawsuit now. It's been a very dramatic few days. But I think we should just start by outlining what's in this lawsuit that Elon Musk filed against OpenAI. Let's do it. So let's remind people why Elon Musk is involved in this in the first place. So Elon Musk was a part of the founding team of OpenAI. He provided a lot of its early funding. He was instrumental in sort of getting it off the ground and recruiting key people.
And then in late 2017, Elon had a falling out with OpenAI's leadership, which led to him stepping down from the company in 2018. And now, you know, six years later, Elon Musk has his own AI company, xAI. He's trying to build competing products with the ones that OpenAI is building. And he's also been very public about how disappointed he is in what's been happening at OpenAI since he stopped being involved. Yeah. In his view, he created a nonprofit. And while there is a nonprofit board that controls the company...
It also is doing a lot of commercial work. And in fact, its for-profit subsidiary is currently valued at $86 billion and is, we think, probably eventually going to make a lot of money for Microsoft in particular. So it is a very different company today than it was when Elon Musk left.
Yeah, so the heart of his allegations is that OpenAI has essentially breached the open part of its founding agreement, that it was supposed to be this sort of nonprofit AI developer that was going to build advanced AI and, you know, release it freely to the public, right?
for the good of humanity, basically as a counterweight to what they saw as the danger of Google, which was building all this stuff in a very closed and proprietary way. And the lawsuit basically says, you know, OpenAI was a nonprofit, which is why I put a bunch of money into it. And then it became a for-profit. And not only is it a for-profit, but it's now aligned itself with Microsoft and
and can basically be thought of as a subsidiary of Microsoft for all intents and purposes. Yeah, so, you know, and I will say that some of that argument resonates with me. I think there is some truth in it, but you can't win a lawsuit with some truth. So what are the actual legal accusations that Elon is making against the OpenAI folks? So one of them is just breach of contract, right? You had this agreement to develop this technology as a nonprofit and open source it.
you have not done that. You know, GPT-4 is not open source and OpenAI is partnered with Microsoft. And now let me ask an important follow-up question, Kevin, which is, is there actually a contract where OpenAI says it's going to do all of this stuff? So that's one of the interesting wrinkles here. You know, the lawsuit makes reference to this thing called the founding agreement, which is sort of what Elon Musk claims was breached in this case.
It does not actually appear that there was a founding agreement that said we will remain a nonprofit forever and will develop only open source technology and release it all freely to the public. That is basically inferred from things like OpenAI's certificate of incorporation and these emails that Elon quotes from as part of this lawsuit.
And one piece of legal analysis that I've read over the past week, Kevin, as we've been digesting this, is that legal experts say if there is not an actual contract, a breach-of-contract claim is very difficult to enforce. That's true. That's true. To sue for breach of contract and win, there has to be a valid contract. You know, that contract needs to be written down in some form and enforceable. This is why I love my job. I learn something new every week. Yeah.
So this is why a lot of people have said this lawsuit is probably not going to succeed because Elon Musk is alleging that OpenAI breached a contract that actually doesn't exist. All right. Well, let's set that aside for a second because I'm hoping that this lawsuit also contains another charge and maybe one that's even just based around some weird legal terminology I've never heard of. Well, I'm glad you asked because today we finally get to talk on the podcast about the concept of promissory estoppel.
What did you just call me? So promissory estoppel, according to my three minutes of Googling before we started recording this podcast... actually, what is promissory estoppel, Casey? Well, you know, I asked Google Gemini what promissory estoppel meant, and it said, buzz off, white boy. No, I'm just kidding.
It said promissory estoppel is when you make a promise to someone that you're going to do something and the other person relies on that promise to their detriment and then you go back on your promise. Okay, wow. We have gone to law school today. So in addition to talking about promissory estoppel, there's also, I would say, the piece of this lawsuit that interested me the most was about AGI, Artificial General Intelligence.
And specifically, a claim that I did not, frankly, expect Elon Musk to make in a legal filing. He claims that OpenAI, with GPT-4, has already achieved AGI. Now, this is not something that people tend to say about GPT-4; it's a very fringe view. But I think we should explain why Elon Musk is claiming that OpenAI has already achieved AGI. Yeah.
Yeah, so you may recall last year, Microsoft researchers wrote a paper after GPT-4 came out that said, we're already seeing sparks of an artificial general intelligence. What does that mean? I think the most generous, non-hypey reading of that statement is GPT-4 is great.
In some ways, it truly is a general intelligence, right? You can throw a lot of different kinds of things at it, and it can handle those tasks reasonably well. Now, that is not how most people, including us on this show, think about AGI, right? When you and I talk about AGI, we generally mean a computer that can do anyone's job better than they can, right? And we are not there. But
Microsoft came along and said, hey, this thing that we're like massively invested in, we think it could be the beginnings of AGI. And now Elon Musk is weaponizing that against Microsoft and OpenAI saying, oh, you've already achieved AGI. Well, that's going to create a problem for you then.
We should explain why it would be a big deal if OpenAI had achieved AGI. Aside from sort of the obvious societal implications, there's also a contractual implication for the company. Because when they struck their deal with Microsoft that would give them billions of dollars and access to tons of computing power to train and build their models...
One of the provisions that OpenAI put into that deal was that it only applied to pre-AGI technologies, right? So Microsoft can license and use GPT-3.5, GPT-4, DALL-E, but...
if and when they do achieve AGI, they won't be able to license whatever that new technology is. And they did this basically as a safety measure because their theory was eventually we're going to build something like AGI. That thing is going to be massively powerful, not just for doing people's jobs, but also potentially for some of these existential reasons.
And we don't want to be in a position where we are forced to give that over to Microsoft. We want to be able to have our nonprofit board make decisions about what to do with that AGI if and when it arrives.
So OpenAI is really in this kind of fascinating position where it wants to build AGI, but the minute it actually does build AGI, then it loses the ability to sell or license that technology to Microsoft. So Microsoft has an incentive not to classify
what OpenAI has built as AGI, even though its own researchers are saying this thing sure feels a lot like it's the beginnings of AGI. Yeah, this just feels like the latest case of a tech giant getting so rich that it can afford to have its own research department and then the research department doing nothing but embarrassing the company. I mean, how many times have we seen this before? Whether it's like the researchers at...
Google's AI division that created all sorts of headaches for them, or like researchers inside Facebook being like, sure seems like this is harmful to a lot of people. You know, if there is a lesson here for tech companies, it's be real careful when you create those research divisions. So,
This all comes around to the lawsuit because one of the things that Elon is arguing here is that, since GPT-4 is, he claims, actually a form of AGI, this deal between Microsoft and OpenAI no longer applies and Microsoft doesn't have the exclusive rights to use it. And in fact, that legally they can't use anything more powerful than Clippy. Yes, that's sort of between the lines of the complaint, but it's in there somewhere. Yeah.
He also goes on to argue that after Sam Altman's ouster last year, the board of OpenAI, this nonprofit board that's supposed to decide when something counts as AGI, was basically reconstituted with people who don't actually have the necessary expertise to say whether or not something qualifies as AGI. And the most memorable line in the lawsuit,
was when Elon Musk and his legal team quoted the musical Annie. And they said that, you know, basically for OpenAI, AGI will always be a day away, like tomorrow. And, you know, I think no matter what else you think of the lawsuit and the merits of the lawsuit, I do think that is...
an interesting and important point. So is this a fun thought experiment? Sure, but let's be clear. OpenAI has not achieved anything close to an artificial general intelligence. GPT-4 can do some pretty cool stuff, but it is nowhere close to the things that have been described to us by Sam Altman in this room as AGI, right? And so I think we're just kind of a long way away from that. I also think on the point of like,
How will we be able to tell who's qualified to make that decision? It should be pretty fricking obvious, right? If there's a piece of software that you can just put on your computer that can do any job in the world at a human level of competence or a superhuman level of competence.
I don't think you're going to need a blue ribbon commission to determine whether that's true. I disagree. I actually think it's going to be really hard to determine what does and doesn't count as AGI. I think that if you showed GPT-4 to someone in the tech world 10 years ago,
They would probably say that's AGI. It can write papers on any subject. It can tell you about anything. It can pass the bar exam. It is doing all these things that researchers previously thought it would be impossible for AI to do or that it would take decades for AI to be able to do.
So I think the goalposts on this do keep shifting. And I think, you know, there will be endless debates, and there already have been endless debates, about what is and isn't AGI. And so I think we'll continue to talk about that. But let's talk about how OpenAI responded to this lawsuit from Elon Musk. Yeah, because they put out a blog post. They sure did. Yeah. So this blog post appeared on Tuesday, a few days after Elon Musk had filed his lawsuit.
And the blog post doesn't really address this claim about whether GPT-4 is or isn't AGI.
But basically, in it, they say, you know, we don't believe Elon's lawsuit has merit. We are going to move to dismiss it. But there's also this piece. It says, quote, we're sad that it's come to this with someone whom we've deeply admired, someone who inspired us to aim higher, then told us we would fail, started a competitor, and then sued us when we started making meaningful progress toward OpenAI's mission without him.
Go off. So that is basically the top line of their claim: that Elon Musk is just jealous. He's just jealous. He's just, you know, basically a hater who was instrumental in our founding. We admire him, but he did not think we would succeed. We succeeded. And because of sour grapes, he's now suing us.
So that's their basic claim. But then they also include all these emails from back in 2015 through 2018, sort of the early years of the company. And Casey, what do these emails show? My favorite of the emails is from Elon in 2018. He sends it to Ilya Sutskever, Greg Brockman, and Sam Altman.
And Elon says, my probability assessment of OpenAI being relevant to DeepMind slash Google without a dramatic change in execution and resources is 0%, not 1%. I wish it were otherwise. Even raising several hundred million won't be enough. This needs billions per year immediately or forget it. Yes. So this is a very delicious email for the OpenAI folks because here you have –
Elon Musk saying there is a 0% chance that you can compete with Google or DeepMind. And of course, we now know that they're essentially in a neck and neck race to build these frontier models. And he's also saying you guys need to go out and raise a lot of money.
which is the exact reason that they moved away from this pure nonprofit model toward one where they created a for-profit subsidiary that would allow them to raise the billions that they needed to train AGI. So this is OpenAI saying, hey, numbskull, remember when you sent us this email and said we needed to do this exact thing? Well, we did the thing. And then you come around six years later and sue us over it. Right. And they also sort of take issue with this claim by Elon Musk that all of this software should have been open source, right? That the open in OpenAI meant that
when they built AI models, they should release them to the public, and that they went back on that promise. And they show an email exchange from 2016 where Ilya Sutskever, one of the co-founders of OpenAI, is talking about sort of what happens as we get closer to a very powerful AI system. And Ilya writes, as we get closer to building AI, it will make sense to start being less open. The open in OpenAI means that everyone should benefit from the fruits of AI after it's built, but it's totally okay not to share the science. And Elon Musk, according to this blog post, replies to that email with one word. He says, yep.
So essentially, OpenAI is saying, look, you knew from the beginning or very close to the beginning of OpenAI that we were going to have to raise a bunch of money and probably lose our pure nonprofit model. And you also knew that we were at some point going to have to stop releasing stuff to the public because it was going to be more and more powerful. We don't have to share all of the code to achieve this mission of being open. Yes. Although, of course, Kevin, when we read that email where Elon Musk says, yep, the one question that the OpenAI blog post cannot answer is how much ketamine was in Elon's system when he wrote that. Because depending on the level, he may actually have no recollection that he wrote that. That's true. That's true.
So this blog post also goes into some of the sort of reasons that there was this falling out between Elon Musk and OpenAI back in 2018. And they show that there was basically this disagreement over how the for-profit part of OpenAI should be structured. According to OpenAI, Elon Musk wanted majority equity, initial board control, and to be CEO of this new for-profit organization.
According to their blog post, they couldn't agree to the terms of the for-profit because they didn't want any one person to have absolute control over OpenAI. Elon Musk also apparently floated an option for funding OpenAI by having it basically attach itself to Tesla, so that OpenAI would essentially become a subsidiary of Tesla and Tesla could use all the money it makes from selling cars and trucks to fund the research at OpenAI. The OpenAI team obviously declined that offer. Yeah, and Elon has since taken that exact idea and used it with his new AI company, xAI, which has close ties to both Tesla and X. Yeah, so what do you make of this exchange? It's obviously very dishy. It's obviously full of sort of beef and feuding between these very powerful tech people, which makes it interesting to folks like us. But is there a real case here, or is this just kind of a bunch of rich tech guys sort of arguing with each other? Well, I do think that there is a case to be made that this lawsuit and the fracas around it wind up serving both parties, because Elon gets to take a stand and say, look at OpenAI. They are out there and pitch themselves as these kind of goody two-shoes who are trying to save humanity. But in reality, they have evolved into just another, you know, humanoid
capitalist factory like so many others. And that is arguably good for Elon. You know, he gets to make a point that I think resonates with a lot of people. You know, reading parts of this lawsuit, I felt a little bit like I was reading that classic Onion story, heartbreaking, the worst person you know just made a great point. Yeah.
Because I do think that there is some substance to the complaint that there is something about all of this that doesn't feel right. At the same time, OpenAI's blog post really serves OpenAI. They're able to come out and say, look...
Elon Musk is lying, like he has been caught lying about so many other things, and this whole thing is ridiculous. And so I'm sure this will, you know, stir up their fans and they'll get something out of it, too. What do you make of it? Yeah, I sort of largely agree. I think, you know, I don't
have any special insight into whether this is a good or substantial legal case. For all I know, it could get dismissed tomorrow. But I'm glad that we're learning more through this lawsuit and the response to it about how the people who built OpenAI were thinking back
then. Because I think it's really important to understand why these companies were pushing toward this goal that they were pushing toward, what they were worried about, who they were worried about. And honestly, I would like to see more scrutiny and access to information about OpenAI. Specifically, this is a company that has been very secretive for some good reasons. They don't want
everything to be out there about what they're working on. But we also just don't know a ton about what they're building and how they're building it, what their data practices are, things like that. Their governance is still a pretty big mystery to a lot of people. So whether it's through this lawsuit or other lawsuits, I just imagine that we're going to be learning more about OpenAI and how they build and how they think about this technology in the coming years. And honestly, I think that'll be a good thing.
Also, this is apparently true. On Wednesday, Elon Musk posted on X, change your name to Closed AI and I will drop the lawsuit. Which is obviously just a dumb joke, but I do think that it reflects that life is truly just a video game to this person. And he cares about almost nothing with any degree of seriousness. I mean, I think it's kind of...
And it's not a good point. And I don't want to say it's a good point. But I do think that putting open in the name of the company has led to a lot of misunderstandings. Like, no one expects, you know, McDonald's to share the secret recipe for the Big Mac sauce because they're not called open McDonald's. I once ate at an open McDonald's. It was the worst hamburger I've ever had in my life. Like.
Elon Musk has also not open sourced his AI stuff. Grok is not an open source AI model. Clearly, he doesn't think that everything should be open. But I think if you put open in the name of your company, people are maybe going to assume that what you're going to be doing is going to be open.
- You know, this is OpenAI's equivalent of Google adopting the mantra, "Don't be evil," which like exclusively became a cudgel to beat them with anytime they did anything that anyone anywhere didn't like, right? Same with Facebook and move fast and break things.
Originally, it was just sort of a little slogan designed to get engineers to ship a little bit faster, and now it's sort of synonymous with the company's misdeeds. So you've got to be real careful with your names and slogans at these companies because they do come back to bite you. Yeah, which is why I'd like to announce that effective immediately, the name of this podcast is now The Closed Hard Fork. When we come back, the most sweeping effort yet to rein in big tech. Will it work? Will it work?
I'm Julian Barnes. I'm an intelligence reporter at The New York Times. I try to find out what the U.S. government is keeping secret. Governments keep secrets for all kinds of reasons. They might be embarrassed by the information. They might think the public can't understand it. But we at The New York Times think that democracy works best when the public is informed.
It takes a lot of time to find people willing to talk about those secrets. Many people with information have a certain agenda or have a certain angle, and that's why it requires talking to a lot of people to make sure that we're not misled and that we give a complete story to our readers. If The New York Times was not reporting these stories, some of them might never come to light. If you want to support this kind of work, you can do that by subscribing to The New York Times.
So Casey, there are a few topics on this show that one or the other of us has always been reluctant to talk about because we think it's boring. For you, this is semiconductors. And for me, this is European tech regulation. But this week, you are forcing me to talk about European tech regulation. And my first...
Question is, why are you doing this to me? It's a fair question, Kevin. But look, I think a question that undergirds a lot of the journalism that you and I have done is, hey, these tech companies have done nothing but get larger and more powerful ever since you and I started to write about them. Should we maybe try to rein that power in? And if so, how could we do it?
We live in a country that has offered essentially no answer to this question. There's been endless hearings and screaming and people write laws that go nowhere. But what if I were to tell you that just across the ocean, there was another democracy that had big ideas for how you could maybe start to chip away at that power and maybe distribute it a little bit more broadly across the land? What if I told you that, Kevin? I would be slightly more interested. Is that happening? We're getting started.
somewhere. Yes, Kevin, we are. Because in 2020, the European Union began to pursue what would become the Digital Markets Act. It also has a twin, by the way, called the Digital Services Act, which we're not going to talk about today. But these are sort of twin bills that in various ways try to rein in the power of big tech and
And it is a very long process to get a law enacted in Europe. But the reason we're talking about it this week is that this was the week that the law went fully into effect. And so for the biggest companies in tech, they now have new obligations. And so they have been rolling out changes at a fairly rapid clip so that they are in full compliance with this law.
They're calling it Digital Markets Day or D-Day. No, that's something different. So I have been seeing various stories about tech companies that are attempting to sort of comply, or at least pretend to comply, with this new European tech regulation. It gets very complicated in all the details, but I'm hoping maybe you can help me understand what is going on and why I should care. It would be my great privilege and honor to explain to you some of the provisions of this law, Kevin.
The overarching principle here, which I bet you would agree with me about this, is that if you are one of the real tech giants, so we're talking about Google, Apple, Amazon, Meta, you shouldn't really preference yourself all the time. Like, you know, if you ever just like look up a flight on Google, you immediately see a box that says Google Flights. Right. Well, most people just use the default. And so Google has built a system that just funnels all kinds of travel revenue directly into its coffers. Now,
That's not evil necessarily, but it does mean that if you want to start your own business where you're selling flights, you're at a massive disadvantage against this company that is Google. Or think about if you buy a Windows PC. I mean, you've done this recently because you bought a gaming PC. Yes. And do you know what the default search engine was on your gaming PC that you bought? Probably Microsoft's search, Microsoft's Bing. It was Microsoft's search, Microsoft Bing. And did you find that annoying? No.
Did you have to change it? No, because I probably installed Chrome, on which the default is Google. Right, but my point is there was something you had to manage because one of these tech giants said, hey, we're just going to give ourselves a helping hand. Our market cap is in the trillions. We're going to give ourselves another helping hand up.
And then along comes the DMA, Kevin. Yeah, and the DMA says you can't do this. That's right. So there are a bunch of changes in here. That Google Flights example that I mentioned, that is no longer going to be the case. In Europe, Google is going to get rid of that flights box and other companies that are selling air travel are going to have a fair crack at things. Microsoft is no longer going to be able to set Bing as the default search in Europe. And there's more. You know, Apple is...
having to open up its iOS operating system so that people can bring in their own app stores, their own payment systems. So if there's an app that, for whatever reason, Apple won't approve, well, now maybe you're actually going to be able to run it on your phone. You paid $1,000 for the damn thing. Maybe you should have some say about what software runs there. Am I making any sense over here? Yes.
I mean, I've heard bits and pieces of this, and I actually have heard much more about the Apple piece of this for reasons that we should go into because they have been sort of rolling out all these changes in a way that strikes me as sort of undermining the spirit of the DMA. But I do think this is like starting to show up in real products that real people, at least in Europe, are using online.
all the time. Let me give you another example. Yes. You ever paid for anything with Apple Pay? Yes. You know, you sort of double-click the little button on the side of your phone and you're able to touch it down on a little NFC reader and you're able to pay for something. That's a nice experience. Kevin,
What if you were running your own payments company? Do you think you might actually want to insert your own little payments app onto the iPhone? Would that maybe, that could be cool, right? Guess what? You can't do it. Because Apple said no. Even though, you know, again, you're the consumer. You paid $1,000 for the dang phone. Apple is just deciding that you can only use the NFC chip for what Apple wants. Not in Europe anymore, my friend. So who knows what kind of crazy payment solutions we're going to get over there in Europe. Right. So, okay. Okay.
Let me do a quick check. How excited are you about everything I've just said? I'm more excited. Honestly, you're doing a pretty good job of convincing me that I should care about this. So I recently learned about something called looping for understanding. Have you heard about this? No, I haven't. This is from my friend Charles Duhigg, who just wrote a book about communication. And he says that part of being a good communicator is doing looping for understanding. So I'm going to repeat back what you've said to me. Verbatim, please. You tell me if I'm on or off base. Okay. So...
Europe passes this law, the Digital Markets Act, or DMA. By this week, tech companies had to show how they're complying with this. That's right. This law does many things. Among them, it makes it illegal for certain tech companies, the really, really big ones that the EU has designated as gatekeepers, to self-preference their own products and services ahead of competitors on apps that they own or platforms or app stores they control. Yeah.
And they have also forced Apple to open up its app store, basically to allow you to sideload apps onto your iPhone without going through their official app store. Exactly. Okay. Those are the big headlines from the DMA. Anything I'm missing? Those are some of the big ones. Look, there's a lot in there. I could give more examples, but I think that's like a pretty nice little package of stuff that might,
actually affect you, the listener, or you, Kevin, in your life that is going to happen as a result of the DMA. And is this stuff just applying to European users? Like for people who are not in Europe, will they see changes to the apps and services that they use as a result of this law? No.
Not yet, but let me tell you, Kevin, regulators around the world are paying attention. Japan, South Korea, Turkey, and the United Kingdom are all contemplating their own versions of this law, according to Bloomberg. I would be shocked if we passed something similar at a federal level in the United States, but I would not be surprised if individual states look to the DMA, particularly if it is successful, and look to implement similar rules in their own states.
Right. So the most I have read about the DMA and its various effects was actually about Apple and how it is complying or attempting to comply with the Digital Markets Act. There was an amazing post written a few weeks ago by John Gruber on his blog, Daring Fireball, that sort of broke down what Apple is doing in response to this new European law. And I think it is truly...
worthy of being described as dastardly. Yes. It is one of the few times we've described anything on the show as dastardly, but this is A1 dastardness right here. So basically, Apple is given this new set of rules by European regulators saying you have to open up the iOS platform. You have to allow these other app stores that are not your official app store. You have to let people sideload apps onto their phone. And they respond by rolling out this series of
changes for iOS users and developers in Europe. And one of the things that they do, according to this post that I read, is that they tweak the way that they do payment processing for apps. So basically right now, if you want to process payments inside an app on an iPhone, you have to use Apple's payment processing system and they charge a fee for that.
The DMA says Apple can no longer require you to use their payment processing system. You have to allow people to use other payment processing options. Apple responds basically by saying, okay, you are forcing us to open up the app store and introduce this other payment processing method. We are going to impose something called a core technology fee.
This is something that they've never imposed before. Basically, if you are a developer making an iPhone app and you choose not to use Apple's default payment processing system and pay them the associated fee, Apple is instead going to charge you an annual install fee for everyone who downloads and installs your app over a million downloads per year. So if you are
Spotify and you get many more than a million downloads, you are now going to have to pay about 50, what do you call it, 50 cents of a euro? It would be 50 euro cents, as they call them over there. So every time someone installs one of your apps each year, you pay that fee. This could amount to millions of dollars a year that these developers would have to pay Apple. I mean, honestly, tens of millions of dollars a year. Spotify came out with a group of other large app makers and they said, we will wind up paying
more money in all likelihood to use Apple's new system that was designed to save us money than we would if we just stayed on the old system in which we were also at risk of losing money. Right. So Apple says these changes will only apply to a very small percentage of developers. Which is such a cop-out because
Yeah, it's like the very small percentage of developers who make the absolute most money for Apple. It's like most app developers make no money for themselves or for Apple, but there is 1% of companies that are making all the money, and that's whose money Apple wants. And that was who was complaining about these rules in the first place, was companies like Spotify. So Apple is basically arguing that...
Because they build the iPhone and the App Store and the infrastructure and all the review processes that go into making sure that apps are safe when they're put in the App Store, that they are entitled to these fees from developers. They've also said that the DMA will effectively make users less safe because you'll be able to sideload these apps that haven't gone through their whole review process before.
You could get things that are offensive or pornographic or have malware in them or something, and this is ultimately going to backfire for consumers. Yeah, whenever I read Apple saying something about harming consumers, I just always replace the word consumers with profits, because then I think you get a closer approximation of what Apple's really mad about. It's like, wow, if we have to implement these rules, it's really going to harm profits, and profits are not going to be happy about this. Profits are going to be banging down our door saying, we hate this.
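To put a rough number on the fee we just described, here's a quick back-of-the-envelope sketch in Python. It only uses the figures mentioned in the conversation above, roughly 50 euro cents per first annual install beyond the first million installs each year, and the app names and install counts in it are hypothetical examples, not real data.

```python
# A rough sketch of the Core Technology Fee math discussed above.
# The 0.50 euro per install over 1 million figure comes from the conversation;
# the install counts below are hypothetical, for illustration only.

FEE_PER_INSTALL_EUR = 0.50          # charged per first annual install
FREE_INSTALL_THRESHOLD = 1_000_000  # first 1M installs per year are exempt

def estimated_core_technology_fee(annual_installs: int) -> float:
    """Estimated yearly fee in euros for a given number of first annual installs."""
    billable_installs = max(0, annual_installs - FREE_INSTALL_THRESHOLD)
    return billable_installs * FEE_PER_INSTALL_EUR

for name, installs in [
    ("small indie app", 200_000),
    ("mid-size app", 5_000_000),
    ("Spotify-scale app", 100_000_000),
]:
    print(f"{name}: ~€{estimated_core_technology_fee(installs):,.0f} per year")

# Output:
# small indie app: ~€0 per year
# mid-size app: ~€2,000,000 per year
# Spotify-scale app: ~€49,500,000 per year
```

That's the arithmetic behind the "tens of millions of dollars a year" figure: the fee barely touches small developers, but it lands hardest on exactly the huge apps, like Spotify, that were complaining about Apple's cut in the first place.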
Are you familiar with the subreddit malicious compliance? I did not know there was a subreddit. So this is one of my favorite parts of Reddit. You know the phrase malicious compliance. It's basically... Yeah, it's like I'm going to find a way to follow your rule, but in the worst way possible. Exactly. It's like I'm complying with the letter of the law, but not the spirit of the law. The example is like if your kid...
asks, like, can I have a bowl of ice cream? And you say yes. Then they, like, bring out the salad bowl, like the biggest bowl in the house, and they go, you didn't say what size bowl. Like, that is malicious compliance. And that is essentially what Apple is doing here. Did you ever use that trick, by the way? No, I didn't, but I should have. That would have been smart. So Apple has basically said, okay, you want to force us to open up the App Store to allow alternative App Stores to allow alternative payment processing options. We are going to make it very expensive for you to do that, but
basically we are now in compliance, we believe, with the DMA. Yeah. And I suspect we're going to see a lot of litigation around this. And, you know, Apple is not alone in challenging various aspects of the DMA. And as excited as I am about some of its provisions, I'll be the first to admit there is no guarantee that this stuff is going to work as intended. And one of the big reasons is
exactly what you just said. These American tech giants are being dragged into the future, kicking and screaming, and they are going to cling for dear life onto every penny that they wring from our pockets. And I truly am...
taken aback by how aggressively Apple has been fighting this. Yeah, so it's not just Apple fighting it. Google has also, you know, come out with some statements about how they believe this is going to be bad for consumers. So I guess this is my big sort of overarching question about the Digital Markets Act. I remember a few years ago, European tech regulators passed this thing called GDPR, which was, I forget what it stands for, General Data... RuPaul's Drag Race. I don't think
That was it. But basically, it was this big sweeping privacy law. And like, you know, I interviewed a bunch of European tech regulators and they would, you know, give these, you know, sort of stem winders about how they were preserving dignity and privacy for their citizens and keeping data sovereign inside the EU. And it sort of sounded like, you know, they're storming the Bastille or something like that.
And then fast forward a couple years, and the only tangible effect that I have felt as a user from GDPR is that whenever I go to Europe, I have to spend like half my day clicking through little buttons that say like accept cookies or reject cookies. Like that is the only thing, honestly, that has changed as a result of GDPR for me. You don't even have to go to Europe to have that experience. You could just sit on your laptop in America and click on those cookies. But have you ever gone on the European internet in Europe?
I sure have. Well, it's like playing a video game. Like, how many times do I have to click through a screen to accept cookies? There are aspects of it that are really silly, but look, I think, you know, GDPR had one really good idea in it, which is that if a company is out there somewhere collecting data about you, you just have a right to know that. You should be able to petition any company that has been collecting data about you, saying, hey,
What do you know about me? And after it was passed, that law got copied in other places, among them California, where we live right now, which means that if you're worried that one of these companies like Clearview AI is collecting a million pictures of your face and then selling it to a police department, you as a Californian can now go to, uh,
you know, a regulator in the state and say, hey, I want you to tell me everything that you know about me and possibly even delete those things. So did GDPR, you know, create a bunch of silly pop-ups that affect no one in any positive way? Sure. But there were good ideas there. And I feel like we see this with European tech regulation all the time, which is it never gets us
all the way to, oh, phew, big tech has been reined in. We can now move on with our lives. But it does introduce these little ideas that are good that can get picked up by other countries, lawmakers, regulators around the world. Can I ask you a couple other questions? Ask me. So we know from sort of the history of tech regulation that often, you know, tech companies will fail to comply with some new law and they'll get fined or they'll get their wrist slapped. But the fine is not big enough to actually force them to try to change their practices. It's just kind of a slap on the wrist. Right.
So is that the kind of thing that we can expect to see more of here with the DMA? Is it just companies violating this law, getting fined by European regulators, paying a fine that's chump change, and then continuing to go on with their lives? Yeah, well, so the DMA has a provision where if they're found to be in violation of the rules, they can be fined up to 10%, and for repeated violations up to 20%, of their global revenue worldwide,
or as they call revenue in Europe, turnover. Do you know that in Europe they call revenue turnover? Wait, really? Yeah, and in America, turnover is a pastry. But in Europe, it is revenue. Really? Yeah, so they can be fined up to 20% of their global turnover slash revenue. Wait, I don't get that. Why do they call it turnover? I guess it's just because, like, you know, you got the money coming in, you're turning it over and putting it into a bank. Like, who knows? Who knows why they do things in Europe? I'm not European. I'm not European.
Wow. Yeah. That really bothers me for reasons I can't articulate. Yeah. Well, anyway, look, let's just say that 20% of revenue is not a slap on the wrist. It is a punch in the mouth. And do we have any sense so far? I know it's very early, because it was just this week that tech companies were required to show that they're complying with this new law. But do we have a sense yet of whether it's working as intended? Yeah.
Well, no, is the answer to that. You know, our producers put a great question in our prep document this week, which was, how will we know that the DMA is working? And it's a tricky question to answer, right? Because
you know, I would love to tell you that because the DMA went into effect, 10 years from now there are going to be five major search engines and six major smartphone operating systems and 11 major e-commerce platforms around the world, right? To me, the ideal would sort of be that we distribute the balance of power much more broadly across companies and across regions, so it doesn't feel like the fate of humanity is in the hands of five companies. That is what I actually want. The DMA is not going to get us all the way there, but I do think it can get us some of the way there, right? If Google isn't putting its own vertical search results on top of so many different categories of searches, there might be room for new competitors. New kinds of businesses might be able to...
Yeah, I would say that so far what I am feeling about the Digital Markets Act is sort of analogous to what I'm feeling about the lawsuit that Elon Musk filed against OpenAI, which is like, you know, it's a little bit of a...
Does this make a lot of sense on its face? Hard to tell. Maybe. I'm skeptical that the actual complaints here are valid. But I do think that there is a process of learning taking place here, and a public education around: what is the structure of OpenAI? What is the structure of the Apple App Store? How does it sort of treat developers? What are the terms of building apps for iPhones? Like, these are questions that I think a lot of people have not historically known the answers to and are now finding out through the process of watching these tech giants try to comply with this new law. So regardless of whether or not the DMA has the intended effect,
I feel okay about its existence because we're just learning so much more about how these companies operate. And I think if three or four or five years from now, we look back and say, hey, the internet is actually kind of broken in Europe, it's easy enough to sort of undo the regulations there. I don't think there's anything wrong with running the experiment.
Yeah, I agree. It's very easy to look at any new tech regulation and figure out a million different reasons why it probably won't work, why it won't have the desired effect, why it'll have these unintended consequences. And that could be an excuse for tech companies to essentially throw up their hands and say it's not even worth trying. And I am just here to say it is worth trying, okay?
We do not want to live in a world that is run by five for-profit corporations. We want to figure out a way to make them open up a little bit, play nice with others, create opportunities for other companies. And this is the most significant effort we have seen in the world to do that so far. So while, again, I am skeptical that it's going to get us even halfway to the finish line, it is a place to start and we can build from here. And let me just kind of
steel-man the opposition here that you will hear from people in the tech industry and people who work at these companies, and have you respond to it. People in the tech industry will say, this is Europe trying to regulate because it can't innovate, right? Europe has not built any companies that are the size and scale of Google or Apple. And the people in the tech industry who are skeptical of the DMA will say, this is just them trying to clamp down on innovation, and that
Europe is sort of turning itself into a technological backwater. They don't have a huge, vibrant startup scene there in part because they have decided to strangle promising new technologies with regulation rather than letting them play out naturally. What do you say to that argument? I mean, I...
I think I can honestly just accept it and say, but it doesn't really matter, right? Because in the United States where we don't have any regulation, we're also not seeing a lot of innovation. When was the last time a huge successful new search engine came along or a huge successful new social network came along or a huge new successful e-commerce company came along or a huge new successful smartphone operating system came along? We have
all the room to innovate here in America, and we have none of that, right? So I think it's just a good thing that there are some countries on Earth that want to encourage innovation a little bit more. And I do think that these are pro-competitive steps that they are taking that will sincerely benefit companies both in Europe and elsewhere.
Casey, I have to say, you have convinced me that this particular European tech law matters. Thank you. And that I need to pay attention to it, which I thought was impossible. Well, it was my pleasure, Kevin. You know, often on this show, you are driving the train and you do a fine job at it. And I get intimidated when I have to try to walk you through something. But this was something that I really wanted to go
through because I think it matters. Yeah, it does. At the very least, I think that we should continue to follow up on this story by taking ourselves on a European vacation later this year and seeing how the DMA is playing out in practice. What do you think? What is our travel budget? Have we ever looked into that on this show? No, but I think our turnover is high enough that we can justify at least a week. When we come back, a Vision Pro meets a vision amateur. Joanna Stern comes on to help Kevin use his new toy.
Casey, a very exciting thing happened in my life in the past week. What's that? My Apple Vision Pro finally arrived. Wow. Well, I know this is something you have been excited to get your hands on basically ever since we tried it. Yeah, and it took a little while for reasons that are complicated and probably not all that interesting, but have to do with how one gets a new piece of technology at the New York Times. Specifically one that the New York Times and not yourself is paying for.
Correct. And so it took a while to sort of get the necessary approvals and to get it shipped out. But I do have it. I've had it for about a week now. And I was really excited to talk about it with you, except you don't have one.
Is the Platformer technology department not springing for one? Well, the Platformer procurement policy has said we'd maybe like to wait for version two or three of this stuff. Okay, fair. But I do want to talk about this because I have been having this experience of using this thing every day since it showed up at my house.
And it is a wild piece of technology. I think it has things that are really much better than I expected. It also has things that are sort of puzzlingly bad and much worse than I expected. You know, it's been a month since this thing came out, and some people have been using it for most or all of that month. And I think it's time to sort of step back and say, well, this thing that came out, that had all this attention around it, all this excitement, all this skepticism:
What is it actually being used for? How are people liking it? And so today I thought we should bring in Joanna Stern. Joanna is a personal tech columnist at the Wall Street Journal, and she was one of the early reviewers that Apple actually sent an Apple Vision Pro review unit to. And she's written a lot of great stuff about her experiences using it.
And I want to just talk to her about what the past month of using this device has been like for her and talk about whether we think this thing is here to stay or whether this is just kind of a fad and a novelty item that is not going to be that widely used. I like that. You know, I have a lot of friends who have got the Vision Pro, and I would say their opinions have been pretty mixed. But since you've gotten your hands on this thing, it seems to be bringing you a lot of joy. And I think it's
brought Joanna some joy too. So I'm curious to hear you two trade notes, particularly because Joanna, as you point out, has had this thing for a lot longer and if nothing else, I think she might have some pro tips for you. Let's bring her in.
Joanna Stern, welcome to Hard Fork. I'm so excited to be here. Hi, Joanna. Seriously, long time listener and viewer. Long time first time, as they say. Long time. Thank you, Joanna. First time. We're excited to have you here. You say that to everyone. We actually did not say that to Kara Swisher. We said we're terrified to have you here. That's true. So true. So you were among the anointed...
who were given early access to the Apple Vision Pro in order to test it. You made this great video and wrote this great column when it first came out. You spent 24 hours wearing this new device. You showed off all these different features like putting timers over pots that you had simmering on the stove. You went skiing in the Vision Pro and you used these personas, these little 3D renderings of your face on a FaceTime call with some other early testers.
And I just got mine because, for reasons that are not worth going into here, it takes a little while in the New York Times procurement process. So I've only had this for about a week, but you've been trying this now for a month.
And I would first love to ask you, like, what were your first impressions and what are your impressions now, a month later? Well, I'd like to ask you about the New York Times procurement process. Don't get me started. I think that what I've really been feeling is that I want to love this and I want to wear it.
But I don't. I don't actually gravitate towards wearing it a month in. And so that honeymoon period of when you get a new gadget and you're like, this is awesome. It smells so good. It feels so good. I can do all these things I can't do with my other things. And that's just like general tech excitement about any new product.
it wears off here. And it isn't only that the excitement wears off, it's also that the use of it wears off. And so the things I find myself going towards it for now are not the things I actually thought I would. Like working was one I thought that, oh, I'm going to work in this all the time. It's going to be so great. I'm going to have all these monitors. I'm going to bring it to and from the office every day. Well, no, it's way too heavy to do that every day. My backpack is not big enough. I would have to buy a new backpack. I don't want to buy a new backpack. And
buying backpacks for women is very tough. That's a whole other podcast. And then there are these things I'm using it for a lot more now that I didn't think I would, and that was more in the entertainment world. Talk about some of the other friction that comes to mind when you think, I want to use this thing more, but I'm not actually doing it. What are just kind of the steps to use the Vision Pro that make you think, ah, the heck with it, I'm just going to use a laptop? So I think the biggest is that it's the biggest, literally. And I think that in the first couple of days of use, you sort of put up with these compromises because you're really getting used to it. And also, it's not as bad, it's not as wearing. And so then after you wear it for a number of days in a row, you're like, I kind of need a break from this thing. So that was the main thing. And I think there's a couple of things that come with the steps of setting it up, right? So I've got to take it out, which is not a big deal. I could keep it on my desk, right? But I've got to make sure the battery's charged and all the things are sort of set up. So it's not like you can set it up so it's, like, plug and play and you're ready to go, at least not if you are traveling to and from work, which I do a lot. I come in and out of the office a lot. So that's one place, which is, again, one of the reasons I find just keeping it next to the side of my bed
is easier than, Kevin, I don't know if you've, do you have the travel case for it? I do, yes. You have the pillow. I call it the marshmallow, but yeah, same idea. It's this giant, white, soft, fluffy case. Don't you have it here? Isn't it here? I do, yes. Do you want to show it to the camera? Sure.
Well, no, because it's a delicate tower that I've arranged over here with my electronics, and I don't want to knock anything over. It is actually interesting that even just showing the case on camera was too much of a hassle. That says a little bit about it. That is actually a good moment right there to say, like, I've got to pick this case up. I've got to pack everything in it. I've got to make sure the battery is connected. Everything's right. Yeah.
I could just pick up my phone. I could just open the lid of my laptop. And I mentioned this a little bit in the column. I do think this thing is great for public transit and for flying. Like, it was a wonderful experience flying with this thing, because you were just like, yes, the plane really does suck as much as we all thought it does. Like,
Yeah, talk about your experience flying with the Vision Pro on because this is something that we've talked about. These things are starting to show up on airplanes. And other reviewers have said this is the single best use case for the Apple Vision Pro is being on an airplane. What was your experience like? It was that. I had to fly on a quick business trip down to Florida. I decided to pack it.
And I said, oh, I'll watch something quick on it. I'll just try it out. And then I ended up wearing it for the three-hour flight, because it really took me out of the seat. And I do describe this in the column, and I'll explain it here a little bit too, because it was a miserable flying situation. I had booked the ticket in the last 24 hours, and I get to my seat, and there's a woman who wants to sit on the aisle and her husband wants to sit in the window seat. And
I am in the middle of them, and there's no budging. They do not want to move. I'm like, okay. And I sit down, and she wants to talk to me and be friends with me. And then they're passing things back and forth across me. It was like something out of a...
Wait, this is truly the worst flying situation because people do that now. They book the window and the aisle if they're flying together because they think, well, no one's going to want that middle seat between us. And then people end up booking the middle seat. And now you're just stuck between this couple that is passing snacks and trying to talk to each other over you. Throwing Cheetos at each other. And I put this...
thing on, and I was blown away, actually, by just how seamlessly the United Wi-Fi worked, because those words had never come out of my mouth before. I quickly get on United Wi-Fi, I'm already at their, like, free entertainment tab, I'm streaming Friends, and it is, it is the future we were promised.
And it just takes you out of that situation. You can turn the dial, and yeah, I'm on Mars or the moon or whatever environment I was in. And yeah,
So you did like traveling with the Vision Pro. You do not like working with it as much as you thought you would. Let's talk about the good. What impressed you? What have you actually found yourself going back to use this thing for, a month later? Really watching stuff. I mean, and I mentioned this in the column, but
My wife likes to watch Love is Blind, and I do not. I do not care for this show. No offense to any listeners. And so I find it to be very dystopian, but sometimes I will put on the headset while she's watching on the couch. I put on the headset. I put my AirPods in, and we can be together forever.
but we're not together. See, this is, I do think this is one of the use cases that I am most excited about because my wife and I, you know, we like to watch TV together, but we also have some different tastes. You know, she's a fan of the Real Housewives franchise. I'm not that invested in that series. But,
But so I have also used it this way as kind of a way to say like, I want to be in bed next to you watching TV, both of us, but I'll watch my show and you watch your show and we'll sort of happily coexist with each other. It's great for that. They should market that. We can have a first wives club of the Vision Pro. Yeah.
Yeah, a support group: my spouse has a Vision Pro. Yeah, I've literally been talking about this with Nilay Patel as we were doing the review, that our wives should just get together and talk about the reviewers of the Vision Pro, because it's the same exact situation. And it does sound dystopian and sad, but also it's nice. We still want to be next to each other, we're just not watching the same thing. Can I ask you about something that I've started to notice as I use this thing more often?
Do you feel any kind of, you know, let down when you take off the Vision Pro and you no longer have, you know, you're no longer surrounded by screens and moving things and videos? You're just kind of in like base reality and it's just like what your two eyes can see. Do you ever feel like, I kind of wish I had those screens back? Yes.
So for the original video I did, I really did wear it for an unhealthy number of hours in a row. And when I would take it off, there was something that happened with my consciousness and my mind where I was like, wait, is there supposed to be an app there? Yes. It messed with me to that degree. It messes with you. And I would take it off and I'd be like... I keep trying to pinch things. Right.
And I'd be like, oh. I heard you got a note from HR about that. That's why it got lost. I keep trying to close Casey out. Close window. And he's not disappearing. Absolutely, it does. And that's where I think some of this future stuff is really compelling. You get used to seeing digital stuff in your real world and you're like, where did it go? Why isn't it there anymore? Yeah.
You're like, oh, I went in the living room. I thought I left a window in there. I keep calling them windows, but an app. They should have called this Windows. I have to say, the Luddite side of me that does feel like we actually should just shut down all the technology to see what happens is really coming out hearing you describe your little bespoke realities that you're creating for yourself inside your dystopia machine. Like, I don't know, you guys. Well, I also want to ask you about the reaction of other people to you wearing this device, because my experience so far in the time that I have
had the Vision Pro is that when you break it out, like I was at a gathering of friends this weekend and I brought the Vision Pro. - What poor person's gathering did you ruin with your Vision Pro? - Well, I thought like, I'm gonna take some spatial videos, I'm gonna demo it, pass it around. And I would say like half the people at this gathering wanted to try it and put it on.
And the other half were completely repulsed by it. They were like, get this thing out of my field of vision. I do not want to be in the same room as this device. Have you had similar reactions from people in your life?
No, everyone in my life really loves me and just loves me for who I am. So I'm sorry that you haven't surrounded yourself with such loving people. I mean, I wore this very early on because I had the early review unit, and I had it in the office after I was able to say I had it, which was after the embargo. And people were always just coming by and pointing. And I have an office with a glass window.
So they, like, also think I can't hear them, but I can hear them and I can see them. And I would just keep telling people, yes, I can see you, and you're naked. That's very good. So if you had to assign an overall grade to the Apple Vision Pro after your month of testing it, what would you give it? Do I get to break out, like, certain things? Sure. Sure.
Like travel is an A. Watching Love is Blind, A, because I don't have to watch it. Spousal avoidance, A. There we go. Spousal avoidance, A.
Working is a D. FaceTiming, F. Not a big fan of the personas. It's just, it's useless. Nobody's taking me seriously. And I haven't tested the beta, which is supposed to make some improvements. But if you call people and they're laughing, it's a humorous call and you're not getting anything done.
Yes, there's no way to surreptitiously enter a meeting and not have the entire meeting be derailed by the presence of your creepy VR avatar. Yeah, everyone's just laughing and mocking you and saying you look like you've been Botoxed to hell. Terrible things have been said to my persona. It's just...
People are so mean to the AI avatars. This is a big problem. The other thing that I really do think a lot about is the way to capture video on this thing. I don't know, Kevin, if you've done that at all. And I know you're a recent parent. The spatial videos? Yeah, the spatial videos. Is that what you're talking about? Yeah. Yes. Watching those in here is really compelling. It's amazing. Yes.
But also, the idea that you can capture video with a camera on your head, that's really where Meta has broken through on the Ray-Bans. And this is obviously not the form factor that Apple should go for, but I do think that's something down the line for Apple, whether it's a different form factor or this. I just shot a video this week on Tesla chargers and my Ford, but I wore those Ray-Bans the whole time and recorded a lot of the footage, right? I pick those up a lot now to get first-person video, whether I'm
doing it for work or doing it with my kids, because I go skiing with my kids and I don't want to be holding a phone. So there's a lot that I think is coming with head computers. Yeah, I agree. And I think the spatial photos and videos are something that, basically, if you're going to buy one of these things and spend all the money to get one, that is the feature you're probably going to end up using the most. At least I find myself most excited about that feature. I've taken a number of
spatial videos. These are these 3D videos. When you watch them in the Vision Pro, it feels like you're in the memory. It's very...
sort of uncanny. And yeah, I've been using that a lot. You can also take those on a new iPhone, so you don't have to be like wearing the headset everywhere you go. But that I feel like is a feature that Apple should tout more because that is just so compelling and so different from what's out there on other devices. - Interesting.
All right. So we've learned a lot of lessons about the Vision Pro over the past month. Sounds like, Kevin, you have some things to look forward to as you get used to your new purchase. But at the end of the day, I feel like what I'm hearing both of you say is if you were inclined to just ignore this thing for now, you can absolutely just ignore it. Is that a fair assessment? Sure.
Fair. But Casey, you've done a demo. I'm going to interview you now. You've done a demo. Do you feel any yearning to get one of these and try it out? Do the week test?
I do feel a twinge, and I do think a week test might actually be the most fun for me. When I tried it, the thing I said on the show was, if I had this thing, I think my main use for it would be entertainment. That was the stuff that seemed the most compelling. Watching the video, doing the little VR dinosaur experience, that's what I wanted. At the same time, Joanna, I kept thinking about my experience using the meta headsets, which was, I would use them for a month, and I would put them in a drawer, and I would never get them back out again.
And I just thought, I'm not willing to spend almost $4,000 to have that experience. And I still think that is the case. At the same time, you know, I love to play video games. I love to play my PlayStation 5. The moment that I can play, like, a PS5 game and project, like, the entire world of Diablo 4, the game I'm playing right now, onto a wall,
and like play with my PlayStation controller. That's amazing. So I'm very much like in the camp of, yes, there is a there there. It just feels like one of those things where we're several years and product iterations away from me using it all the time. Yeah. I'm curious what you make of this comparison between the Vision Pro and the Apple Watch.
because as we've talked about on the show before, the Apple Watch, when it first came out, faced some of the same kinds of criticisms. People said, what is this for? Why do I need another screen? Why do I need another thing that I have to remember to charge every day? Why do I need my text messages to come through to my watch? And it actually took a couple of years for Apple to realize
what this thing was actually good for, which was fitness and step tracking and things like that, and to really sort of lean into those features.
And now it's the best-selling watch in the world, and it makes billions of dollars a year for Apple, and it's a huge success. So does this rollout of the Vision Pro remind you at all of the Apple Watch? And is there anything we could learn from sort of watching earlier generations of pundits, you know, scratching their heads, trying to figure out what that was for? Yes and no. I think you hit on the ways that it does, right? They didn't quite know or figure out what the
the killer app, I hate using the term, but let's use it here, was going to be for watches. Fitness certainly became one of them. I think fitness is going to be one on the headset as well. I think that's just something Apple wants to push throughout its product line. But one thing I do keep saying with this is that with the iPhone, we absolutely knew what its purpose was before it came out, right? Phone calls, texting, email,
Those things were established by the category already. Same with the watch and wearables. We knew that wearables were good for telling time and for working out. Fitbits had been around. They had already seen that category grow. In this category, you've got
Gaming, right? I mean, what are the real reasons people buy VR headsets right now? Gaming. So Apple's got to break out into those other categories, because, Casey, you hit on it before: if you could play some of your PS5 games in here, maybe you'd be really excited about it. But guess what? You can already buy a headset from Sony, right? So what is this thing for? And that's where I think it's different.
- Here's my tip to Apple. I think they should take a page out of the Apple Watch's book and show you the time when you're using the Vision Pro. - That's a good idea. - This is one of the craziest things about it. - Such a good idea. - It has no clock. - Oh no, there is. - Have you noticed this? - It's like a Vegas casino. - Where's the clock? - You do, you have to go up. You have to go up into Control Center. - You gotta go up, Kevin. - Oh, see, I hadn't found the clock, and I just felt like I was in a casino.
For what it's worth, I was making a joke because I assumed that the time would be very visible somewhere in the operating system. It's not. I thought I was crazy. I was like, is there really no clock on this thing? Okay, now this has been upgraded to a legitimate suggestion. Yes. Show the time. In fact, on the little part of the mask where it shows your eyes, get rid of the eyes and just show me, you know, 1:05 p.m. That's a great idea. You're a walking alarm clock. Yeah.
A walking alarm clock. That's a billion-dollar idea, baby. Great idea. No, Kevin, the thing about the clock, I actually had meant to mention it in the first review and it fell out, like it just got cut along the way. It is maddening. And I keep thinking it's on purpose. So you do lose track of time in there, and you're like, what time is it? Oh, my gosh, I've been in here for three days. Yeah.
Well, as I keep testing this product, I will keep your experience in mind. I hope to discover some more things that it is good for, but I share a lot of your frustrations with this device already, and I think there are still a lot of bugs to be worked out. And how will we know? Are you going to keep this? Is this a New York Times-owned Vision Pro? This is a New York Times-owned Vision Pro, so I will keep it unless they pry it away from me.
Which they might do. We'll see. We'll see how this podcast goes. I've actually asked them to pry it away from you, so hopefully my prayers will be heard. It is good for trolling your co-host, I've found. I did have a good time making an "I'm with stupid" sign and just hovering it over Casey's face. Which only you could see.
until you airdropped it to me so that I could see it. I'm just saying, there are cheaper ways to troll you, but few are as satisfying. Well, I look forward to watching this video of this podcast in my Vision Pro in a Safari window. Oh, you gotta let us know how we look at IMAX resolution. You'll see every pore. All right, Joanna Stern, thank you so much for coming on. Thanks, Joanna. Thanks for having me.
Now, Casey, I'm sorry to admit that I committed podcast infidelity this week. Oh, no. Did you go back on Joe Rogan? No, but I did go on The Daily, which is a small boutique New York Times podcast about the news. And how often does it come out? Well, I...
Amazing you should ask. It comes out every day. That's incredible. Yeah, and I was on this week to talk about Gemini and the whole debacle over its image-generating capabilities. Yeah. And you can listen to that episode in the Daily feed. Very good. I can't wait. Yeah. It was nice to talk to a real journalist for a change.
If you haven't already, check out our YouTube channel. It's at youtube.com slash hardfork.
Special thanks to Paula Szuchman, Pui-Wing Tam, Kate LoPresti, and Jeffrey Miranda. As always, you can email us at hardfork@nytimes.com.