She saw AI's potential and wanted to cover the biggest technology hitting the scene.
She uses it to simplify technical documents and white papers by uploading them to GPT-4o and asking questions.
Use AI for low-stakes tasks and be mindful of data privacy.
Planning travel, comparing health insurance options, and writing stories for children.
The models may use the input data for training, potentially leading to privacy issues.
Netflix algorithms, self-driving cars, TikTok algorithms, and automated technology.
Claude Opus by Anthropic.
More accurate voice interactions, advanced code generation, and AI agents.
They are designed with guardrails to prevent inappropriate or harmful outputs.
Volatility similar to late 90s internet, with long-term growth.
Support for this episode comes from AWS. AWS Generative AI gives you the tools to power your business forward with the security and speed of the world's most experienced cloud.
Hi, everyone. This is Pivot from New York Magazine and the Vox Media Podcast Network. I'm Kara Swisher. And I'm Scott Galloway. And you're listening to our special series on AI. We talk a lot about the business of AI, but today we want to focus on the ways we can actually be using it in our day-to-day lives. But here to chat with us about some of the AI basics is Kylie Robison. Kylie is a senior AI reporter for The Verge. Welcome, Kylie. It's good to talk to you. Yeah, thank you for having me.
So, first of all, talk a little bit about how you personally use AI in your day-to-day life. And why are you covering it? Yeah, you know, before this, I was covering Twitter at Fortune magazine, which is run by Kara's favorite person. And that wasn't, you know, easy to cover. And I thought AI...
showed a lot of promise and was just such an interesting area to tackle for a young reporter. I mean, who doesn't want to be covering the biggest technology to hit the scene? And I live in San Francisco, so it just seemed perfect. But it is perhaps the most stressful beat I've ever had because it's so large and so nuanced and people argue about it all day. About using AI in my day-to-day life, I would say it's not...
something I use heavily. I do use it for, like, I upload a document. For instance, OpenAI will release these safety cards for their models that show, like, this is how safe it is. So I can upload that PDF to GPT-4o and ask questions based off that PDF. Do they mention this? Can you expand on what this means? So, like, really heavily technical documents or white papers, I can ask questions and, like, simplify it in a way that's
more helpful and it's quicker for me to understand than going to a bunch of researchers, making a bunch of calls to get them to explain it. I think that's been really helpful for me. I know Scott uses it for writing. I think I've noticed Claude can be kind of a helpful writing assistant, but in terms of actually using it to write, I don't use that because I don't think it's helpful yet. But just as a partner, as Scott has mentioned on the podcast, as a
Here are some of my rambling thoughts. Can you streamline what I'm trying to say and edit it for me? Right. Absolutely. So give us a few tips for someone who's looking to implement AI in their lives, like daily tasks that could be made easier. When people ask you this question, obviously, Google is integrated into writing emails, for example, which I don't find useful. But bills, resumes, and much of what the average person knows of AI is ChatGPT and related tools.
Try to expand on that. What other tools are useful for them? Yeah, there's so many different AI tools now. I think, you know, a lot of people use Grammarly so it can check your grammar in your browser, which is really helpful. I think, you know, when you want to use AI, I think you should consider it for low stakes tasks. You have to be mindful of your data privacy because often these models will use what you input to train the model. So you don't want it someday spitting back, you know, your business
bank account information. Which is really funny, because I'm a big listener of Pivot and Hard Fork, and Kevin, one of the hosts there, had uploaded his bank statements to NotebookLM so it could create a podcast to help him with his financial information. Which was really interesting; that's something I think people want AI to be capable of right now, like, can you help me budget? Which
You know, I think, again, low stakes tasks, thinking of the data you upload, try not to upload sensitive information. I think just like I said with the PDF, that was really helpful. I think, you know, I used it. I just turned 26. So I used it to compare health insurance. These are my needs. These are the options I have. Here's a PDF of what they offer. Which one should I choose was really helpful. Stuff like that.
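For readers who want to try the document Q&A workflow Kylie describes, here is a minimal sketch of one way to do it with the OpenAI Python SDK, outside the ChatGPT interface she is using. The file name, the question, and the use of pypdf for text extraction are illustrative assumptions, not anything specified in the episode.

```python
# Minimal sketch: ask questions about a local PDF with GPT-4o.
# Assumes the openai and pypdf packages are installed and OPENAI_API_KEY is set.
from openai import OpenAI
from pypdf import PdfReader

# Extract the PDF text locally; only the text gets sent to the API.
reader = PdfReader("model_safety_card.pdf")  # hypothetical file name
document_text = "\n".join(page.extract_text() or "" for page in reader.pages)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Answer questions using only the provided document."},
        {"role": "user", "content": (
            "Document:\n" + document_text +
            "\n\nQuestion: Do they mention red-teaming? If so, explain it simply."
        )},
    ],
)
print(response.choices[0].message.content)
```

As the conversation notes, treat this as a low-stakes use: depending on your account settings, the text you send may be used for training, and very long documents can exceed the model's context window.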
Nice to meet you, Kylie. By the way, I used it this morning. I asked AI, why am I so broke? It immediately sent me a copy of an e-mail
confirmation from Amazon that my Mexican cat costume will be delivered tomorrow. That's a little AI humor, Kylie. It's a little AI humor. I'm so sorry, Kylie. As a young woman, you needn't endure this, but here you are. Go ahead. Or I'll say, how can I feel better about myself? And it'll just come back with, good morning, you fucking stack of sunshine. So it has. Anyway, a lot of people use AI. A lot of people haven't even started using
What would you suggest someone does to get started? Which one or two LLMs would you suggest they download, the free version even? And how do they start to try and unlock the potential and just understand it more? And of course, I'll start with an answer. The first time I really interfaced with AI was trying to figure out fun stuff for me and my 14-year-old son to do in London. And it kind of went from there. What two or three things would you suggest someone
to help people get started. Yeah, I think that's a really helpful example and something I think the makers of these models want people to be using it for, rather than highlighting any nefarious ways to use AI. They're hoping people use it to write a story for their young child, with pictures. You can use ChatGPT for that, or Grok.
And you can use it to book travel or like to plan your travel, which is really cool. And I used it just when I went to Spain. I was like, what should I go see? So I think those are really helpful examples. Again, these are all low stakes tasks that you can use any really any chatbot on the market that's capable enough to do.
use creatively. I think you have to check to see if these places are open and exist because it can hallucinate very confidently. But yeah, any model right now, they're all kind of on par with each other. A lot of them are because they're all working towards the same thing. So you can use it to decide what
hair color you want to do next. Anything that's just fun and low stakes, I think, is easy. Low stakes. So you did mention privacy. Now, we do put a lot of private information online. My bank statements are online, everything else. But I really limit what I use here because of that. Because I'm like, we're very worried. And I'm someone who's very aware of privacy. I'm like, Scott, when you said you put your medical records, I'm like, I'm not putting my medical
records into OpenAI. No way. They already know, Kara. Your privacy's gone. Yeah, I guess. But I just don't want to help them along to, like, put all the things together. And so I just, they definitely don't have my heart surgery stuff. They don't. They don't. And so... Are you worried about the CCP, like, scaring you to give you a heart attack? I don't know. I just, I'm just telling you the feeling I had. Kylie, this is what I deal with.
This is what I deal with. This is my feeling. Like, I don't want to give them too much personal information. I don't mind it on writing things that are low stakes. But then now we put lots of stuff on the regular internet. When do you imagine that crossover? How should the average person feel about that? Because we don't know what this stuff is being used for, correct? Like, there's not as much transparency as there needs to be. No, there is not as much transparency. And they...
claim that it's because of, like Scott said, the CCP, or, you know, anti-competitive reasons. I think, you know, they've already hoovered up the entire internet. There's no going back from there. All these models have hoovered up the entire internet. It's something I've kind of reckoned with. When they said, you know, Photobucket signed over all of its data to train large language models, I was like, dang, whatever I put on Photobucket that was stupid when I was 11 is now going to be used to train large language models, and 11-year-old me wouldn't have known that.
So I think it's kind of... It's a really tough position. People on Instagram, celebrities were like, you know, Meta does not have...
the right to use my photos if I post this story. I think people are really protective over the lives that they have shared with the internet, that they have been encouraged by these large companies to share with the internet. And I think they feel like it's being taken away from them and used to train these black box models. I think people have different opinions on it. I personally feel I have the heebie-jeebies about it because I grew up with the internet, with Facebook launching when I was...
a young teen. So I think it's a very tough position. I think some people are like, I don't care. So do you think that people are more worried? Because a recent study found that one in nine Americans use AI every day at work. That's a very small number, like, at this point, right? Where are we in that? Do you think everyone's just going to do it? Like, not do it, it's going to be done to them and it'll just be foisted upon them by Apple Intelligence or whatever. You know, do you want to get an Uber? You're at the airport. That seems like a good thing, for example. Yeah.
Totally. I think automating, you know, rote tasks is not a bad thing. When it comes to the workplace, I think a lot of workplaces are well aware of the data privacy issues. And they're like, please don't upload our internal documents to OpenAI. That's been a problem. It's trained in a lot of workplaces. So one in nine doesn't surprise me. I think it's going to continue to grow. I just
published a story today about AI agents, which is just, like, the new next thing, sort of an AI assistant. And where I'm seeing this a lot is in SaaS products: Salesforce released a CRM agent, Microsoft has Copilots, stuff that they believe will increase efficiency amongst their staff. But I think it's going to be hard for that number to grow so long as there are transparency issues; that trust has to grow.
Okay, let's take a quick break. When we come back, we'll talk about where we're already using AI without realizing it and what we should not be using it for.
Support for this episode comes from AWS. AWS Generative AI gives you the tools to power your business forward with the security and speed of the world's most experienced cloud.
Scott, we're back with our special series on AI. We're talking to a senior AI reporter for The Verge, Kylie Robison. Huge leaps have been made in AI over the last couple of years. Talk about how we're using it without realizing it. It's also been around for a while, right? Where are people not realizing they're using it today? Totally. I think in your career, Kara, you've probably covered AI. It's been around forever. I think, you know, your Netflix algorithm, that's AI. Automated technology,
self-driving cars, Waymos. I live in San Francisco. Waymos are everywhere. That's AI. It is used in TikTok algorithms. That's AI. It's everywhere. And it has been working in the background quite a bit. I think you'll hear companies, especially as a reporter, they're like, we've been in the AI business for two decades, which is, you know, it's not facetious, but it is
different than the frontier models we're seeing OpenAI and Anthropic release, but it is those algorithms you're used to using. So, break down or do a quick kind of J.D. Power review of the biggest LLMs, from your favorite to your least favorite.
Favorites to least favorites. Scott likes Claude, just so you know. I do like Claude. Claude is really good. It's surprisingly good. I just started using it recently and I messaged a coworker. I was like, I'm a bad AI reporter because this is way better than I anticipated. It also has, I don't know if you've noticed this, Scott, kind of
intense guardrails. I asked, you know, some questions about AI. It's like, well, as an AI, I can't exactly answer these questions. Whereas ChatGPT would have just spit it out. Just so people know, it's by Anthropic, which was a group of people who thought OpenAI was not safe enough and started Anthropic. Exactly. And it's backed by Amazon. Exactly. Yes. And Google also has a smaller stake in Anthropic. But yes,
So Anthropic is a competitor to OpenAI. OpenAI's latest frontier model is GPT-4o. They've also released a reasoning model called o1, but they consider that, for lack of a better word, kind of dumber than their frontier model. And frontier models are basically the biggest, the best models that are out there. So frontier models are like
The next one is like the next iPhone, basically. So I would say Claude is amazing. Claude Opus is amazing. I think the thing is they're all building the same thing with the same training data, which is the entire internet. So they're going to continue just leapfrogging over each other. So it's hard to compare because it's, you know, five major companies with some of the best researchers in the world with all of the same training data, all building the same thing. Do you use Grok?
Don't laugh. Do you use Grok? I'm not getting in any of those new, whatever, cybertaxis, and I'm not putting any information into any of his properties. I don't trust him personally. I used Grok when it first came out to, I think,
make, you know, Kamala with a gun. The Verge put out a story about it. Because, for the listener, Grok is available on X, formerly Twitter, which is owned by Elon Musk. And he owns xAI, which created this chatbot. And it has what feels like no guardrails. So you can make a lot of photos that, you know, break all sorts of copyright laws. No, I don't use Grok.
And I don't necessarily find it to be top of the line. If I were to rank models, they're not at the top. So I'll go back to my question. What's your favorite LLM?
I would say Opus, Claude. It's really intelligent and incredible. I think, you know, what they've built is the envy of other labs. And then, can you name some long-tail LLMs or AI apps that maybe haven't gotten very much attention, that are sort of fun? Any sort of undiscovered gems out there? Undiscovered gems. I mean, if you go to Hugging Face, if you're into open source, there are
hundreds of thousands of open source LLMs that people can mess with. I mean, that's the cool part about open source LLMs, which is a very hot debate. But, you know, developers are creating all sorts of cool shit with the open source LLMs available on Hugging Face. So there's almost too many to choose from, but none of them are mainstream in the way...
because it costs so much money. Hence why OpenAI just raised the most money that anyone's ever raised, ever. It's a lot of money. Yeah, just for people who don't know, Hugging Face is an AI community, a platform where they collaborate and do different things. And, you know, this will be very much like the early app days or the early internet days, where there were suddenly websites and things, and then there was Yahoo that compiled them into Yet Another Hierarchical Officious Oracle. I have a follow-up. Yeah.
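On the Hugging Face point, here is a minimal sketch of what "messing with" a small open source model locally can look like, using the transformers library. The specific model and prompt are illustrative assumptions; a model this small will be rough compared with the frontier models discussed above, and larger open models need far more memory or a GPU.

```python
# Minimal sketch: run a small open source model from Hugging Face locally.
# Assumes the transformers and torch packages are installed; the model below
# (distilgpt2) is just one tiny, freely available example, not a recommendation.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")
result = generator(
    "Three low-stakes ways to use an AI chatbot this week:",
    max_new_tokens=60,
    do_sample=True,
)
print(result[0]["generated_text"])
```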
I find that AI is very politically correct, that it will say, to answer this question, you know, make sure that you check with law enforcement or... That's not politically correct. Oh, I find it very politically correct. Please don't steal from the jewelry store. All right. No, but it'll come back and say, well, this might reflect bias or you should... I just find it's very...
I'm looking for an AI that'll say, that's a stupid fucking question. Or your question, I want it as a friend. He wants the shame AI. Shame AI. Makes sense. Hit me harder. Call me daddy, you bitch. Oh my God. No, but I do find it's very, that they've put it, they're so worried about it going weird places that it- It has gone weird places. It's constantly preconditioning and qualifying all its answers and being very gentle and
I find it's very overly sensitive and, quite frankly, politically correct. So I'll start with Kylie. Do you find that to be the case, or do you think that's just they're putting in appropriate guardrails? I think they're putting in the appropriate guardrails because it's so nascent. I mean, why start off crazy? I feel like we can work our way up to getting you a sadistic chatbot, but for now, it's so... Go on! Yeah.
It's just such a nascent technology. So I think being overly safe and correct and nervous about what it's going to output to millions of people, I think that's a good move. Yeah, Scott, come on. I think that I'm going to answer this. I know you want to please bitch AI, but one of the things that's really important is it doesn't sexually harass people. It doesn't like start. Okay, you're taking this to an even darker place than I would go. But I'm just saying.
It has. The original ones were racist. If you ask it a simple question, it'll start conditioning everything and telling you to check this and make sure that you talk... And it's sort of, just give me the goddamn answer. I get that. But they're never going to do that because, literally, the first time Microsoft put out some of this stuff, it was racist. Right. It started to say racist things. So they really can't.
One of the things I tell a lot of people: I met these two guys on the street yesterday and they were creating an AI. They just ran up to me. They love Pivot. They're creating an AI that, speaking of odd and unusual things, goes on top of 911 calls, which they'll be selling into cities. And it'll translate, say, Spanish immediately, because right now there's a delay when the person taking the call doesn't speak Spanish and they have to go get a Spanish-speaking
dispatcher. And so it's doing all kinds of things; it groks the call and sends things out really quickly. I thought it was a great idea, a really interesting use of AI. And I said, but you know what? You can't make a mistake, even if human dispatchers do. So I think they have to be unusually careful with all these things as this AI is shoving us around the planet. I don't know. I just feel like that's okay. Yeah.
You can take it, Scott. But I'm going to get someone to make you a mean AI, overlord or please bitch or something like that. I learned this morning from AI that a group of flamingos is called a flamboyance. That's true. And I love that.
How awesome is that? That's also in the dictionary. A flamboyance. You're a flamboyance. Anyway, last question. We just sort of covered the idea of what things we should not use it for. It's going to be used for everything, just FYI. But any predictions, last question, on things AI can't do yet but will be able to do for us in the next three years? Use your, you know, imagination hat here, things you're seeing or hearing. I think in the next three years...
Again, these are so hard to tell, because you need so much money and so much compute. So if we just continue on the exponential curve that these companies are hoping for, I see probably more accurate and natural voice interactions. That's something that they're building that they really want, the Her-movie-style reality. I do think that that will get better, especially as they release all of this to the public and people test it and they train on people using it. I think those will naturally get better.
advanced code generation and debugging, that's something they're already really good at. And if these reasoning models from OpenAI and others continue to get better, it's going to be better at coding and debugging, which will be really cool. And they're all building agents, which again are like little AI assistants.
That's sort of the high stakes tasks that they want to access, hence why all these guardrails are so tough, because they want these high stakes tasks like running your life and booking new flights and having access to all of this. So I do see them building out agents, but it would require so much compute and so much money to get there. So I'd be curious what you guys think, because I get asked all the time, like, is the bubble going to pop? Is OpenAI just going to crash? Which I think...
It's so hard for me to tell. You guys have been doing this for so long. It will, but no. No, no, no. It's like when the internet crashed. This is a big deal. This is a change in computing. It's yet another great change in computing. This is not crypto. This is not some of the little...
little bubbles, but a bubble, I guess, but it's directionally correct. It's directionally, and it's going to be huge and encompass everything. Scott? Well, there's two things. There's the valuation of these companies, and then there's the real impact they have on the economy. I think the latter is just getting started. What I would say, in terms of valuations, is there's just going to be a lot of volatility. We've talked about this. We think that relative to its size and leadership position, OpenAI, in my view, at 12 times revenues
is actually probably the best value. Because some of the long-tail ones you talked about, who have almost no revenues and no real visible business model yet, still get $2, $10, $20, $50 billion valuations. So it's going to be a wild ride. It's like, I would describe it as like late 90s internet. We don't know if it's 97 or 99 now, but we know that by 2005, it's going to be much bigger than it is now. That's a long-winded way
Kylie, of saying I have no idea. Yeah, he does. It's up and to the right eventually. Anyway, thank you, Kylie. We really appreciate it. You can read Kylie on The Verge. She does amazing work on this topic and breaks a lot of stories. A colleague. A colleague. She's a scoopster. She's a scoopster and she's a great one at it. Anyway, okay, Scott, that's it for our AI Basics episode. Please read us out.
Today's show is produced by Lara Naymark, Zoe Marcus, and Taylor Griffin. Ernie engineered this episode. Thanks also to Drew Burrows and Mia Silverio. Nishat Kurwa is Vox Media's executive producer of audio. Make sure you subscribe to the show wherever you listen to podcasts. Thanks for listening to Pivot from New York Magazine and Vox Media. You can subscribe to the magazine at nymag.com slash pod. We'll be back next week for another breakdown of all things tech and business.
Support for this episode comes from AWS. With the power of AWS generative AI, teams can get relevant, fast answers to pressing questions and use data to drive real results. Power your business and generate real impact with the most experienced cloud.