
How to make yourself more human in an automated world (with Kevin Roose)

2022/7/11

How to Be a Better Human

People
Chris Duffy
Kevin Roose
A well-known technology journalist and author who focuses on reporting at the intersection of technology, business, and society.
Topics
Chris Duffy: As a podcast host, I worry that artificial intelligence will take my job, and that worry reflects the anxiety many people feel about the age of automation. Kevin Roose: Artificial intelligence and automation are developing rapidly, and every industry, even fields like art and music, faces the risk of being displaced. People need to face this reality head-on and think actively about how to raise their own value so they are not automated away. First, assess how automatable your own job is; second, show more human qualities in your work and leave a "human" trace so the work is harder to replace. Technology is not neutral: algorithms can carry bias, which can harm society, for example by producing unfairness in policing and hiring. We need to stay alert and work to reduce algorithmic bias. We should use technology carefully, avoid over-reliance, and rebalance the relationship between people and technology. Devices like smartphones have gone from being assistants to being "bosses," and we need to take back control of our time and attention. In the internet age, actively creating and connecting is better for our well-being than passive consumption; active participation online brings positive feelings, while passive consumption can bring negative ones. For the next generation, we need to guide kids toward creative activities rather than letting them sink into passive consumption. We need to take an active part in how technology develops, becoming engaged participants rather than passive recipients; learning about and understanding technological change, and taking part in the decisions around it, is how we shape a better future. Kevin Roose: Personally, I use the Freedom app to manage my time and attention, which helps me work and live better.


Chapters
Kevin Roose discusses the complex relationship between humans and technology, emphasizing the need to set boundaries with our devices to maintain our autonomy in an increasingly automated world.

Transcript


TED Audio Collective. You're listening to How to Be a Better Human. I'm your host, Chris Duffy, and today we're talking about the surprising ways that artificial intelligence and automation will affect both the future of our jobs and our own behavior beyond the workplace. Okay, hi. This is actually Chris. This voice you're hearing right now, this is really me. The voice that you heard before, that was computer generated based on audio of me from past episodes.

And the fact that it's even remotely possible to create a computer generated version of my voice is terrifying. Even if that voice sounded like he was maybe not fully enthused about doing this show and needed a cup of coffee. But I am scared about that because I need this job. I don't want to be replaced with a hosting robot.

And, you know, that fear, that fear of automation coming for our jobs and changing the way that we work, that is something that our guest today, Kevin Roose, knows very well. Kevin is a columnist for The New York Times and the author of a recent book called Futureproof: 9 Rules for Humans in the Age of Automation.

So Kevin has written a ton about how technology might impact our jobs and the way that we work in the future. And he's a guy who really understands my terror when I heard that computer version of my voice. Because when it comes to worrying about automation in your job, Kevin has been there himself. Here's a clip from his TED talk.

I was in my mid-20s the first time I realized that I could be replaced by a robot. At the time, I was working as a financial reporter covering Wall Street and the stock market. And one day, I heard about this new AI reporting app. Basically, you just feed in some data, like a corporate financial report or a database of real estate listings, and the app would automatically strip out all the important parts, plug it into a news story, and publish it with no human input required.

Now, these AI reporting apps, they weren't going to win any Pulitzer Prizes, but they were shockingly effective. Major news organizations were already starting to use them, and one company said that its AI reporting app had been used to write 300 million news stories in a single year.
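The talk doesn't show how those reporting apps were built. As a rough, hypothetical sketch of the template-driven idea Kevin describes (structured data in, a finished sentence of news copy out, with no human input), here is what that might look like; the company and figures below are invented:

```python
# Minimal, hypothetical sketch of a template-driven "AI reporting app":
# pull the important numbers out of structured data and plug them into a story.
earnings_report = {          # invented input data, not from the episode
    "company": "Acme Corp",
    "quarter": "Q2 2022",
    "revenue_usd_m": 412.5,
    "revenue_prior_usd_m": 380.1,
    "eps": 1.32,
    "eps_expected": 1.25,
}

def write_earnings_story(r: dict) -> str:
    growth = (r["revenue_usd_m"] / r["revenue_prior_usd_m"] - 1) * 100
    beat_or_miss = "beating" if r["eps"] >= r["eps_expected"] else "missing"
    return (
        f"{r['company']} reported {r['quarter']} revenue of "
        f"${r['revenue_usd_m']:.1f} million, up {growth:.1f}% from a year earlier, "
        f"with earnings of ${r['eps']:.2f} per share, {beat_or_miss} analyst "
        f"expectations of ${r['eps_expected']:.2f}."
    )

print(write_earnings_story(earnings_report))
```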

For the last few years, I've been researching this coming wave of AI and automation. And I've learned that what happened to me that day is happening to workers in all kinds of industries, no matter how seemingly prestigious or high paid their jobs are. You might be surprised after hearing that clip of Kevin to learn that I always feel optimistic after I hear his thoughts. And that's because Kevin believes that if we exercise some agency over technology, we can make it something that works for us rather than the other way around.

And today, we're going to be talking about how to make that vision of the future a reality. But first, we're going to take a short break. We'll be right back with Kevin after this.

How to be a better human is brought to you by Progressive Insurance. What if comparing car insurance rates was as easy as putting on your favorite podcast? With Progressive, it is. Just visit the Progressive website to quote with all the coverages you want. You'll see Progressive's direct rate, then their tool will provide options from other companies so you can compare. All you need to do is choose the rate and coverage you like. Quote today at Progressive.com to join the over 28 million drivers who trust Progressive.

Progressive Casualty Insurance Company and Affiliates. Comparison rates not available in all states or situations. Prices vary based on how you buy. Hello, hello. I'm Malik. I'm Jamie. And this is World Gone Wrong, where we discuss the unprecedented times we're living through. Can your manager still schedule you for night shifts after that werewolf bit you? My ex-boyfriend was replaced by an alien body snatcher, but I think I like him better now. Who is this dude showing up in every...

Everyone's old pictures. My friend says the sewer alligators are reading maps now. When did the kudzu start making that humming sound? We are just your normal millennial roommates processing our feelings about a chaotic world in front of some microphones. World Gone Wrong, a new fiction podcast from Audacious Machine Creative, creators of Unwell, a Midwestern Gothic Mystery. Learn more at audaciousmachinecreative.com.

Find World Gone Wrong in all the regular places you find podcasts. I love you so much. I mean, you could like up the energy a little bit. You could up the energy. I actually don't take notes. That was good. I'm just kidding. You sounded great. So did you.

And we are back. Hi, I'm Kevin Roose. I'm the author of Futureproof and a tech columnist at The New York Times. Let's start by talking about Futureproof. I actually want to talk about a lot of your writing and your other books as well. But starting with Futureproof, what should regular people be doing to prepare themselves for the future of work? Well, a couple of things. One is

I think we really need to figure out who is most at risk. So I think people need to look at what's happening in their industry, their profession, and automation and AI are making huge strides forward in every industry right now, including some ones that we thought were kind of immune to it, like art and music and caring for elderly people. I mean, robots are being deployed to do all of those things now.

And so I think we all need to really take a close look in the mirror and say, like, is what I do for a living vulnerable? Is what I do for a living repetitive enough that it could be automated and may be automated soon? And if that's the case, it doesn't mean that you should pack up and, you know, go plan for your second career, you know, mining Bitcoin on Elon Musk's Mars colony or whatever. But it does mean that we should figure out how to

make ourselves less replaceable. And so for a lot of people, I think the first step is just sort of acceptance that this could happen to me. And this was something that sort of dawned on me about a decade ago, and I...

I learned that there were AIs being taught to do basic reporting tasks, including some of the ones that I did as a young journalist. And then the second thing I think we need to do is to display our humanity more in our work. In my book, there's a rule that I call leave handprints.

And this is about basically taking the work that you do and instead of trying to erase yourself and the traces of sort of human frailty from it, leave those things in. Make it very clear to the people who are consuming your work, whether it's, you know, an audience. Well, okay, so I have a question about that piece because

Like you said at the beginning, there are lots of jobs that I would have thought are not at all vulnerable to automation or to AI. And then increasingly, I wonder if that's even true, if there are any jobs at all, because I would have never thought that writing and comedy and

And, for example, like using my own voice to host a podcast, I would have never thought that those were vulnerable. But now there are tools where people can type words into a script and it will make it sound like I'm saying them. Or, you know, it's kind of a meta joke, right, but all over the Internet there's, like, I fed 300 sitcoms into a neural network and look at what it spit out. And it is genuinely funny, mostly because it's, like, full of weird non sequiturs. But

It just makes it clear that like it's possible for a computer to be funny, whether it's intentional or not. Are there jobs that are just not automatable or is everyone at risk? Well, the way I like to think of it is not as occupational categories because there are no occupational categories that are safe.

So some of every job will be automated. The question is just which parts and how quickly. So, for example, there was an interesting little flap just a couple weeks ago when OpenAI, this AI company in San Francisco, released this program called DALL-E. Have you heard of this? I haven't heard of it. Sort of like WALL-E, but DALL-E. It's an AI that basically takes text and

and turns it into art. So you tell it, I want an illustration of three bears playing ping pong in business attire on the moon. And it will generate an original piece of artwork depicting exactly what you have asked it to depict. And it's incredible. It's really good.
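DALL-E's own API isn't shown in the episode. As a rough sketch of the same text-to-image idea, here is how the prompt Kevin mentions could be run through the open-source diffusers library with a Stable Diffusion model (an assumption for illustration, not the tool he is describing):

```python
# Not DALL-E itself; a sketch of the text-to-image idea with the open-source
# diffusers library and a Stable Diffusion checkpoint. Assumes a GPU is available.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "three bears playing ping pong in business attire on the moon"
image = pipe(prompt).images[0]   # returns a PIL image generated from the text
image.save("bears_on_the_moon.png")
```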

And it's the kind of thing where immediately illustrators and people who make art for a living saw this thing going viral on Twitter and thought to themselves like, oh crap, like I thought I was safe and I am really not safe because that is essentially what I do.

And this program is maybe not as good as me, but it's maybe 80% as good as me. And it's so much cheaper and faster. And you can get something instantaneously from a machine. And so that's the kind of realization that I think a lot of people, especially in our industries, in the creative industries, have had recently. We sort of had this automation-for-thee-but-not-for-me attitude, where we thought we were immune because we make things with words and art and music and

And that is just not true. There are AI programs being deployed now to, for example, create new levels in video games or to write music. A lot of the music that used to be written by studio musicians, like the songs you would hear, you know, over the loudspeaker in a supermarket, those are now being written by AI.

And so we creatives are not as safe as we may have thought we were. So I guess it raises the kind of reverse question, which is: what is uniquely human, and how can we be more of that? I know something that a lot of people are concerned about with AI is the way in which it's built, but also that its effects fall unequally, right? There's been a lot of talk about, like, racism in AI programming and in the results that it spits out. There's this idea I think a lot of people have that artificial intelligence is somehow more

neutral and unbiased and just a computer spitting out facts. And it seems like the results are very clear that is not the case. I wonder how you think about combating that piece, too, as we think about like how to make things more human. How do we make maybe AI less human in that way? Yeah, well, AI is a big category and it includes everything from like, you know, the Roomba that vacuums my house.

to the supercomputers that run YouTube and TikTok and Facebook and all of these giant billion-plus user algorithms. And so it's hard to generalize, but I would say that in general, AI is very good at using past data to predict future outcomes. If you have clicked on 300...

YouTube videos about toilet repair, the algorithm is pretty good at figuring out that you might want more of those. And this is not a random example. I did just watch a bunch of videos about toilet repair. This is a side note. But just to say that the algorithm is absolutely convinced that I own a pet lizard because one time I was doing research about lizards for a joke. And, like, for years it has been like, do you want a warming lamp for your

pet lizard, which I do not own. You're being radicalized into lizard ownership. I guess I am. At a certain point, I'm like, fine, I'll get the iguana. You sold me. Well,

Right. So this is a thing that AI is very good at. And that can be good and it can be quite dangerous. There's been a lot of research showing, for example, that these things called predictive policing systems, that a lot of police departments now use AI programs to try to guide their officers to quote unquote high crime areas. So basically use an algorithm to tell me where a crime is likely to occur.

And because these systems are built off of decades worth of data that reflect biased policing practices, over-policing low-income neighborhoods, minority neighborhoods, systematic over-policing of those areas, it is more likely to tell an officer, hey, if you go to this neighborhood,

corner on this street at this time, you are very likely to see a crime in progress. And of course, if you put a police officer on a corner, they're more likely to see a crime happening there, which then feeds back into the algorithm, which then tells them this is a really high-crime block or corner. And it perpetuates this bias through the ages, except now it seems objective, right? Because it's coming from a computer rather than from the brain of a police officer. So those are the kinds of things that I worry about. There are also many examples of algorithmic bias in, for example, hiring. A lot of companies now use AI to screen resumes, and that can be disastrous too if it tends to select for, you know, only white men or people who went to Harvard or some other, you know, flawed criteria.
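A toy simulation can make that feedback loop concrete. Nothing below comes from the episode; the neighborhoods, rates, and starting counts are invented, and both areas are given the same true crime rate so that only the biased historical record differs:

```python
# Toy simulation of the predictive-policing feedback loop described above.
import random

random.seed(0)

TRUE_RATE = {"neighborhood_a": 0.10, "neighborhood_b": 0.10}   # identical by construction
observed = {"neighborhood_a": 25, "neighborhood_b": 5}          # skewed historical data

for day in range(365):
    # "Predictive" step: patrol wherever the past data says crime is highest.
    patrolled = max(observed, key=observed.get)
    # Crime is only recorded where an officer is present to see it.
    if random.random() < TRUE_RATE[patrolled]:
        observed[patrolled] += 1

print(observed)
# The neighborhood that started with more records keeps getting patrolled,
# so it keeps accumulating records, even though the underlying rates are equal.
```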

So I guess that leads to a question, which is: in what ways are the same things that make us human and make us special and unique also what makes us susceptible to being shaped or manipulated by technology? It's a really interesting question. I mean, I think that we have always been shaped by our technologies, right? So there's this really fascinating study that came out of the University of Minnesota

a few years ago and basically they were interested in this question of whether algorithms sort of reflect our preferences or whether they shape our preferences. And so they took some students at this college and they basically set them up with this experiment where they would be tested on how they liked a series of songs, like whether they liked a series of songs that were played for them.

And then they sort of manipulated the star ratings, like, you know, how Spotify or any of these will give you ratings based on how much they think you will like the song. And they manipulated these ratings so that they didn't really have any real connection to people's actual preferences. And then they forced people to sort of listen to the whole songs. And it turned out that the star ratings influenced people's

judgment of the songs, regardless of whether or not they actually liked them. They trusted the star ratings more than they trusted their own subjective taste and experience. And so I think there's this way in which we are kind of outsourcing our judgment and our preferences to

AI, which may or may not actually have our best interests, and a good picture of what we're like as people, in mind. And so I think that worries me almost as much as, like, the factory automation and stuff. It's this kind of internal automation that I think we all feel kind of tugging on us every day. I wonder how knowing that, and also just your reporting on tech in general, how has it changed your relationship to what you use in your day-to-day life? Well,

I am not a Luddite. I am not a technophobe. I have plenty of robots and gadgets in my house. But I do try to exercise real caution with the kinds of things, the kinds of decisions that I let

algorithms and machines make for me. I wrote a story a few years ago where I did a 30-day phone detox with the help of a professional phone rehab coach because I was horribly addicted to my phone. We all are. This was pre-pandemic, so I probably need to do it again.

But it was really instructive about which of my cues I was taking from my phone. I think our phones kind of started out many years ago as, like, assistants; they were there to sort of be helpful with whatever you wanted to do.

But then at some point in the past few years, they got promoted and became our bosses. And now they just tell us, pay attention to this thing. Get mad about this thing. Get freaked out about this new story. And I think that restoring balance in our relationship with the devices in our lives is really important. So right now, I just had a kid. And I'm very cautious of what...

kind of media I'm consuming about that, whether I'm in, you know, the Facebook group where all the parents share the craziest, scariest things that have happened to their kids. Like, I'm very guarded about what I let into my consciousness. And that maybe makes me sound like, you know, a paranoid freak. But it's part of how I try to reduce the influence that machines have on my life. For example, I don't use YouTube autoplay; I turn that off.

And so that when I'm watching a video, it doesn't just automatically start playing a new video, because that's something that I found I'm very susceptible to. I'm careful about TikTok. Actually, I've been meaning to write about this. I have this sort of like,

TikTok amnesty policy where every few weeks I delete my TikTok account and start a new account just to clear out the algorithm. Whatever junk I've been watching, I don't want to be fed just more of that. I want new junk. And so I try to sort of cleanse my timeline a little bit that way. So you wrote a book review with the help of artificial intelligence, which I think

Kind of goes to the idea of flipping the script: artificial intelligence and phones and technology used to be our assistants, and now they're more like our bosses. So,

Yeah, tell me about the process of writing a book review using AI to help you. Yeah, well, this was for the New York Times Book Review earlier this year, and I had gotten assigned to review this book. Eric Schmidt, the former CEO of Google, and Henry Kissinger had written a book about AI together. And I read it, and I was sort of dreading reviewing it because it was kind of boring. And I was like, really, I've got to come up with, like, a thousand words about this book?

And then a light bulb went off and I thought, what if a robot could help me? So I used this app called Sudowrite, which is basically a superpowered version of the autocomplete on your iPhone, where you put in a little bit of text and it spits out the next, you know, however many hundred words you want. So I wrote a little intro and then I fed it into Sudowrite and it spit out, like, you know...

seven or eight paragraphs of analysis of this book that, you know, it had not read and was just sort of guessing at. And it was pretty good. Like, it was not great. It was not perfect. And it took me a couple of tries to, like, tune it to get the right kind of output. But eventually it was, like, you know, as good as anything I would have written. And so I just slapped an intro on it, disclosed that this review was written by an AI, and then printed it. And it was, like, perfectly serviceable. And I know people objected to it as a little bit of a stunt, but this is going to be happening more and more. I think we're going to reach very soon, if we haven't already, the point where more text on the internet is written by AI than by humans. And that will be an important inflection point and kind of a scary one if you're in the words business.
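Sudowrite's internals aren't described in the episode. As a minimal sketch of the "superpowered autocomplete" idea (feed in a short intro, get back a continuation), here is the same pattern with the open-source transformers library and a small GPT-2 model; the intro text is an invented example, not the actual review:

```python
# Sketch of text continuation from a short prompt, using an open-source model.
from transformers import pipeline, set_seed

set_seed(42)
generator = pipeline("text-generation", model="gpt2")

intro = (
    "Reading a book about artificial intelligence written by a former Google "
    "CEO and a former secretary of state, I decided to let a machine finish "
    "the review for me. Here is what it produced:"
)

out = generator(intro, max_new_tokens=150, do_sample=True, temperature=0.9)
print(out[0]["generated_text"])   # the intro plus the model's continuation
```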

So thinking about the future, you have kind of an interest. I mean, there's a lot of reasons why you have an interesting perspective on the future that have to do with just your own brain. But you also are a new father. And I think a lot of people, when we think about the impact of technology, it's not just for our own lives. It's for the next generation. So.

I wonder when you think about your son growing up in this world, what are you excited about for him? And what are you worried about when it comes to technology? I'm excited that...

he will have access to so much more information. I mean, I'm not one of these people who thinks that, you know, children should have no access to technology and no access to even phones at the right age. It was really important for me as a kid to have that stuff. And I think it's important to find ways to coexist with it for kids growing up today. I am scared about the...

kind of loss of autonomy that I see happening in a lot of parts of culture. But there have been some studies that have shown that it matters what you're doing on these screens and on these devices. You know, playing Minecraft is not the same as, like, watching a zillion TikToks. You know, connecting with your friends inside Fortnite, or on a group chat, or on Snapchat is different than, you know, posting selfies on Instagram for other people to like and comment on. And then I think just

sort of being discerning about what kids are doing on social media and encouraging them to do things that involve being creative. It's so possible to be a totally passive person on the internet and just lurk and scroll and never, you know, create anything. And for me, what was important about the internet as a kid was just the ability to make stuff, to create stuff. I had a modestly successful ring of GeoCities fan pages for Buffy the Vampire Slayer

that I maintained when I was a kid. I built websites, I, you know, did little flash animations, I coded a little bit, like it was really a sort of sandbox for me. And I think there are ways to do that, you know, today, a lot more ways, actually. But there's also this kind of other way to experience the internet, which is like as a totally passive consumer. And that I think is really damaging. I find that for myself, and I'm not a parent. And I find that

even just as an adult who likes to think that I am, like, more fully formed and not as malleable as maybe a young teenager is, I still find that when I am using the Internet as a way to put things out and to produce and to connect with people, I feel good about it. Right. Like when I'm like, oh, here's something that I wrote and I want to publish it. Great. I love that. I love that.

If I can't find a newspaper or magazine that will publish something, I can just put out my thoughts and people will still read it and engage with it. That feels good. And the part that feels bad, and feels like it starts to shape me and maybe make me feel inadequate, or like I'm not doing enough, or like I'm constantly competing with a bar that is ever shifting impossibly higher, is when I just start passively consuming. So when I'm scrolling through Instagram or I'm scrolling through TikTok or I'm just looking at other people's accomplishments, then I feel bad.

But when I put things out and creatively engage, then I feel like, oh, this is an amazing tool where I can be talking to you from hundreds of miles away and we can have an actual conversation. That never makes me feel bad, that part of it. Totally. And you asked about reasons for optimism and sort of things I'm thinking about with respect to my son growing up with technology. And I'll add one more, which is that I think this generation of Gen Z, people who got their first smartphones and their first social media accounts as teenagers or during this last wave of

I think those were basically, like, the sort of guinea pigs for this giant social experiment. And I think we're going to look back and see, like, a bunch of people driving fast cars with no seatbelts who just, like, didn't have the tools to deal with what was now possible. So I think, unfortunately, there's, like, a generation of kids who grew up without any real safeguards or knowledge about what they were even doing to themselves by living on these platforms. And I hope that by the time my kid is of age to start using this stuff, we've built up a little bit more knowledge and awareness and, you know, sort of immunity and resistance to, like,

this thing that we all do. I think you never want to be, like, the first generation to be building with stuff. It's always nice to work the bugs out and use the second version of the product. So I think, with any luck, he will be using, like, the second or third or fourth or 10th version of this stuff rather than kind of being on the frontier where no one knows anything. So Kevin, obviously there's no going back to a world where using technology like smartphones or the internet

is not essential to participation, right? We're not going to go back to that world. But if that was possible, is that something that you would even want? No. And why or why not would you? Why wouldn't you want that? No, I don't want us to go back to a world with no internet, no social media, no smartphones. You know, I think...

I think, you know, these things have had enormous costs, but they have also had a lot of benefits. And I'm very critical of certain social media companies. I don't think, you know, I don't think a world without, for example, Facebook would be significantly worse, might be significantly better. But I do think that on the whole, you know, we just need to figure out how to make this technology work for us rather than us working for it.

And so I think, you know, maybe I'm a starry-eyed optimist, but I still believe that there's a world in which we use all of this stuff for its highest purpose, and it frees us from routine and repetitive, you know, tasks. And it leads to a society that is, you know, more abundant and more fair. There's a great book that came out a few years ago, which I mostly just love the title of, called Fully Automated Luxury Communism,

which is about how sort of robots and AI could produce this kind of utopian society where we just all sit around and make art and do philosophy all day and the robots just take care of everything we need. So I'm still, I don't think we'll ever get fully there, but I think we can do better than we are now. And that's what keeps me motivated. Okay, we're going to take a quick break, but we'll be back with much more from Kevin Roose right after this.

Warmer, sunnier days are calling. Fuel up for them with Factor's no prep, no mess meals. You can meet your wellness goals thanks to this menu of chef-crafted meals with options like Calorie Smart, Protein Plus, Veggie Vegan, or Keto. And Factor has fresh, never frozen meals, which are dietician approved and ready to eat in just two minutes.

That sounds like a dream come true. I cannot wait. So no matter how busy you are, you will always have time to enjoy nutritious, great tasting meals. Make today the day that you kickstart a new healthy routine. What are you waiting for? Head to factormeals.com slash betterhuman50 and use code betterhuman50 to get 50% off your first box plus 20% off your next month. That's code betterhuman50 at factormeals.com slash betterhuman50 to get 50% off your first box, plus 20% off your next month while your subscription is active.

I want to tell you about a new podcast from NPR called Wild Card. You know, I am generally not the biggest fan of celebrity interview shows because they kind of feel packaged, like they've already told these stories a bunch of times before. But Wild Card is totally different because the conversation is decided by the celebrity picking a random card from a deck of conversation starters. And since even the host, Rachel Martin, doesn't know what they're going to

pick. The conversations feel alive and exciting and dangerous in a way because they're vulnerable and unpredictable. And it is so much more interesting than the stock answers that celebrities tend to give on other shows. You get to hear things like Jack Antonoff describe why boredom works or Jenny Slate on salad dressing or Issa Rae on the secret to creativity. It is a beautiful, interesting show, and I love it. Wild Card comes out every Thursday from NPR. You can listen wherever you get your podcasts.

And we are back. We've been talking about the impact of technology on our work. And if you find yourself increasingly worried about that impact and what that means, here's a clip from Kevin's TED Talk that I think can help us understand one way that we might move forward. If you, like me, sometimes worry about your own place in an automated future, you have a few options. You can try to compete with the machines. You can work long hours. You can turn yourself into a sleek, efficient productivity machine.

or even focus on your humanity and doing the things that machines can't do, bringing all those human skills to bear on whatever your work is. I'd love to talk a little bit about something that I know you've done a lot of recent work on in explaining and in doing research on, which is crypto and also Web3. And, you know, I've heard you say this, that basically that there's

this element of how everyone made fun of social media when it first started. And we're like, it's a joke. Look how dumb this is. Oh, my gosh, it's pictures and they get to comment on them. And then the systems became incredibly powerful and all of the issues with them are deeply entrenched and really hard to fix.

And I've heard you say that you're basically trying to avoid that same thing happening with Web3, where right now people treat it like a joke, but there are also obvious issues. And if we don't engage with them now, by the time we do, it will be so much harder to fix them. Is that, first of all, is that an accurate assessment of how you feel about this and why you're reporting on it? Yeah, totally. I mean, that is the essence of why I think this stuff is important. I'm not a crypto fan. I'm not a crypto skeptic. I'm sort of a crypto moderate when it comes to all things crypto and Web3. One of my deeply held beliefs, though, is that the people who are involved in the early days of a technological shift get

outsize input into what that technology eventually becomes. So in the early days of social media, as you said, when people were sort of mocking like, oh, who wants to see pictures of my brunch? And like, you know, why would anyone tweet about what's going on in their neighborhood? Like it was just not seen as a serious thing. Yeah.

And like now, obviously, like it's the biggest force, you know, one of the biggest forces in politics and culture. And, you know, elections are won and lost on social media and it shapes the fate of, you know, democracies. And and so I think that right now we have this very nascent crypto industry.

That, you know, seems in a lot of ways like something you, you know, shouldn't take seriously. Like, it's got a lot of indicators of like, there are a lot of scam artists, there are a lot of, you know, there's a lot of fraud, there's a lot of just really stupid stuff. Um,

And I think the temptation is to kind of dismiss it all and like hope that it goes away and that you never have to understand it. It's one of these tech trends that just like comes and goes. And I think that's a real mistake because if this does work, if the crypto people are right, if this is technology that sort of reshapes finance and culture and ownership and art and all the things that they think it will do, um,

I want there to be people on the ground floor of that who are thinking about these risks and these big questions and what happens if

crypto takes over the world? How do we make sure that it doesn't just become, you know, six white guys in San Francisco, like getting all the money again? How do we actually make this the best version of itself that it can be? So I want people to engage with it, whether or not they're skeptical. And maybe especially if they are skeptical, I think it's good for people to understand and engage with it. So what do you think that a regular person who's not a tech reporter and not living in Silicon Valley, what should they do to engage with

crypto and with these issues right now? Well, first things first, to self-promote a little bit, I did write a very long 14,000 word explainer of crypto and Web3 and DeFi and NFTs and all the other stuff that ran in The New York Times back in March. It's incredible. And it's also, I think, at least in my memory, the only time I've ever seen an entire section of the paper written by one person.

Truly incredible. Yeah, it was. It was wild. I just I started and I thought it would be a short little thing. And then it just kept going because it turns out it's sort of complicated. So that's sort of my attempt to give people who are a little bit intimidated by this topic, like just an easy way into understanding like the basic contours of what's going on. So I would start there. It's called The Latecomer's Guide to Crypto. It's on New York Times website.

And then I think just sort of experimenting with it a little bit. Like I wouldn't, you know, I'm not a financial advisor. I would be the last person you should ask about what to invest in. I found that my own understanding of crypto really kicked up a couple notches when I accidentally sold an NFT for a lot of money in a charity auction last year. And all of a sudden I had all this crypto that I was not keeping, but that I was sort of transferring to a charity. And it really

forced me to learn how this stuff worked because all of a sudden I had this like, you know, pot of money that I was the custodian of that I had to figure out like how to keep secure and how to transfer. And it really sort of threw me into the deep end and made me learn about this stuff. Well, for people who are listening and are sold on these ideas about the promise, but also the

potential perils of future technology. How can we be better participants in the future of tech? Is it being better stewards around regulation or how do we get involved and how do we make it so that the future is what we want it to be rather than what we fear it could be?

Yeah, I think the first step is to learn, is to really understand what's happening on the technological frontier so that you feel comfortable weighing in, so that technology is not just a thing that happens to you. It is a thing that you feel like you have some agency over. If you're signing up for some new service or new social network or new product, like figure out what is happening under the hood a little bit, be a more educated consumer the way that like

You want to understand what's in the food that you eat. You want to understand what's in your information diet and what forces are operating there. There's this idea in the tech world known as friction, and it's usually used in the context of tech products that are trying to get rid of friction: making it as easy as possible to, like, watch a video or order something or, you know, comment on somebody's birthday on their Facebook page or something. We stored your credit card, so it's just one click to buy. Exactly. But I've been sort of trying to systematically introduce a little bit more friction into my life, because I think things are a little bit too easy and it tends to put me onto autopilot.

And so I've been, you know, like taking the long way to go somewhere and like not following the Google Maps fastest route every time, like maybe getting something from the hardware store down the street instead of ordering something from Amazon, even if it's a little more expensive, trying to like be a little bit more thoughtful about what I consume. And then I think, yeah, just engaging in the democratic process, you know,

elect people who understand this stuff and are thoughtful about it, make your feelings known in a way that's, you know, thoughtful and respectful. But I think we're entering into an age where

the tools in our society are more important than they ever have been. And so it's incumbent on people to understand that and to weigh in and to not just, you know, wake up one day and find that the world has changed around you and you had no part in deciding how to live in that world. The show's called How to Be a Better Human. So what are you personally trying to do right now to be a better human in your own life? Well, right now I'm trying to raise a son. Yeah.

That's a big one. Which feels sort of cliche, but also like truly terrifying and challenging and, and, you know, tests me in all kinds of ways, um,

that I sort of feel like it's forcing me to be a better human, you know, to respond with compassion and empathy at 3am when there's a meltdown happening, as I did last night, that feels like it's stretching me in some new ways. So that's one of the ways I'm trying to be a better human. That's a huge one. That's a really big one. And then what is something that has helped you to be a better human, whether it's a book, a movie, a piece of music, an idea, anything? I am

an obsessive evangelist for this app called Freedom, which is basically the only reason that I have been able to get anything done for the past five years. Freedom is an app. It's on your computer. It's on your phone. And you basically tell it, like, do not let me go on social media for the next...

you know, X hours. Do not let me check my email. Do not let me, you know, surf YouTube. And you can put in custom lists of sites that you want to block, whatever your time wasters are, whatever your, I don't know, junk food for your brain is, and you can set it to just

cut that off systematically for any length of time you want. And so it's how I write. It's how I focus. I have no self-control, so I need to outsource that to this app. And luckily, this app is very good at implementing self-control for me. So that is my shortcut to being a better human. Amazing. Kevin, thank you so much for being on the show. Thank you for all the writing and all the thinking that you've done about this, but also just for talking to us about it. It's really been a true pleasure here. It has been a real pleasure. Thank you for having me.
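Freedom's internals aren't described in the episode, and the sketch below is not how the app works; it's just a minimal illustration of the underlying idea of blocking a custom list of sites for a set number of hours, done here by temporarily pointing them at localhost in the hosts file. The blocklist and path are assumptions, and it needs admin rights to run:

```python
# Rough sketch of "block these sites for the next N hours" via /etc/hosts.
import time

HOSTS_PATH = "/etc/hosts"                       # assumes macOS/Linux
MARKER = "# --- focus block ---"
BLOCKLIST = ["twitter.com", "www.youtube.com", "www.tiktok.com"]  # hypothetical list

def block_sites(hours: float) -> None:
    entries = "\n".join(f"127.0.0.1 {site}" for site in BLOCKLIST)
    with open(HOSTS_PATH, "a") as hosts:        # append the block entries
        hosts.write(f"\n{MARKER}\n{entries}\n")
    try:
        time.sleep(hours * 3600)                # hold the block for the window
    finally:
        with open(HOSTS_PATH) as hosts:         # then remove everything we added
            contents = hosts.read()
        with open(HOSTS_PATH, "w") as hosts:
            hosts.write(contents.split(MARKER)[0].rstrip() + "\n")

if __name__ == "__main__":
    block_sites(hours=2)   # e.g. no social media for the next two hours
```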

That is it for today's episode. I am your host, Chris Duffy, and this has been How to Be a Better Human.

Thank you so much to today's guest, Kevin Roose. His latest book is called Futureproof, and you can also check out his podcast with the New York Times. It's called Rabbit Hole. On the TED side, this show is brought to you by Sammy Case and Anna Phelan, both of whom are not robots. And from Transmitter Media, we're brought to you by Isabel Carter, Farrah DeGrange, and Wilson Sayre, all purely human, 100% human.

For PRX Productions, this show is brought to you by the unautomated, fully analog Jocelyn Gonzalez, even though she uses digital tools, and Sandra Lopez-Monsalve, also 100% flesh and blood. She's a human. She's not digital bits. Thank you so much for listening, and we will be back next week. ♪

Support for the show comes from Brooks Running. I'm so excited because I have been a runner, gosh, my entire adult life. And for as long as I can remember, I have run with Brooks Running shoes. Now I'm running with a pair of Ghost 16s from Brooks.

incredibly lightweight shoes that have really soft cushioning. It feels just right when I'm hitting my running trail that's just out behind my house. You now can take your daily run in the better-than-ever Ghost 16. You can visit brooksrunning.com to learn more. PR.