
Dan Shipper — I, Writer (EP.237)

2024/10/10

Infinite Loops

People
Dan Shipper
Topics
Dan Shipper recounts his personal journey from a childhood interest in programming, to building software applications, to founding the media company Every. He shares how, along the way, he balanced his personal passion for writing with his business goals, and how honestly confronting truths about himself helped him find a path that fit. He believes that admitting what is obvious, even when it is scary, brings freedom and momentum. He compares Every's business model to a pyramid: free content at the base, rising to paid courses and consulting services, serving users with different needs and adapting to market changes. He sees media as a challenging industry that requires diversified revenue streams. Jim O'Shaughnessy and Dan Shipper also dig into AI, business models, and personal growth. Jim shares his experience in investing and agrees with many of Dan's views. He believes that in the AI era, the ability to allocate will become a more important skill and will enable new business models. He also stresses the importance of intuition in decision-making and argues for a renewed appreciation of human intuition in solving complex problems.


Key Insights

Why did Dan Shipper decide to embrace AI despite initial wariness?

Dan realized that AI could significantly enhance his creative and business processes, allowing him to focus on what he loves: writing and business. He saw AI as a tool for amplifying his capabilities rather than replacing them.

How does Dan Shipper describe the evolution of his company, Every?

Every started as a bundle of digital newsletters and has since evolved into an ecosystem of newsletters, podcasts, courses, and software products, all focused on exploring the question, 'What comes next?'

What is the 'Every Pyramid' and how does it structure the company's offerings?

The 'Every Pyramid' is a metaphor for the company's multi-tiered revenue streams. At the base is free content for millions, followed by a subscription service, higher-ticket courses, and consulting services at the top.

Why does Dan Shipper believe that AI is crucial for progress in fields like psychology?

AI can make predictions without requiring causal explanations, which traditional scientific methods struggle with. This shift from defining problems scientifically to solving them as engineering challenges can lead to significant advancements, such as better mental health treatments.

How does Dan Shipper view the reluctance of some writers to use AI in their creative process?

Dan understands the attachment writers have to their words and the fear of losing control. However, he believes that AI can be a valuable co-creator, enhancing creativity without diminishing the human element. He predicts that social norms around AI use will evolve over time.

What are the two ideas Dan Shipper would incept into the world's population if given the chance?

Dan would incept the ideas of 'be curious' and 'be kind,' believing that these qualities would lead to personal growth and a better world.

Chapters
Dan Shipper's journey began with a fifth-grade ambition to create a Microsoft competitor, leading him into coding and eventually building software applications for BlackBerry. His popular app, Find It, demonstrated his early technological prowess and entrepreneurial spirit.
  • Early interest in technology and business.
  • Developed software for BlackBerry.
  • Created the app "Find It", a precursor to Find My iPhone.

Transcript


Hi, I'm Jim O'Shaughnessy and welcome to Infinite Loops.

Sometimes we get caught up in what feel like infinite loops when trying to figure things out. Markets go up and down, research is presented and then refuted, and we find ourselves right back where we started. The goal of this podcast is to learn how we can reset our thinking on issues in a way that hopefully leaves us with a better understanding of why we think the way we think and how we might be able to change that

to avoid going in infinite loops of thought. We hope to offer our listeners a fresh perspective on a variety of issues and look at them through a multifaceted lens, including history, philosophy, art, science,

linguistics, and yes, also through quantitative analysis. And through these discussions help you not only become a better investor, but also become a more nuanced thinker. With each episode, we hope to bring you along with us as we learn together.

Thanks for joining us. Now, please enjoy this episode of Infinite Loops. Well, hello, everyone. Jim O'Shaughnessy with another Infinite Loops. I was telling my guest just a moment ago that, gosh, this has got to be the first of a series because...

We have so much in common. We have so much excitement, but also a little bit of trepidation about the world that we are moving into. My guest is none other than Dan Shipper, the co-founder and CEO of Every, one of my favorite companies.

It's not a publication per se, but one of my favorite sources of all things interesting to me. You're also the host of the podcast AI and I. I can't resist, Dan. I hear my mother's voice in my ear saying, AI and me objectively. Yeah.

I know. I just could not resist the rhyme. It works. It totally works. But I think it's so funny. Like mom has been dead a long time, but she still squawks in my ear. Like no, Jim. That's how parents work. Exactly. So like,

You know, I'm so excited to talk to you because we share so many of the same interests. But if you don't mind, like every superhero has an origin story. So if you'll give us yours for our listeners and viewers.

And by origin story, are you asking like for more info on every specifically and that origin story or just like my origin story as a person? Like, where did I where did all this start from? I am very much interested in your origin story as a person. OK, cool. Yeah, well, I usually start the origin story stuff at around fifth grade.

I read a Bill Gates biography and I decided I wanted to start a Microsoft competitor and I was going to name it Megasoft. I actually have a biography somewhere around here in the living room over here. And so I was going to start Megasoft. And so I decided to learn how to code.

And I went to Barnes & Noble and my dad bought me a very expensive book on BASIC. At the time, that was the only way to learn how to code: expensive books, especially if you were in fifth grade, because there were no classes at all for middle school aspiring software entrepreneurs. And I had a whole plan for how I was going to do this, which I have in a notebook that I still have.

It's like a list of like my plan, my master plan. And it's like, the first one is write soft, which meant write the operating system. I didn't even complete it with "software," just "write soft." Then burn it to CDs, because at that point the big distribution mechanism was CDs. I was very inspired by the way AOL got distribution by having CDs everywhere: you could pick them up in the grocery store, they put them in your mailbox or whatever. Write soft, burn it to CDs,

put it in mailboxes, and then wait. Wait was the last step of my plan. And so needless to say, I did not end up building an operating system, but I kept programming because I was always very interested in business and always have loved technology. And I found that programming was the only way as a middle schooler that I could build interesting businesses because it was the only way to build a business where the only cost was my time.

And so kept programming in middle school, like sort of late middle school, early high school. I got into BlackBerry programming because I noticed that smartphones were starting to become a thing. This is prior to the iPhone. And I had learned from my Bill Gates biography that new hardware platforms were often good opportunities for software. So I started building software for BlackBerry. My most popular software app was called Find It, which was basically Find My iPhone before Find My iPhone came out. So

So the original version, you could if you lost your BlackBerry in your house, you could send an email with a special string in the subject line and then have it ring even if it was on silent.

And then I eventually iterated that into like a full fledged web interface where you could track your phone, you could lock it, you could back it up, like all that kind of stuff. Interestingly, that was not a SaaS service. It was a one time fee because Stripe didn't exist and it was way too hard to create a recurring subscription. So anyway, so that's how I paid for gas and food in high school is building apps like that. And I would have had a lot more money had Stripe existed back then.

And I've been talking for a while, so that's the first chunk of my origin story in case you want to jump in. So, yeah, my teammates always remind me, Jim, will you stop fucking talking so much on this podcast and let your guests talk? But the whole origin story of how this podcast began was I really wanted it to be kind of like you're too young to remember the movie My Dinner with Andre. Right.

But it was an early movie with Wallace Shawn, you know, "inconceivable" from The Princess Bride. It was him and another actor. And literally the entire movie was just their conversation. And it was riveting. Because a natural conversation that doesn't follow a script, I just find much more interesting. Yeah. So, but what's great about that is

that you saw very clearly where the world was going, right? And before we started recording, I was showing you some stuff from a 1983 journal when I was 23. And that's what I was writing about. It was literally, computers are going to do this, computers are going to do that. We're going to be able to synthesize. It's going to be a whole new world. And then I had to wait 40 plus years, and here we are. But that's what's cool about it, right? Because

You had to adapt to the circumstances at hand, right? Yeah, it would have been much cooler if Stripe had been around and you could have made it a SaaS recurring revenue business. But that's what happens sometimes with pioneers, right? My more cynical friends tell me that pioneers are the ones that get all the arrows in their back. Yeah.

But I think that pioneers who survive actually have a huge advantage over others because it's already in your mind, right? What it can look like, what the world can look like. That's what I love about your work. And it also kind of leads me into two of your decisions really resonated with me.

Mm-hmm.

But unlike you, I'm more of the dark Dorothy Parker school, which is: I hate writing, I love having written. I think every writer has a tinge of that. I'm no exception. Yeah. Yeah. But I think that that was very cool. But the other thing that I wanted to talk to you about was your decision. You were a little wary about AI, and then you decided, you know, fuck it, I'm going all in.

And I reserve the right to jump off this rocket if, like, I don't like what's going on. So in my notes, I have Jordan ask Dan about his relationship with AI, with "it's complicated." Hmm.

Great. Well, let's take it one at a time. Should I talk about some of the writing stuff first? Sure. Cool. Yeah, I think that what you're referring to is I've written two pieces in the last year or so about this. One is called Every's Master Plan, which sort of lays out what Every is and where we're going and how I'm thinking about it, and also how I arrived there and some key decisions that went into that.

And one of the key decisions is thinking of myself as a writer who was also a founder. I have a post specifically about that on Every called "Admitting What Is Obvious." And the basic idea of that post, and the kind of thought process for me, is there are sometimes truths about yourself that are probably obvious in hindsight for sure. They're probably pretty obvious to everyone around you.

And they're probably sort of like they're definitely there all the time in your sort of consciousness without you like really even realizing that they're there because the perceived cost of noticing them would be too high for you. And so you don't necessarily allow yourself to really notice it.

And so for me, I think wanting to be a writer was one of those truths. It was very hard for me. Basically, to go back to my origin story for a second: originally, before I wanted to be Bill Gates, I wanted to be a writer. This was third grade, and I wanted to write novels. And so I wrote a hundred-page novel.

And then after I wrote it, I was like, I don't have enough life experience to be a writer yet. So I want to do business stuff and I'll come back to writing later. And that sort of carried me through to my first company called Firefly.

which I started when I was in college. And during that company, I did a lot of writing for it. And that helped us market the product and all that kind of stuff. And eventually when I sold it, I spent a couple years figuring out what I wanted to do. And I again tried to write a novel. So I was giving myself some time to do that. I wrote a couple drafts. But the whole time I was very conflicted about

Who am I? And like, am I going to lose my like founder identity if I go write a novel? And like I had sort of felt the loss of that identity when I sold my first company. And it's just like it's, you know, obviously it's one of those amazing things.

that happens to you, but it's also, there's no like manual or process for how to deal with that kind of, that kind of thing, like selling a company and no longer doing the thing that you do every day. It's very abrupt. And so, yeah, like I sort of grappled with that the whole time between companies. And then I feel like I started every as this sort of

way to split the difference to some extent, because we raised a little bit of money for Every at the beginning and it sort of looked like a more traditional startup. It had always had a media element, but the way that we pitched it was: it's this bundle of newsletters. So it's sort of like Substack, but we have an editor and we're going to grow it in a little bit more centralized way. And everything is bundled under one subscription and all that kind of stuff. So we were able to turn it into something,

a little bit of a company or a lot of a company. But the way it originally started is, I was just writing a newsletter and it started going really well. And the way I allowed myself to even write the newsletter was I said, okay, I want to start another software company. And I want to do like a Roam/Notion tools-for-thought type company. And

I need a way to do customer interviews with smart people that have the kind of note-taking organizational problems that I want to solve and probably have solved them for themselves so I can understand what they need.

And so I was like, all right, I'll start a newsletter. I'll tell people I'm interviewing them for the newsletter. And then I will actually write it because I love writing, but I'll use what I learn in order to build a software product that I can then sell to the audience. And I just sort of got stuck for a very long time on the newsletter part of it and realized that it was growing really fast and I really liked it. So we put together a company, we raised a little bit of money, and it was going really well. And as soon as we raised some money, we sort of like...

I sort of made the decision to not write as much, because I was like, well, I'm running the company now, so we'll hire writers and whatever. And there's a sort of two-year period after that where the growth of Every sort of tailed off. Because one of the interesting and difficult things about media is: if you start a media company and have a media product that's written by a person and has their personality in it, if that person stops writing, you lose product-market fit.

You can still find it again, but the ability to consistently produce things of a certain type that people like is a very rare thing. And so if you have someone like... So I was doing that and my co-founder Nathan also was doing that. If you have people who are doing that and you switch it, it'll probably take a while to find someone else. And there's no guarantee that the person you find will attract the same audience as you already have. So it just makes media quite a difficult business. And eventually, after a couple of years, I was like,

what do I really want to do? Like, why am I doing this? Like, what's my whole like thing with this? And I was, I was just like, I think if I like really take a step back and I didn't consider, uh,

loss of identity or how I'd be perceived, and I didn't really consider financial upside and how I would make money, because the business was already at a point where it was supporting me, so I didn't have to consider that as much. What would be the thing in my heart that I would most want to do? And I was like, oh, obviously, I want to write.

And then I was like, I want, I want to write, I want that to be the core thing. But I also like, I love business. Like I really liked doing all this stuff. Like, so I, I still want to like have a business. And what's really interesting is once I, once I like admitted that thing, which going back to where I began is a sort of like obvious thing that I hadn't like really admitted to myself. Once I admitted it to myself and sort of flipped that identity to like, I'm a writer who also does business stuff.

I immediately found other people who were doing very similar things that I hadn't really noticed before. So a good example that popped up is Sam Harris, who's primarily a writer and podcaster, who has a meditation app, Waking Up. Another one is Bill Simmons, a great podcaster, who started Grantland and then The Ringer. All those kinds of people were doing things that are exactly what I wanted to do. But for a while, I hadn't

even really noticed them, because they're sort of off the beaten path of the Silicon Valley technology structure of raising money and all that kind of stuff. They don't fit the mold of a typical VC investment. And so I immediately found other people that had sort of done this path. And then I also began to immediately make decisions that, I guess, changed the reality around me to support this. So I stopped doing meetings in the morning, and

I was able to hire Kate Lee, who's our editor-in-chief. She's incredible. She previously was the publisher at Stripe Press. And so that took a lot of the day-to-day media stuff off of... Media company running stuff off of me. And we did a couple other things like that. And I think most importantly, it was... It allowed me to be like... To myself and everyone else, like, writing is the priority. So, like, I can't be bothered...

like in the morning or whatever; I need to have enough time to get out pieces every week, 'cause that's the main thing that I'm doing. Whereas previously writing had been a little bit more like a guilty pleasure. But getting to do it all the time created a lot more growth and energy and stuff for Every, 'cause, um,

And for me too, because I think there's a significant cost to not, to doing things that you don't really want to do all the time. And so I think for me, like there's this thing where I was like, I feel like I need to act like a traditional CEO of a traditional tech company. And like, it wasn't, it just wasn't the thing that I really wanted to do. Like I wanted to be the CEO, but I didn't want to do it in that way. And so as soon as I like kind of,

I think let go of "here's how it should be done," and the scales dropped from my eyes a little bit, and just figured out, here's how I think I could do it for me, and this is the way it would work for me. Things just started to accelerate. The business started to get better. I was much happier, so much happier. I started doing much better creative work, which has led to all sorts of things. I think my writing is going a lot better. I think we're doing lots of cool creative stuff. We're incubating all these software products. We have a consulting arm. We've got tons of stuff going on. It's like I feel like

I'm a kid in a candy store all day. Like it's really, really fun. And I think it sort of starts from that like very like that sort of core decision to like let yourself be who you are and do what you want to do and like mold your reality around that.

Lots to unpack there. And again, on the similarities, I too wrote a novel when I was 10 or 11. I still have it. It's horrible. It was science fiction, but I got to 120 pages. And at some point, maybe our Infinite Books vertical will publish it, or a different version of it. One of the questions that I sometimes ask myself

that I have found very clarifying.

Which is this. If I'm looking at something I might want to do, I ask myself the question: would I pay to do this? In other words, would I actually pay, instead of expecting to be paid, to do this? And there are vanishingly few things that I would pay to do. But that's how O'Shaughnessy Ventures came into being.

Like, yeah, I would, because I am paying right now. I prefer to look at it as investing, planting the seed. But the idea resonated very, very strongly.

And back to your master plan, look, I found, because we also do a lot of seed and somewhat later stage venture investing, and one of the things that I really look for in founders is agility and the ability to pivot. And

being fine with that. Right. Like I'm a huge Walt Whitman fan. I really do believe we all contain multitudes, and the whole mentality of "stay in your lane" just doesn't appeal to me at all. Like if you, through your actions, learn, whoa, wait a minute, this is probably a really great way to pivot and change.

And I like the way you did it, too. You put old journals into a large language model looking for insights. But then you did something very clever that I also do: you had it write future journal entries for you. I use it the same way. And then you got to your grand design and plan for Every, which you present in a very Maslow-like pyramid, right?

with the base being the free stuff and that's available for millions, and then going up the pyramid. Take us up the pyramid of the master plan.

Yeah, so the way I think about Every is sort of like a pyramid. It's hopefully not a pyramid scheme, but it has a little bit of a pyramid structure, so bear with me here. At the base of the pyramid is all the content that we make, particularly the free content that reaches a lot of people. So we have a newsletter that we publish every day, and we publish long-form essays about what's next in technology.

And then we also have a podcast and a YouTube channel. We have the podcast AI and I, and on the YouTube channel we publish that podcast and other videos. So for example, I recently did a video on ChatGPT's advanced voice mode. And that's really cool. So we publish a lot of great stuff. And then above that, we have

our main offer. And our main offer is the thing that we're mostly trying to sell to people. And that's the Every subscription. That's 20 bucks a month. As part of the Every subscription, you get access to all the content we publish; some of the content is paywalled. You also get access to discounts on courses that we run. And then we also incubate software products.

We just launched an incubation called Spiral. We can get into that, but it helps you automate a lot of the repetitive creative work that you do every day if you're a founder or a creator. Another one is we incubated this company Lex, which my co-founder Nathan built internally at Every and then we spun it out and now it's its own company. And Lex is sort of like Google Docs with AI built in. So it's a document editor with AI. We have a couple other incubations in the pipeline. And so as part of the Every subscription, you get access to content and you get access to this like

growing library of other products. You can think of it a little bit like Amazon Prime or something like that in that way.

And then above that, we sell higher ticket items. So each level of the pyramid is like smaller and smaller numbers of people. So millions of people at the bottom, you know, maybe tens of thousands, hundreds of thousands, maybe more at some point in the sort of every main subscription level of the pyramid. And then above that, we sell courses. So those are like usually $1,500 to $2,000. And those are like B2C courses to our subscribers. Above that, we do...

A lot of consulting and training type stuff. So, you know, big companies who want to figure out where to integrate AI into their companies and how to train their people. We do a lot of work with companies like that, and that's much higher ticket. And then above that is speaking or advising, stuff that I need to personally be involved in. That's higher ticket still, and fewer people. And yeah,

I think the pyramid concept is interesting because again, it's like this shape of a business that's like,

totally anathema to the traditional Silicon Valley way of running a company, because you've got to focus on your one revenue stream. And this has four different ones, and I didn't even list some of them. We have a whole sponsorships business. So you have sponsorships, and then you have subscriptions, and you do consulting, and then you do speaking, and then you have software? Like, what the fuck? That would sort of be the response. And I totally understand that. I think that makes a lot of sense.

But I think for media, what I have come to learn is that for one, media is a very difficult business. So you need to have as many revenue streams as you can once you get to a certain point. You probably focus for a little while to get to a certain base. But if you want to grow above that, you need to have different revenue streams. And then you have different members of your audience that have very, very, very different willingness to pay. And if you want to maximize what you can get out of that business, which I think you need to do if you want to do

A lot more interesting stuff. You need to figure out how to create different products to reach the different people in your audience who have different needs. And I think this is particularly important because the environment is sort of constantly changing. So, for example, about, I don't know, seven months ago or maybe a year ago, Elon changed the X algorithm to deprioritize links. And so our traffic went down, right? Because a big part of our top of funnel was X.

And so I had to make a decision: well, what do we do about that? Right. And obviously, there are lots of things we could do; we need to find more growth channels. But what I decided to do was, for now... we actually had a pretty sizable audience of people who were, you know, running great businesses, or famous investors, or whatever. And I was like, well, why don't we just focus on bottom of funnel first? We'll expand.

We'll take the relationships that we've already built with really interesting people and we'll expand those relationships by selling consulting or software or, you know, training like that stuff that we can sell. And then we can take that cash and funnel it back into the bottom of the pyramid. And maybe we run ads or maybe we like hire a growth person or whatever. Like there's lots of stuff to do once you have the cash. And that bet has actually been really good for us. And I think one of the...

One of the fun things about media is like all this stuff that we're doing, like whether it's consulting or building products or whatever, like these are all announcements that we can make that go viral, that actually just help our bottom of the funnel, regardless of the eventual success of those businesses. So like everything actually sort of works together in this really nice way. And I think like I actually didn't even have the kind of pyramid structure in my head. I was just like, well, my business is like this crazy like

concoction of things. And then I went to this creator retreat run by Tiago Forte, who's an incredible creator and author and course creator. And someone else, his name is Chad Cannon, I think, just drew that on the board as the way he structures his business. And then everyone was like nodding. And then everyone was like, yeah, this is how our business works. And I was like, oh, this is not stupid. This is just a different path that other people take that I've never heard of because I've been reading too much hacker news, you know?

And that was just really, really, really, really helpful and clarifying because

I think it's hard to make decisions that go against what other people around you are doing. It requires you to look at reality yourself and ask: what do I see here? What are the local decisions that I can make that seem to make the most sense, as opposed to, what did I hear is the right thing to do, or what was my original vision for what to do? You need to be more ground-level about what's actually going on. What do you need to do?

And you can arrive at a place that feels like a mess, because no one else around you says, this is a good idea. But then you might meet other people who are like, actually, no, we found the same thing. And you're like, oh, great. This is cool. I'm not crazy. And yeah, so that's the kind of story of the pyramid.

Howard Bloom, an author who I think is really underrated because he's written a lot of insightful books, uses the metaphor of how you can create a Schelling point, right? It serves as a beacon to find your people. And he uses this great metaphor: if you take a beaker filled with water, put a bunch of salt in it, and boil it and boil it and boil it, apparently the salt

visually disappears. So it looks like just a clear beaker of water.

But the cool aspect, and I haven't, full disclosure, I haven't actually done this yet. I really do want to try it to make sure that it actually works. But the metaphor that he uses is you take a single grain of salt and drop it into that beaker. And what happens is all of the salt is attracted to that single grain. And suddenly where you saw only clear liquid, you see this mass of salt. And he says, yeah,

Finding your people works kind of the same way. But he also adds something that I deeply believe: you've got to be really courageous. Like most of the stuff that I have been able to do and been successful with,

When I would say to somebody, well, I'll give an example. So I decided, okay, the stock market is the Olympics of business. I want to go and see if I can get a gold medal over there. And as I'm thinking about it, I'm like, huh, I suppose that, and remember, this is prehistoric. No internet, no anything, really. And so I decided on one of my walks to

I have to write a book, because I will get the author premium. And back in the day, if you were going to be an expert on something, you had to have a book. And so I walked in, I was quite excited about that, and said to my wife, yeah, so I've decided that the way to really jumpstart this company is for me to write a book. And she's like, okay, Jim, I know you were editor of your high school newspaper, but

Have you ever written a book? And then I pulled out the one I wrote when I was 10. And I said, yes. And she goes, well, yeah, but that's horrible. So, A, do you think you really can write a book? And B, you know, you've got to get somebody to publish it. And I'm like, details, details. And the point being that a lot of my ideas, when I would spring them on people,

they were like, oh, that's not a good idea. It's one of the worst ideas I've ever heard in my life. And so I started to just kind of build an armor to those kinds of things because I wanted input. I wanted people's feedback because

Because God knows I don't know everything. And no doubt I was quite delusional, thank God. But I would use them to reality-test, right? And then I'd think, you know, that is a good idea. And that's why I was intrigued by your use of AI with your own work, your own journals. Mm-hmm.

Because I do that all the time. We are in the process of building what we call Sim Jim. He has not debuted yet, but he's going to be based on 40-plus years of things that I've written. And, you know, I joke that I'm going to have used the AI to tell me who I am. And that's kind of what you did, right?

Yeah. And before we get into that, I just think that the salt metaphor that you use is really apt. And I think it actually connects quite a bit to the sort of courage idea that you brought up right after that, which is it's not only a good metaphor for finding your people. I think it's also a good metaphor for how progress tends to happen when you're doing anything new.

which is that people think progress is linear, and it actually does not feel linear at the time. It happens in jumps. You see a clear beaker, and then you drop a crystal of salt in it, and suddenly it all just turns into salt. It's a lot like that feeling. And I think that's why there's that adage about how

it takes 10 years to have an overnight success. That is actually what it looks like from the outside: an overnight success, or an overnight whatever. But it's really the crystallization of all the salt that's been boiled in the water, happening because of one little thing. That little thing doesn't matter; it's the years of adding salt to the water and boiling it that matter.

So to your point about having AI tell you who you are, I think that's exactly right. It's a joke, but it is also real. These tools are incredibly effective at taking in lots of information and reflecting back what they see. And for someone like me, being told who I am is incredibly important, incredibly valuable. I think it's useful for everybody, but for me in particular, for example,

I have a lot of tendencies that make it easy for me to forget who I am. A really good example is just this stuff we've been talking about: am I a writer or am I a founder, or blah, blah, whatever. And sort of subverting the writer identity because the founder identity was

a little bit more prestigious. A little bit more... it felt like I could just say that and people would get it. All that kind of stuff. And I think there's lots of stuff like that about me, and probably about everybody, but me in particular. I'm just...

My sense of self is, I think, a little too sensitive to what's going on around me. And having something that I can spill my guts to, and have it reflect back, this is what I see, helps me create a little bit more of a stable sense of: no, this is what I'm like, and this is what I want. And so using it in that way, as a little bit of a mirror, I think, is incredibly valuable.

Yeah. And it's one of many use cases that I got excited about early on. I go way back, to when they still called it machine learning, and it made me think of another essay that you wrote. I had a conversation with a friend who was also a contributor to OSAM. He loves stock markets, but he was literally a machine learning expert.

And he had done so well with machine learning that I called him. I'm like, so walk me through this. Tell me how it works. I'm really fascinated by this; I think it's kind of the next frontier for quantitative analysis, et cetera. And he said something that resonated, because you wrote an essay about this. He said, here's what it is, Jim. He said,

The human desire to know why something happens is going to be the bottleneck for using AI. He said, Jim, I can build you a system that tells you what, that tells you when, that tells you how.

but will not tell you why. And he goes, knowing you as I do, you'll fucking love it. He said, but knowing other humans as I do, they won't.

And, you know, I was always kind of driven by Wittgenstein's maxim: don't look for meaning, look for use. So talk a little bit about that, because I loved that essay and resonated strongly with it. But as my friend pointed out to me: Jim, remember Young Frankenstein, Abby Normal? He said, you're kind of Abby Normal.

And he said, I think it's one of the most significant bottlenecks for regular folk, who are going to be like, but why? But why? Tell me why. I love this. You're really speaking my language. I think you're referring to a piece I wrote a while ago called Against Explanations. Yes. And the basic idea is this:

As a society, we're sort of obsessed with finding scientific explanations for things. And we can define a scientific explanation as causal: if X happens, then Y will happen, so we understand the full system. We've been obsessed with that probably since the Enlightenment, but

we can probably go back to Plato and the idea of forms, right? Yes. Perfect mathematical constructs that don't really have physical reality, but all of physical reality is sort of an inexact copy of them. And wisdom is about finding the forms, which then

you can sort of trace to Aristotle, who was Plato's student. Aristotle located the forms not in some immaterial world, but in the interplay with your observations: the form is sort of contained within what's in the world. And Aristotle's outlook, which was a lot more empirical (his father was a doctor), filters into the scientific method, right? And you can see that in

Darwin and Galileo and Newton, all these figures who precipitated the Enlightenment, the scientific revolution, which has been incredible for us, right? But I think what's really interesting about that revolution, built on a lot of what happened in the Enlightenment, like Newton's discovery of the laws of gravity, is this:

any endeavor that makes progress, we try to reduce to something like calculus or the laws of gravity, where we can say it's a little equation: if X happens, then Y happens. And I think that's been successful to some degree. But as we've levered up into more complex fields like biology, psychology, social science, economics, all that kind of stuff,

we found that it's actually quite difficult to do that, and we've been trying for a really, really long time. You can look at a field like psychology and say, well, the reason there's all this p-hacking and so on is that psychologists just aren't good enough scientists, and if we tried harder and had better methods, we would be able to reduce it down into something like a calculus.

But my feeling is that we are drunk on explanations. We're drunk on our idea of ourselves as rational animals, on the idea that being rational, having an intellectual understanding of something, which again comes back to Socrates, Plato, Aristotle, is the highest form of human existence. And so if you don't really understand it, if you don't have an explanation for it, then it's not good enough. And to be clear, I think that got us really far.

But I think the reason there's so much emphasis on rational explanation, scientific explanation, causal explanation, is that until a couple of years ago, in order to make good predictions about the world, we needed causal explanations. The good thing about calculus is that you can calculate the trajectory of a cannonball pretty easily.

And we've been looking for explanations because we think that's what we need in order to make predictions, especially predictions that are inspectable, that can be transferred from one person to another. A great thing about an explanation is that I can explain it to you, and then you can use it and do the calculation yourself. They're checkable, all that kind of stuff.

And what's really interesting about AI is that it's the first time we've had something other than a human that can make predictions unbundled from explanations. We don't need the explanation anymore in order to make the prediction. And immediately what a scientist will say is, well, that's correlation, not causation. And I'm just like, fuck it.

Let's do the correlations first. I think the ability to make predictions without explanations that you get from machine learning or AI is really important, because it can help us make progress in areas of study, like psychology, where

progress has been really, really hard to come by without it. We've been studying the brain and human psychology for at least 150 years, since Freud, and we still don't really know what depression is, or anxiety, or any of that, right? We don't have a scientific explanation for it.

And I think what's really cool about deciding to do away with explanations, at least for the time being, is that you can turn the search for a definition of depression, which again goes back to Plato and Socrates, into an engineering problem. You're not asking, what is this?, which is a scientific question. You're asking yourself,

how do I fix it, or what do I do with it, which is an engineering problem. And you can solve that latter question by being able to predict. So instead of knowing what depression is, we can just predict who's going to get it and which treatments are going to work for them. One thing that's really interesting is that it can help people immediately, and I think that's really important. The other interesting thing is that if you have a good enough predictor for a phenomenon like depression,

there will probably be some sort of explanation in your neural net. And what's interesting about that is that neural nets are a little bit more interpretable than human brains are. So we may be able to find an explanation for depression in a neural net that predicts it, rather than by studying the brain. But to bring this all full circle, I think what we will find is that the explanation for depression, or some complicated higher-level topic like that,

is so big that it's not open to rational analysis. It's too big to fit in our rational brains. But what's really interesting is that anyone who's had an interaction with a really, really good clinician knows that some people just have an intuitive sense of

what's going on with someone and how to fix it that other people don't. It's just them building up this intuition that they can't even explain, but it's in there, right? And because of this whole thing that started with Plato and Aristotle and Socrates, we've really jettisoned our reliance on intuition and said intuition is sort of bad; to some degree we should be finding these explanations instead.

And that's for a lot of reasons, right? When you use intuition, which is really about correlation, you're open to starting to believe in spirits or ghosts or whatever, non-empirical things. And the emphasis on rationality allowed us to create provable explanations that could be transferred between people, which allows for better collaboration, which helps us make progress. And what I think is interesting about

AI is that we're building models that work a lot like human intuition, in that they can make predictions without explanations, and yet those models have a lot of the properties of explanations: they are transferable, they are inspectable and debuggable.

And so they take this innate human thing, intuition, and put it into a machine that can replicate it to some degree. And I think we may shift our view of the value of human intuition, because we suddenly have machines that can replicate it, transfer it from person to person, and help us make progress. So I think what we should be doing is de-emphasizing our search for

causal explanations, our emphasis on rationality, and instead re-emphasizing our view of human intuition and the ability of machine learning and AI models to replicate its best properties and use it to make progress.

You know, that's why I started this chat by saying this is going to be a series, because literally we could do an entire show on this. I completely agree with you. And you've got to remember my career. I owe my career to being, you know,

right, totally analytical: replicable models, where if you ran the same back test on something, you would get exactly the same results as I got. And as I did it, I became

very aware of the fact that my dad and I both loved philosophy. He was a Platonist, so I decided, well, I'm going to be an Aristotelian. And we would have great talks, really educational, because

I would bring up a point, and he'd be like, oh, I haven't thought about that. But then he'd come up with a good example. And it really led me to this idea that you've got to unite the two. You've got to be Apollo and Dionysus. You've got to be Plato and

Aristotle, or you're missing half of the way of getting insights. And I often say that one of our greatest challenges is that, for the most part, most of us are deterministic thinkers living in a probabilistic world, and generally hilarity or tragedy ensues. And one of the things I see as I go back through my old

writing and my new writing is that I was always

acknowledging that, but I was really suppressing it, because I wanted just the facts: nope, we've got to make sure we know what the cause is and what the general outcome is. And then I did a conversation with Rupert Sheldrake, who I think is one of the most brilliant scientists working today. And yet to the materialist citadel, he is the heretic of all heretics, because of what he's espousing, right?

And you know what? These 10 laws of science, he has a great TED Talk where he just demolishes them all. But what we started talking about in our chat was this idea of intuition, and how do we test it, right? I look at intuition this way: I've got a long career in finance, for example.

And when you see the same pattern time and time and time and time again, you get this saturated or imbued intuition. And it's something you're really not aware of at the conscious level, but you are incredibly aware of it at the intuitive level.

It's like in 2006, I was marching around saying to anyone who would listen: if you can short your house, short your house. Real estate is going to be a debacle. And then they'd say, why? And I'd say,

I've seen this play out time and time and time again. And so we're actually in the beginning stages of trying to build an intuition app with Dr. Sheldrake, in which we would make it fun. And the object of the app would be to, A, see how good your intuition is;

B, if it is pretty good, see whether we can improve it; and C, Dr. Sheldrake is willing to run scientific-method tests on it.

And what I love about him is that if he comes up with a null result, he publishes. He's like, well, I thought that this could happen, but it didn't bear out. So I'm a huge fan of entertaining a multiplicity of models, right? That's one of the reasons I love AI so much. And one of the things you talk a lot about is that we're moving from a knowledge economy to an allocation economy.

And in an allocation economy, again, we are so aligned it's scary. I might try to hire you before the end of this, or maybe I'll just buy everything; we can talk about it. But the skills that matter there are things that are really difficult to quantify, right? Good taste.

I have been saying for several years now that if you have good taste and you're a good curator, man, is the world going to be your oyster. Because I think what's going to happen, especially with AI, is we're going to see a tsunami of AI-only generated content. And at the beginning, it's going to suck. It's not going to be great, because it's going to lack the human spark. Now, do I think it could and will get better? Yeah, I do.

But for now, I'm a huge believer in the centaur model: human plus machine making for the best result. And Brian Roemmele, who's a friend, says we shouldn't call it artificial intelligence; we should call it intelligence amplification. And that's one of the things that I like, and that you advocate for, in the allocation society, or economy, excuse me.

Tell us a little bit about how you came to see that switch from the knowledge economy to the allocation economy. Yeah, I guess I was just observing my own patterns, both in using AI and in running a company. I use AI all the time: for writing, for programming, for decision making. And I was trying to figure out, what's a good metaphor for what it is like to use a model a lot?

I realized it reminded me a lot of the interactions I have with the people I work with at Every as a manager, and of a lot of the skills of being a manager. A really, really good one is knowing when to delegate and when to micromanage or look over someone's shoulder. That's a skill every beginning manager has to learn, and everyone fucks it up.

I've been cursing a lot. I don't know why; something about this interview. It's me. I'm a bad influence, because I had Senra. I don't know whether you follow him. I do, yeah. So I had him on the podcast and we're friends. He's great. And Patrick, my son, has him as part of his Colossus network. And the first thing David said was, Jim, is this a cursing podcast? I went, fuck yeah.

And he was like, he just was so relieved. That's great. I'll have to text him and let him know that I followed his lead and I dropped a couple F-bombs on your show. Yeah, but he still is the king. He's dropped more F-bombs than any other guest. I can't compete. I definitely cannot compete.

So going back to the manager idea, this delegation-versus-micromanaging thing is something every manager has to learn. I struggled with it. I still struggle with it to some degree, and every time I watch someone at Every become a manager, I see the same thing. And I realized that a lot of the same things happen when people use models. They'll be like, well, I delegated X task to it, and it came back and it didn't do a good job.

Or, well, in order to get it to do a good job, I have to tell it every single step, and I'm basically just doing the work for it. And I realized, oh yeah, this overlaps a lot with the problem that managers have. And what's interesting about the skill of being a manager is that it's not very broadly distributed. I have some stat in that piece, I can't remember exactly, but it's single digits: only a single-digit percentage of Americans are managers.

And that's because currently it's very expensive to hire people. So there are only a few people who get the opportunity to learn how to hire and run a team.

And I think one outcome of the AI age, or whatever age we're in, is that those same skills will become much more broadly distributed, because they'll be used not for human management, but for being a model manager. And models are going to be much cheaper and much more broadly available.

And so I started to think about what happens in a world like that. It might mean that the way you are compensated, and the skills that are valued, are those management skills: when to delegate, when to micromanage, your taste, your vision, your ability to break a task up into its smallest components and then know which resources to delegate each part to, all that kind of stuff.

And so I was like, well, in that case, maybe we're entering an allocation economy, where you're compensated based on your ability to allocate intelligence. That's where it came from. Yeah. And when I started O'Shaughnessy Ventures, one of my North Stars was: we are going to imbue AI into every aspect of everything we do. And it's really interesting, because

I got resistance on certain things that I really was not anticipating. So, for example, AI is really good at creating characters, right? And we have Infinite Books, a publishing company.

Now, we're going to have AI do a lot of the drudge work that is currently a bottleneck at other publishers. Simple things like: when you send your manuscript in, you're really eager to see what it looks like as a book, right? They're supposed to typeset it so it looks the way it will as a book. And if you've written a book, you know that reading it that way, you're like, oh my God, I didn't even see this, this, or this.

And the author is very eager for that. In traditional publishing, having done four books, I know it takes a long time between when you give them that manuscript and when you get it back in a different format, with hopefully some typos found and some other editorial choices made. Well, with AI, we'll have it back to you the next day.

Another thing I thought was, you know, and I loved seeing that you referenced it: Dumas used a writing team for The Three Musketeers. And I was super bullish on this idea; every TV show, every movie, they all have writers' rooms. It's not a single author.

And one of the things I wanted was to create an AI writers' room. I love history, and I particularly like the way Will and Ariel Durant write their books. Why don't I just download everything that Will and Ariel Durant have ever written and create an AI Will or Ariel Durant, not to write it, but to work with me, right? I'm a big believer, I've always believed, that I was at best a co-creator.

Right? The universe and the muses and everything else, you've got to give them credit, because you're co-creating. But when I pitch this to a lot of authors, they're like, never. Never will I. That's an abomination.

And I don't think like that at all. I put a lot of my ideas into AI and say, show me my weaknesses. I did it with some of yours, using a large language model. What I put in was your idea that we're moving to an allocation economy.

And I did this with commercially available ChatGPT from OpenAI, so anybody can do it. And I said, so tell me, is this argument weak? And it came back with, yes: when you simply say knowledge orchestration is the most important bottleneck,

the statement is very broad and lacks specifics as to why. Maybe data quality is the biggest bottleneck. Maybe computational power is the biggest bottleneck. And then, like I do with my own stuff, I said, oh, cool. Take what you've just criticized and give me the improvement. And it did.

And it goes, okay, I think what we should do is use concrete examples. And it started listing them for me. In healthcare, the case study will be IBM Watson for Oncology. What was the issue? It struggled to deliver good diagnostic guidance.

It was not accurate in predicting cancer treatments. Then, to show how to enhance the knowledge orchestration, it gave all of the examples: the framework of clinical guidelines, real-world data, all of this. And then it goes, this suggests that it really is knowledge orchestration,

because it then addressed each of the alternatives it had said it could have been. Was it a data integrity issue? Was it this? And then it's like, no. And it just kept going.

It gave financial services, JP Morgan; it took it through everything. And I just do that by rote these days, right? Every idea I have, I throw in. We have our own internal AI stack, but you can do it with any of these. So why do you think people are reluctant? Like, I just look at this as: God damn it, this is the coolest tool in the world.

I mean, I obviously feel the same way. I spend my whole life playing with this stuff, and I was honestly doing something very similar today. One of the reasons all this Greek stuff is on my mind is that I've been on a Greek kick, because I'm writing a piece tracing the philosophical emphasis on explanations and all that. As part of the research for that piece, I'm reading this book called The Trial of Socrates, by I.F. Stone.

Oh, yeah. Good book. Do you know it? Yeah. It's really cool, because Socrates is this intellectual hero. He's a martyr for a reason, right? He's at the base of Western culture in a lot of important ways. And

I.F. Stone's point is: but the Athenians executed him. Usually, if you read Plato, you're like, well, obviously the Athenians were stupid. Why would they do that? And I.F. Stone is like, no, he was kind of an asshole. He sort of deserved it a little bit, you know. He was pretty anti-democratic, and a lot of his pupils were the ones who led the overthrow of democracy a couple of different times in Athens prior to his execution. Yeah.

And Athens was remarkably tolerant of a lot of his anti-democratic ideas until, you know, a couple of revolutions, oligarchic takeovers of Athens, led them to be a little more sensitive about it.

And so anyway, I was reading that. And what's really fun is I have the new ChatGPT advanced voice mode, and I just opened it up and put it on the table. As I was reading, A, I was asking questions: who is this person? Give me a little background. And then I'd go back to reading. And then B, I think what's funny is, you know, the usual Platonic or Western case for Socrates is pretty biased, and I think I.F. Stone

is biased in the exact opposite direction. Which is kind of funny; it's not balanced. He's just like, Socrates sucks. So he would say something in the book, and I'd be like, I'm kind of curious, what's the other side of this? And I would just ask: what's the counterargument here? And it gave me a really, really good counterargument. And then I was looking for a more balanced perspective. And one thing that I find

kind of interesting is thinking about someone like Socrates. You either villainize him, or you kind of knight him, think of him as a little bit of a saint, and glorify his emphasis on rationality, on an internal idea of right and wrong, which both led him to found Western philosophy and probably led to his execution. But I think,

When you look at it that way, you sort of dehumanize someone like him. And it's really interesting to think of Socrates as a human and to think of him in psychological terms. And so I was like, okay,

pretend you're an expert clinical psychologist and you're examining Socrates. How would you talk about him? Use clinical terms. And it started talking about, yeah, he has this deep internal locus of control, he has extremely high standards, he has a sort of rigid thinking style that can't accommodate, like, a

Yeah.

pretend like he doesn't know anything and ask questions so that he doesn't piss people off, but then he pisses them off anyway. Anyway, it was fascinating. It was amazing. I loved it. It was so cool. And it felt like a little bit of this co-created discovery of a new view of Socrates that I think is actually true and interesting and valuable. And so I think these tools are incredibly helpful for that, and they kind of make me feel like I have superpowers.

And as to why people are afraid, I think there are many different answers for many different people. Writers in particular have this real attachment to their words and their thoughts, and to nothing interfering with that.

I think you and I might be a little more open to it. A, we're just early-adopter nerd types. But B, if you run a company, you're a little more used to the idea that even if you didn't do every single part of something, it can still be yours; you get to put your name on it to some degree. It's maybe a similar kind of managerial skill or experience, where it's like,

we did this all together, and also it came from me. Both can be true in certain ways. And I think people who haven't had that experience may be a little more sensitive to the idea that they're giving up something really, really important by allowing something else into their process. But I think the more balanced view of creative work is that there are

tons and tons of contributors to any given creative work. It's never the precious thing where you went up on a mountaintop, or sat in an isolation tank, uninfluenced by anything. That's just not how it works. We can debate the relative pros and cons, strengths and weaknesses, of different influences on creative work, but I think it's inarguable that for a certain kind of creative work, for a certain kind of person, language models are uniquely effective and empowering. And

I think there will be a new generation of creatives who use them to build stuff, and there will be new social norms around them. Right now it's a little bit shameful, especially in certain circles, to admit that you use it for writing. It feels a little weird to talk to ChatGPT if you're in a room with other people. All that stuff is weird now. I don't think it'll be weird in 10 or 20 years. I think people forget, and don't know, that 100 years ago,

maybe a little more than 120 years ago-ish, listening to music alone was considered weird, because music was a communal social activity. You made it together with the people around you. So listening to a recording of it provoked a lot of the same feelings people have now listening to AI-generated music or reading AI-generated text. It's shameful.

It's missing some core element. And I think the core element that's actually missing is this whole sediment of experience and memory and culture that has grown up around the idea of recorded music, which allows us to connect with it in a very deep way. And language models have none of that.

And so I think people like us are drawn to them because they're new and interesting and we like that stuff, but they're missing that sediment. And the sediment is rich and important and amazing. We tend to mistakenly think all these things are frozen in time and will always be the way they are. The reality is that the sediment is going to build up, and in 30 or 40 years we're going to

look at today as the golden age, when language models were so cool and so open. There's so much stuff they're doing right now that, forevermore, they will never be so innocent. It's like the golden age of cinema, at the very beginning, before it turned into the big Hollywood mega-machine or whatever. We're right there with AI stuff, if you're willing to pay attention to it. And I think the mores and the social value of it will change, but it'll take a generation.

Yeah. Have you read Joseph Henrich's book, The WEIRDest People in the World? Yes, I love that book. It is literally sitting right in front of me. Me too. And one of the really good points I think he makes is that we have cumulative, aggregate cultural evolution,

and it stands outside of normal evolution, and it actually does change our physical characteristics. He makes the point that literate people's brains are literally shaped differently. What happens with highly literate people is that the ability to read colonizes an area that was given over to visual acuity.

And when you take a scan of an illiterate person's brain versus a highly literate person's, they're literally different shapes. Yeah. And the idea is very consistent with what you just said. Unless you are really into it, you're going to need that aggregate, cumulative cultural evolution to happen before you're like, oh, okay, yeah, that's fine.

And it's everywhere, by the way, right? Who are some of the artists that modern people think are the greatest artists of all time, right?

Probably a lot of Impressionists in there, Vincent van Gogh in there, et cetera. But when they came about, everyone hated them. Yeah. In fact, their name was given to them as a term of derision, right? That looks like a child's finger painting, right? They did impressions, right? And so Van Gogh sold one painting during his own lifetime.

And now he's considered the greatest artist of all time. And that fluidity, I think a lot of it is due to the need for cumulative cultural evolution. And if one of your edges is that you can see that shit a lot sooner than other people, that's a really nice edge to have, because it's almost invaluable.

And that is not meant in any way to diminish people who don't look at the world that way. I think Bucky Fuller had this great idea, which is: don't bang on people because they're not tuned in yet. If you're tuned in earlier, be happy about that. But remember, you're probably wrong. I always remind myself: I'm probably wrong. I'm probably wrong.

But being tuned in: if you are really into something, like you and I are into AI, of course you're going to be a little more tuned in, because you're all over it. And the example that Fuller gave was the microscopic world.

Before we invented microscopes, we had no idea that there was this entire other world there. And then, I can never say the guy's name, Leeuwenhoek.

He got tuned in, but then how long did it take other people to get tuned in? It took decades. Totally. And so I think that one of the changes is we're getting tuned in faster than we used to, but there's still a huge lag. And in that lag lie massive opportunities.

Agree? I totally, totally agree. That's one of the things that makes me feel so lucky. It doesn't necessarily take so much smartness to

see the world in this way. I just get to live it, because I'm interested in it. And a lot of this stuff is very simple projections out from my own experience. To some degree, because I have ChatGPT Advanced Voice Mode, I get to live a couple of weeks or months ahead of other people, which is crazy. But because I've been using this stuff, and use it

a lot every day, there are a lot of people for whom it will take a couple of years, or maybe more, to do the same thing. And so I get to see a little bit further ahead.

And obviously that doesn't mean I'm some kind of genius or prophet or whatever. And it doesn't mean that other perspectives are not valid. I think there's a lot of AI skepticism that's warranted. In general, the discourse goes from everything is amazing and it's going to be AGI, to we're all going to die. And I think both are wrong. And different people with different personalities all have a sort of

place in the world. It's good to have people who are a little bit more conservative, who aren't jumping onto the next thing all the time, who are stalwarts of society and all that kind of stuff. It's great. But yeah, I wake up every day super, super energized, because I'm like, holy shit, I get to see things that no one else gets to see. All the stuff that I've been thinking about for a long time, finally I can do it. And I'm lucky enough to be in a position where I have the time and the energy and the money and the team and the skills or whatever to take advantage. And so I'm just

trying to do it as much as I can. And it's super fun. I love it. Same with me, exact same feeling, right? I sometimes get short-tempered with people in my family that I love, like my wife, for example. She was one of the original AI haters. She's an artist. She's a photographer.

And so I decided that I was going to just kind of slow-walk it with her. And then I was just triumphant the other night when we're having dinner and she's like, hey,

could your AI models give me examples of what I could maybe improve in my photography? And I'm like, yes, they can, honey, but they're not going to change it, right? Not if you don't want them to. If you don't want them to, they won't change anything. But they can give you all sorts of suggestions. Yeah. And then she's kind of like,

okay, would you install that for me on my computer? And I'm like, yes, yes. I have found that that is a better way to approach it, right? And one of the things that I'm also happy about is that the cost of being a heretic today has really plummeted.

In the old days, for most of human history, if you were not in the lane of society, boom, your head gets chopped off, you get burned at the stake. The authorities are like, heretic! Kill him, burn him. And now we get our own podcast, and the worst thing that can happen is people are like, shut up, what a fucking idiot that guy is. Right.

I love it. Yeah. And so my method is just to be like, hey, you might like this, just give it a look-see, and not to push, right? Because you're never going to convince anyone. Here's a great example.

We're caring for my wife's 97-year-old mother. She's still got all of her marbles. She's an amazing human. I want to interview her and put it on the podcast, because here's literally somebody born in 1920-something, right? What she's seen in her life and everything. But one of the other things that I see is,

We have her living here with us, and she had to go to a rehab facility after she got COVID. And Dan, I've got to tell you, when I went to visit her, I was fucking horrified, because there were all of these elderly people literally unattended in their rooms. And it was the saddest thing

that I had seen in quite a while. Walking down to her room, there was this poor woman in her room, basically just repeating, would somebody please help me? Would somebody please help me? And my heart got broken. So immediately I started thinking, well, what about an AI use case that would be literally a companion, somebody that they could talk with?

And then maybe we could emulate their children, their grandchildren. And the reaction that I got from people who I thought would find that an exciting idea? They kind of looked at me like, you monster. Yeah. You want a machine to take care of them? And I'm like, okay, well,

what's better? Leaving them in their room alone, shouting, will somebody please help me? Will somebody please help me? Or giving them a companion? And then they go all Plato, perfect forms: well, no, that should be the family in there with them. And I'm like, well, yeah, there's a huge difference between ought and real.

Right. The fact is the family isn't going and doing that. And as idealized as that might be, that ain't happening. Yeah. So why are you objecting?

to the ability to at least give these people some kind of comfort? But the pushback, oh my God. They looked at me like I was Satan himself. Yeah. I've felt that too, from a lot of people. And I think there is something to it.

There's the tech-bro caricature archetype that's like, I'm going to fix everything, and obviously that's not great, you know? Right. But I think there's this other thing going on on the other side, which is that there is a sort of moral-judgment aspect to new things, which I think sometimes gets in the way of the pragmatic question: is this going to help or not? Yeah.

And I think being able to factor that out, that almost-disgust element, the moral disgust reaction, will open up a lot of these types of things. When people think about, I don't know, AI boyfriends or girlfriends, that type of thing, it's the same kind of disgust feeling. But what people sort of forget is that there are a lot of people out there

who have real problems communicating verbally, in real time, in person, with people. And chatbots are the first time that they can actually have what feel like real social interactions, and practice their social skills, and have something that kind of gets them and is patient enough to be with them in the way that they want someone to be with them.

And you can't argue with me that, oh, it's not real, they should just be sitting in the room alone, they should get out there or whatever. It's like, no, no, this is super helpful for people that need it. And yeah, I think it's probably a bad idea to

create chatbots that are designed to take people and get them sitting in their room, not interacting with other people, and being addicted to it. There's probably some ethical line. But I think it's incredibly valuable, even for me. I have plenty of friends, and I have family that I love, and I have a girlfriend. I have plenty of social interaction. But just having that thing to talk to, where I'm like,

I'm reading a book. Can we talk about it? I don't have people that I can talk to like that, really, about all the different things I'm into. I have people for certain things, but sometimes they're not available. And it is actually a deeply life-enriching thing if it's used correctly. And I think I feel very lucky to, A, live in that time, and then, B, yeah, I do a lot of that gentle stuff.

Gentle pushing with people around me, or hopefully in the writing that I do or the podcasts that I record, because, going back to the example of your wife, photography, for example,

It faced this very same thing when it first came around, right? People were very suspicious that it was going to put painters out of work, and that it doesn't take any skill to just look through a lens and press a button. And it turns out it actually takes a lot of skill. But we don't think of it as a new technology, or as technology at all, because it's been around for so long. Same thing for writing, or books, or whatever. Same exact thing.

And you can make that argument. My experience is that some people resonate with an argument like that, but mostly it doesn't really resonate. So what I just do is, I'm like, you should try Claude for that. And they're like, no. But I keep kind of doing it. And eventually they're like, oh, interesting, this is kind of cool. And I'm like, yeah.

That's sort of what the show I do is about, too, because what I try to do is interview people where we're not just talking about it; we're actually going through and looking at their exact use cases. Like, what are some historical chats you've done with XYZ model? And then we actually use it together. And I think that's a really great way to allow

a lot of people to put themselves in the place of, oh, I can see how I would use it, because I can see how someone else is doing it. Whereas talking about it in the abstract, it's such a general-purpose tool that people have a hard time connecting with it. And so the more people just see other people doing it, and get to try it for themselves in specific ways that are helpful for them, the better.

Could not agree more, Dan. And as I said in the beginning, I'm getting the hook here because we're at the 90-minute mark. I hope this is the first in a series of chats between you and me about AI, the good and the bad, right? So I am not Panglossian about this at all. I understand that we're going to create a whole new set of problems.

Hopefully they're going to be better problems, and hopefully the AI will help us solve those problems. But we didn't get into any of the downsides, of which there are many. It's kind of like the quote, I can't remember whose line it is, but it's a great one:

We invented fire, and guess what? Or we discovered fire, better way of putting it. Fire is really dangerous. But it's also the reason for our prefrontal cortex: when we started cooking our food, that's what created this executive up here. And so instead of banning fire, we had fire departments, fire alarms, fire extinguishers, firemen and women,

fire retardants, et cetera. So the only kind of argument that I have a really hard time with is from the folks that I dub the lunatic fringe. We'll leave names out of it, but a certain someone calling for strikes on GPU facilities, I think, does not advance the conversation.

I agree. And I'm very willing to talk about the challenges, because there will be a lot. But this has been absolutely fabulous. I'm very excited, because I can't wait for the next time I talk to you. Me too. But in the interim, first off, tell everyone where they can find you. I know where to find you; I'm an early Every subscriber. But tell our audience where they can find you, and then I've got a final question for you.

All right. You can find us at every.to and subscribe there. I'm also on X at @danshipper. And you can subscribe to my podcast, AI & I, on YouTube, Spotify, or Apple Podcasts, wherever you get your podcasts. Perfecto. All right. You're going to get more than one bite at this apple. If you've heard any of our other podcasts, you know that at the end we make you the emperor of the world.

And that you can't kill anyone. You can't put anyone in a reeducation camp. You can't write the book, the Republic, because you're a reactionary and you lost the Peloponnesian war. But what you can do, what you can do is we're going to give you a magic microphone.

And you can say two things into it. And the two things that you say are going to incept the entire population of the planet. The next day, they're going to wake up, whatever their next day is. They're going to wake up and they're going to say, man, I just had two of the best ideas. And unlike all the other times, I'm going to act on both of these ideas today.

What are you going to incept in the world's population? This is interesting. I did not prepare for this, but the two things that came to my head first were: be curious, and be kind. I love both of those. Wow. The world would be such a better place if those were the two things that animated the entire population. Because, you know, the final thing I'll say is that everyone says the only constant is change. I used to

really, entirely embrace that. And then I read this kind of crazy philosopher, I'm blanking on his name, I actually wrote it down but I forgot, Bertrand Kitzler, who wrote a book on consciousness. And he made a great point. He was like, no, for human beings the ultimate aim is growth. And then he goes through all of the things:

everything else in the universe grows until it reaches its potential. He uses plants and trees and all of that. And he goes, but we can't physically grow forever. So what is the one thing where we can have unlimited, infinite growth? Our minds. And so I'm a huge believer that curiosity is what leads to that growth. And kindness:

And there's a huge difference between someone who is kind

and someone who is nice. I come from Minnesota, and they have this thing, Minnesota nice. And when I was going back there, I turned to my wife and I said, welcome back to the land of false sincerity and pretend nice. There is a huge difference between being nice and being kind. I endeavor and try to be kind.

I'm not particularly nice, though.

So this has been absolutely great. I'm going to have my nanny, also known as my executive assistant, who literally tells me what I'm going to be doing the next day. I have no idea; I get an email from her every night and I'm like, oh, that's tomorrow. That's awesome, I've been looking forward to talking to Dan all day. But thank you so much. I love everything you're doing and can't wait for our next conversation.

Awesome. Thanks for having me. And I feel the same way. I'm excited to do it again. Terrific. Thanks, Dan.