
Harnessing the Power of AI | Avichal Garg

2024/11/10

The Ben Shapiro Show

Chapters

Crypto uses cryptography and distributed systems to move money and perform computations. It started with Bitcoin in 2009, aiming to create a secure and decentralized alternative to traditional financial systems. Bitcoin's core idea is to prioritize security and self-sovereignty over efficiency, eliminating the need for intermediaries like banks.
  • Crypto utilizes cryptography for secure data and proof of facts.
  • Bitcoin was created as a response to the 2008 financial crisis.
  • The crypto ecosystem is currently worth around $2 trillion.

Shownotes Transcript

Find a fresh, healthy take on grocery shopping at your new neighborhood Sprouts Farmers Market, now open in Leesburg on Edwards Ferry Road Northeast and Route 15. Discover the season's freshest produce, unique products around every corner, high-quality meats, an assortment of vitamins and supplements, and so much more. Sprouts makes it easy to find your healthy with our huge assortment of plant-based, gluten-free, organic, and keto-friendly products. Head over to your newest Sprouts, now open in Leesburg.

And bringing back to the crypto conversation we were having earlier, I mean, this is, you see this in the data. You can look at Pew data or Gallup data or any survey data and you see, you know, do Americans, and frankly it's all Western democracies, you know, how much do you trust your public schools? And it used to be, in the 1970s it would be 70% and now it's down to 30%. How much do you trust newspapers? It was 70%, now it's down to 30%. How much do you trust the banks? It was 70%, now it's down to 30%. So you can go through basically every institution in society that we sort of looked to to make the system work.

And it's just inverted from the 60s and 70s. Avichal Garg is a successful venture capitalist, serial entrepreneur, and founding partner at Electric Capital, where he focuses on investing in innovative technologies in the Web3 space. Garg's background in computer science and engineering at Stanford led him to executive roles at Google and Facebook.

In today's episode,

Avichal outlines the problems with legacy big tech and cryptocurrency's role in the declining power of the American dollar. He also talks about why he's an optimist about the burgeoning AI revolution and how we can all best prepare for the changes it will almost certainly make in our everyday lives. Stay tuned to hear Avichal Garg's perspective on the intersection of technology and politics on this episode of the Sunday Special.

Avichal, thanks so much for stopping by. I really appreciate it. Good to see you. Thanks for having me. Yeah, absolutely. So, you know, your specialty is the crypto world and crypto. So for people who have like no background in crypto, what is it? How does it work? Why is it important? Yeah, so it depends how far you want to go down the rabbit hole, but...

The old school version of crypto is just cryptography. And all that is is math that lets you secure data and prove facts about the world. And it's actually really important to the internet in general. It's why we can do things like credit cards on the internet. That's cryptography. Crypto, as it's come to be known over the last decade, is really how do you use cryptography and distributed systems to move money around and do new kinds of computation. And so this really started with Bitcoin circa 2009. That sort of kicked off what we call the modern crypto era.
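To make "secure data and prove facts" a bit more concrete, here is a minimal Python sketch using only the standard library: a hash commits you to a piece of data (any tampering changes the digest), and an HMAC proves a message came from someone holding a shared secret key. This is purely illustrative; Bitcoin itself uses elliptic-curve digital signatures rather than shared-secret HMACs.

```python
import hashlib
import hmac

# A cryptographic hash "fingerprints" data: change one byte and the digest changes.
document = b"I agree to pay Alice 5 BTC."
digest = hashlib.sha256(document).hexdigest()
print("commitment:", digest)

# An HMAC proves a message was produced by someone holding the secret key.
secret_key = b"shared-secret"  # hypothetical key, for illustration only
tag = hmac.new(secret_key, document, hashlib.sha256).hexdigest()

# The verifier recomputes the tag and compares in constant time.
expected = hmac.new(secret_key, document, hashlib.sha256).hexdigest()
assert hmac.compare_digest(tag, expected)
print("message authenticated")
```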

And that came out of sort of an observation, or really a political belief, that the legacy financial systems were messed up and broken. And so if people go back to 2008, the economy was falling apart. The banks had sort of done a bunch of things that they shouldn't have been doing. And so there was a computer scientist, or a scientist, by the name of Satoshi. It was sort of a pen name. And they published a white paper

articulating how you might do peer-to-peer cash without the need for a bank in the middle. And so how could I transfer money and transfer value to somebody else without a bank in the middle? And it actually was a combination of both

computer science, like legitimate technology breakthroughs, and some really interesting social design. And kind of the fundamental thing that they did as a trade-off, they said, well, the legacy systems are really designed around efficiency. And what if instead of saying, I'm going to optimize for efficiency, I optimize for security? And so you optimize for things like self-sovereignty, I'm going to own my own data, etc.

And I don't want to trust an intermediary. And so rather than trusting these banks, I choose to not trust them. Could I do the same things? Could I actually move my money around? And it turns out you can. And it was kind of a crazy idea. It was published anonymously. A white paper was put out. Some open source code was put out.

And it started to take off. And here we are almost 14 years, 15 years later. And that core sort of insight around Bitcoin is now almost a trillion dollar asset. There's been a whole sort of explosion of other crypto assets around that and cryptocurrencies and crypto projects around that. That whole ecosystem is worth somewhere around $2 trillion at this point. And so a lot has evolved over the last 15 years. But kind of the core original insight was what if I wanted to do things and I don't trust these intermediaries anymore?

which I think touches on a lot of sort of things that people have really started to feel over the last five or ten years in particular. But this was sort of the computer scientist version of

if I want to do the things I wanted to do before, how do I do them without trusting any intermediaries? So you can certainly see why established institutions would not like this, because it's quite disruptive. If you're the banking industry and you're used to people using you for those mechanisms of exchange, and now you're no longer being used, you're not going to like that. If you're a government, you might not particularly like it, because you've been in charge of the monetary supply. And now when you're looking at something like Bitcoin, it has its own value established by the market.

I want to get to sort of the institutional opposition to Bitcoin and to crypto generally. But first, what do you think that crypto has become? So there was a lot of talk early on about wide adoption of crypto as a mechanism of exchange, as opposed to a second property of it, which is a repository of value that basically it's like digital gold. I'm hedging against inflation in the economy. And so I buy a bunch of gold, I put it in my safe and

you know, a year from now, it's worth more money. Some people have used Bitcoin like that, but there are obviously other uses for crypto. What are the various uses for crypto? Yeah, so what it's really become is sort of from that original insight of...

potentially the ability to move money around, I think people started to say, "Well, actually, this is just a new computational infrastructure." What it really does is, if you look at Amazon or Facebook or Google, they really built their entire infrastructure around speed and efficiency and scalability. And so you got massive centralization, which you see on the internet. You got these intermediaries like Facebook or Google or Amazon that sort of concentrated all of the users and concentrated all of the money and all the power.

And it's partly an outgrowth of what the infrastructure lets you do. So the way the internet was designed, you sort of got this necessary centralization effect.

because scale sort of makes things more efficient. In this world, if you start from an assumption of, well, what if I don't want centralization? What if I don't want to trust my counterparty? And what if I'm willing to trade off some efficiency to get these sorts of properties? You sort of start from a different premise. And so the infrastructure has grown in a different way. And Bitcoin was just the first application, this idea of a non-sovereign store of value, almost like a digital gold.

And as that sort of started to take off, people started to say, well, you know, what if you could write code around this? What if it's not just about transferring the value, but what if I could, because as soon as the money becomes ones and zeros, well, ones and zeros is just a thing that a computer can recognize. And so what if you wrote code around that?

And so starting in about 2015 with Ethereum, people started to say, well, if I can write code around money, I can now program money. And when you start thinking about a lot of the world, a lot of the infrastructure of the world, whether it's wills or trusts or escrow agreements, if you bought a house or insurance, securities, derivatives, like what are all of these things? Fundamentally, what they are is here's a pile of resources, here's a pile of money, and I have some rules around who can access that money.

and have some rules around what happens over time with that money. There's cash flows that come off of it or in the event of my passing, something needs to happen with this pile of resources. And now instead of all of that being written in legal documents, we could write it in code. We could write it in software. And that insight sort of kicked off an entirely new sort of arm and branch of what's known as crypto today that started to explore this idea of what would it look like if you could write code that owned money and operated around money.
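As a toy illustration of "rules around a pile of money" written as code rather than as a legal document, here is a minimal escrow sketch: funds release to the seller when the buyer confirms, or refund to the buyer after a deadline. Real smart contracts, on Ethereum for example, are written in languages like Solidity and enforced by the network; this Python version only shows the shape of the logic, and all names and amounts are made up for the example.

```python
import time

class Escrow:
    """Toy escrow: release to the seller on buyer confirmation,
    or refund to the buyer once the deadline has passed."""

    def __init__(self, buyer: str, seller: str, amount: int, deadline: float):
        self.buyer, self.seller = buyer, seller
        self.amount, self.deadline = amount, deadline
        self.settled = False

    def confirm_delivery(self, caller: str) -> str:
        # Rule 1: only the buyer can release the funds, and only once.
        if caller != self.buyer or self.settled:
            raise PermissionError("not authorized or already settled")
        self.settled = True
        return f"pay {self.amount} to {self.seller}"

    def refund(self, now: float) -> str:
        # Rule 2: refunds are only possible after the deadline.
        if self.settled or now < self.deadline:
            raise PermissionError("too early or already settled")
        self.settled = True
        return f"refund {self.amount} to {self.buyer}"

escrow = Escrow("alice", "bob", amount=1000, deadline=time.time() + 7 * 86400)
print(escrow.confirm_delivery("alice"))  # -> "pay 1000 to bob"
```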

And so now the kinds of use cases that you're starting to see that have really taken off are, for example, stablecoins. And this is the idea that actually a lot of the world wants to transact US dollars. And it's quite painful to do that. The example we always use is if you wanted to get money from, let's say, San Francisco to London, and it's Friday afternoon.

you are actually better off going to the bank, pulling out $20,000, putting it into a briefcase, buying a flight, getting on a plane, and going to London, and then handing somebody the suitcase. Because if you send a wire, it's not going to get there until Tuesday. The banks are closed on the weekends. They don't settle wires. It'll take until T+1 on Monday to settle. And so maybe the money shows up on Tuesday, when you could actually have the money in a suitcase there Friday night or Saturday morning. And that's kind of crazy, with 2025 almost here.

There are very few parts of the world where atoms actually move faster than bits and money continues to be one of them. And so now you can actually send money via USD stablecoins and it's 24/7. It shows up instantly. It shows up in somebody's wallet and then they can off ramp it back to their bank account when the banks open.

But it's actually the best, fastest, cheapest way to move US dollars in the world now. And so you're seeing on the order of about a trillion dollars a quarter of money movement happening with this stuff. And for a lot of international use cases. So you're seeing, for example, a lot of remittance volume start to go through stablecoins in the Gulf or in Southeast Asia, where a lot of people want dollars and they can't get access to dollars. And there are large populations of people in the West from those markets.

And it's just a better way to do business. I mean, we use this for investing in startups because we can just send money over the weekend after you invest in a company. And so you're starting to see now businesses start to look at this and adopt this as one of the novel and interesting things. And so once you realize that you can actually... Money is digital and you can write code around money, it opens up this whole new universe of, well, what are all the things that...

were built on technology from the 1970s, and can we do those things in a drastically different way today? And so you're starting to see new emerging use cases even beyond just the basic sort of U.S. dollar money movement, into things like lending, into things like stable swaps, so like Forex sorts of use cases, really sort of base infrastructure that we haven't really had to think about since maybe World War II, you know, Bretton Woods and...

In the 1950s, we reinvented a lot of this infrastructure. But we're reimagining it now with 2024 computer science, which is just faster, cheaper, better, more scalable. So, I mean, all of this sounds pretty amazing, and yet there's tremendous regulatory heartburn over all of this. So, to sort of steel man the regulatory worries, what exactly are people...

most worried about that they are now suggesting there needs to be a heavy regulatory regime around some of the use cases or around the original technology? Yeah, so the backdrop here, I think it's important to also contextualize this globally. So it turns out the U.S. is a bit of an outlier in this. Actually, most of the world is very, very happy to have these technologies. I think it's an interesting history lesson, which is, if you go back to the 90s and the U.S. internet, you know, there's Section 230 that happens, and it was part of the Telecom Act of '96.

And it was a really, really important piece of legislation, because what it did was it opened the door to the internet, the modern internet as we know it. And really what it said was, you know what, these technologies are a little bit different. They're not exactly like newspapers, which have editorial staff, and you can't control these things in the way that you did the newspapers. We've got to think about these rules differently. And so, for example, it set rules that said, well, you know, if you're a digital forum like a Facebook or YouTube, you have different rules than a newspaper might have.

And actually, like, you know, as a media person, you don't need to go to the FCC to get a media license, a radio license, to publish on the Internet. And so these sort of basic ground rules understood that the technology was different in some fundamental ways. And so we needed some new rules to account for how it was different. That ushered in the modern Internet.

The reason America won the internet was because some of these rules were put in place in the late 90s. And so we got Amazon, we got Facebook, we got YouTube, and we got Google and all the sort of mega tech companies. Now, the rest of the world looked at this and today has a set of challenges. Like if you're Europe, if you're Brazil, if you're India, if you're China, you're looking at these things and you're saying they're so, so, so important in society as sort of fundamental pieces of infrastructure, but you have zero leverage over them.

Like, you know, if you want to call in one of the CEOs of these companies and you're Europe, are the CEOs going to come? No, they're American. They're American citizens. They're in America. Like, they're subject to US law. They will abide by most of these laws in these other places. But you have relatively little leverage, actually, over these companies, because fundamentally, they're American companies. So it's a tremendous national security asset that these ended up being American companies. So now the rest of the world looked at this and said, oh, we kind of lost the internet. We don't want to repeat that mistake. And

And so if you look at what the rest of the world is doing around things like crypto or AI, they're really leaning into it. And what they're trying to do is figure out how do we make these companies successful in our domestic markets? Because if we can do that, we'll actually in the long term have leverage and we won't be beholden to these American companies over whom we have no leverage. So actually, the rest of the world has gone very pro-crypto.

The UK, Germany, Singapore, many of these markets are actually embracing it and leaning into the technology. Dubai is giving visas to people to come build these companies.

United States, on the other hand, I think we learned some wrong lessons. Because we were the beneficiary, it's almost like a resource curse: we sort of won the internet, and we forgot in many ways why we won, and we sort of take for granted how powerful these entities are and how important they are from an economic perspective and a national security perspective. And so we're kind of botching it a little bit. And so there's a lot of dragging of feet. There's a lot of opposition. There's a lot of

sort of fear about what these things might do rather than sort of optimism about how amazing things could be. And so the U.S. is a bit of an outlier. And specifically in the U.S., a lot of the concern fundamentally comes down to, in my opinion, a philosophical concern, which is who gets to control these things. And I think from a government perspective, you know, the legacy financial system and a lot of these computational technologies

companies like Amazon or Google, well, you can call the CEO. You can have them come testify. You can yell at them. You can get your PR moment. Or you can fine them. How do you do that with a decentralized crypto network? How do you really think about that? And the reality is you can't. There's no one you're going to come yell at about Bitcoin. You can't. And even if you could, there's nothing they can do about it. The code is open source and it's running and it's autonomous.

And this presents a real challenge, I think. And we don't really have a great framework for how to think about that. I think the last time, in my opinion, that we had to think about these kinds of issues was frankly like the invention of the LLC, like in the 1850s. So if you go back and look at the history of limited liability corporations,

And it was actually a fantastic technological breakthrough. Something like the Dutch East India Company said, let's pool a bunch of capital and we'll go pursue this venture. And you had to go get that charter from the king. So the king would give you a charter to start a corporation, and you could pool assets and then go pursue risky opportunities. And it was really a separation of the capital at risk from the people pursuing the venture with their skills.

It took about 150 or 200 years, but a lot of people started to say, well, wait a second, why do you need to go to the king to talk about these things? Like, why can't I just start a company? And so around 1850,

in the UK and the US, we created really simple ways for people to create corporations. And it's actually really fascinating. If you go back and look at some of the debates that were happening at the time, there were really interesting questions like, well, are these things people? Do they have rights? What happens if there is a fire and somebody dies? Who's liable? Is it the owner of the thing or is it the company or who pays? Will these things have free speech rights? Will they be able to...

participate in elections. What does that mean? Will they have too much power and money? And there were actually a lot of really interesting questions that were surfaced at the time. But what we said was, you know what, this is a useful new technological construct, and so let's create some rules around things like corporations and LLCs. And that was a really big breakthrough because that was a big part of what allowed things like railroads or the Industrial Revolution, because now you could aggregate capital to pursue opportunities at a scale that you just previously could not.

And that sort of fundamental set of questions of like, how does this work? And what are the rules? People had to really sit down and think about things like liability and ownership, free speech rights and all these things. We haven't really had to face that for 150 years. We are now faced with a similar set of questions here around, well, these are decentralized networks. There isn't a CEO that you can just call. So how does liability work?

How does free speech work? If I can write code and just put it on the internet, does it have free speech? What happens if something goes wrong? How do we think about things like KYC and anti-money laundering? And how do we prevent terrorist financing? There are some really fundamental questions here that we need to sort through. I don't think the answer is to just say no.

Because the reality is that that's not an answer because this stuff is happening globally now. And so I think the U.S. has actually sort of dragged its feet and there's been a lot of opposition, but I think it's actually to the detriment of the U.S. to not have people thinking about these questions and trying to answer them.

This past year, we've witnessed not only the ongoing war in the Holy Land, but also something even more disturbing in some ways, an absolute explosion of anti-Semitism across the West. When Jews in Israel need support most, I'm proud to partner with an organization that's been there for decades, the International Fellowship of Christians and Jews. For over 40 years, the fellowship has been building bridges between Christians and Jews. Their work has never been more critical than it is today. Right now, they're on the ground in Israel and Ukraine, providing real help, food, security, and essential supplies to those who need it most.

Your support during this critical time makes a concrete difference. Every gift you give to the fellowship goes directly to helping people with food, necessities, and security measures

that save lives. Here's what I need you to do right now. Go to benforthefellowship.org. That's benforthefellowship.org. Make a gift today. In the face of these many threats, the fellowship's ongoing work providing security to Israelis has never been more important. Remember, that's benforthefellowship.org. They're doing important work on the ground, in the Holy Land, every single day. A lot of suffering people there. Give generously. Head on over to benforthefellowship.org. God bless and thank you.

So when you look at sort of the current regulators and the regulatory environment, give me some examples of some badly thought out legislation or badly thought out regulation. Obviously, if you spend any time in

this world at all, even, like me, sort of casually, the name Gary Gensler comes up a lot. There are regulators who are widely, shall we say, disliked in sort of these communities, in the crypto community, for being heavy-handed, for believing that you can sort of put the genie back in the bottle. What does bad regulation look like? What are the problems? Yeah, well, bad regulation is actually no regulation. It's regulation by enforcement. And so there's no legislation. And so what we really need are

clarity of the rules. And it's pretty crazy; I've never worked in a space that is demanding legislation the way that the crypto industry is. People go to senators and congresspeople and are like, please regulate us. Tell us what the rules are, because we want to follow the rules. Because in the absence of these rules, what you really end up doing is empowering the bad actors. What you end up doing is the people who are willing to skirt the law, who are not willing to spend time working with the regulators or the legislators to figure this stuff out, are the ones who just go off and start doing scammy things. And then people ultimately lose their money.

And so the really famous example of this from a few years ago is FTX, which was sort of an American company. It was a bunch of American founders, but they moved to the Bahamas to run their business. And so it looked like an American company, but in practice was not an American company. It was not regulated by the United States. They were not subject to disclosure requirements in the U.S. But a lot of people trusted them because they looked like an American company.

And it ended up being a $10 billion fraud, and Sam Bankman-Fried is going to jail and so on. But that's a perfect example of how the lack of legislation and clarity creates the opportunity for these sorts of bad actors to come in and exploit the system. Meanwhile, the good actors are trying to figure out how to comply, and it's unclear how to do that.

And so when it comes to some of these agencies, what you really see happening is that they are taking this ambiguity and in the absence of clarity from the legislators are essentially deciding the rules and dictating those rules.

And sometimes those rules are applied arbitrarily and capriciously. These are literally words that judges have used. And a handful of companies that have the resources are now taking the SEC to court and winning. And it's actually pretty remarkable. I mean, for anybody who's studied the history of the agency model and Chevron and sort of these recent rulings that have happened, it's pretty fascinating because historically, you know, if the SEC gave you a Wells notice or said, we're taking you to court,

Their hit rate was so good. They were so trusted. Like your stock would tank if you got a Wells notice. These days, because they've taken such a different strategy of issuing Wells notices and frankly losing in court multiple times, the market just doesn't even take that seriously anymore, which is a real, I think, hit to the credibility of the agency at the end of the day. And so what's happened is we actually don't have legislation. We don't have regulatory clarity. The agencies are trying to do this through enforcement. And then when it goes to court, they're in many cases losing credibility.

And that's just an extremely expensive way to run an industry. And so this is why the industry is going to legislators. And there are many great legislators on both sides, Democrats and Republicans. You have Senator Gillibrand from New York. You have Richie Torres from New York on the Dem side. You have Ro Khanna here from California.

You have Patrick McHenry, who's the chair of the House Financial Services Committee. You have people who are trying to push this forward. But that's what we ultimately need, is we need some sort of legislation to start clarifying what the rules actually are and actually telling the agencies what the rules are. Because in the absence of that, the agencies are taking this into their own hands.

So you mentioned economic growth and innovation. Obviously, the area that's seen tremendous amounts of investment right now is AI. So let's talk about AI. I see a lot of people very, very nervous about AI, about what it's going to do. There's sort of the tech optimist view, which is it's an amazing new technology. It's going to allow for innovation at a level that we've never seen before, because essentially at a certain point, you are going to reach artificial general intelligence, which means...

Why don't you define it? I don't want to mystify this. Why don't you define what artificial general intelligence is? Yeah, well, it's interesting because AGI sort of has a couple of different flavors of it. I think one version of it is... Well, there's another test we can talk about, which is the Turing test. But AGI, generally the way people will describe it is for an arbitrary task, this software could perform as well as an arbitrary human. So, you know, an average typical human. Now, there's a lighter weight version of that that you might say, well, for...

some universe of tasks, will this outperform a human? So it's not universal in the sense that, you know, a human is very good at learning how to drive a car and ride a bicycle and also do their taxes and also learn email and learn new software. And so maybe you don't get AGI that is necessarily good at doing all of those things, but you might have some sort of human-level intelligence that, for certain types of tasks, does as well as a human. I think we're more likely to be able to get that level of intelligence relatively quickly.

The broader version, where it can learn anything with very limited data... You know, humans are phenomenal with a couple of data points. You can teach a four-year-old how to ride a bike in a couple hours. It's pretty remarkable that we can do that. That, I think, is some time away. And so if you define AGI that way, that's probably many years away. If you define AGI as, for certain types of intellectual tasks, it will do as well as a typical human, I think that's on the order of two years away. Now, that whole sort of

can of worms opens up a whole set of questions. I think this is the white-collar analog of what happened in the eighties with manufacturing as robotics sort of entered. And it has real consequences on society, because I think there will be something, if I had to guess, something like 50 to 70% of jobs that people do today that a computer will be better at doing in two to three years. And so what this means is there's real economic pressure for businesses to not have as many employees to get the same output.

Over time, I don't think this means there are fewer jobs. I think this means that companies will do more with fewer people. And so you can actually do 10x as much with the same staff.

But it does present some challenges in the short term, which is that a lot of the jobs that humans are doing today, things like doing your taxes, scheduling emails, think about the number of financial transactions that you do that involve just filling out some PDF and sending it back and forth. A huge swath of the economy, actually, computers will now be better at than humans.

And we don't know kind of what the consequences of that will be societally. From a productivity perspective and a growth perspective, it's going to be phenomenal. Like, it's going to make every business that much more efficient. It's going to make, you know, the economy boom and sort of take productivity through the roof, I think. But the second-order effects, I think, are still a little bit TBD. Yeah, so when we get to those second-order effects, which is, I think, what people are mostly worried about, the societal effects of what happens when, you know, half the population, at least temporarily, hasn't adjusted yet to the shift. Because, as you mentioned, if you look at the history of American...

economic growth, there have been a number of industries that used to represent a huge percentage of the workforce and now represent a tiny percentage of the workforce, and people moved into other jobs. But there is this lag effect. I mean, America used to be a heavily agricultural economy. Then it was a heavily manufacturing-based economy. Now it's a heavily service economy. And so when that is taken over by AI, when AI is doing a lot of those tasks, what do you think is the sort of next thing that human beings are still going to be better at

than machines? Presumably, there will be a cadre of experts in every field who will still be better than the machines, because they'll be the geniuses. You know, if you're a Nobel Prize-winning nuclear physicist and you have a bunch of grad-level students who are working for you, but they're all AIs, it wipes out the grad-level students. It doesn't necessarily wipe out the Nobel Prize-winning nuclear physicist. But

If you're not that guy, if you're everybody else, which is 98% of the population, what exactly do you think is left for those people to do?

Yeah, I think a lot of it will become human-to-human interactions. So there are certain things that humans ultimately will still want human interfaces for. So for example, going to the doctor. So an AI is going to be better at diagnosing you. AI is already better at diagnosing than most doctors today. But there's something fundamentally human about going to the doctor's office and having somebody talk to you and listen to you. That doesn't go away.

And so the modality of how you do your job in a lot of human-to-human interactions will change pretty fundamentally, and we'll have to retrain a bunch of people on how to use these tools properly. But I think what you'll see is anything that involves a human, you know, real-world interaction, I think will probably persist. And then beyond that, I think it's a big open question. You know, I think this is a big, big economic realignment. And so, for example, what does education mean in this new world?

yes, teaching is still a very human thing, but how do we prepare our students for this new world? Or how do we think about, you know, you're talking about the Nobel Prize winning researcher.

I think it's entirely possible that in seven to ten years for some of these domains that the AI is actually smarter than the Nobel Prize winning physicist or whatever. And so when you get the superhuman intelligence on the other side of that, what are the consequences? And I don't think anybody really has a great answer to that yet other than the human systems will persist. Anything that involves information, anything that involves data, anything that involves computer code,

the AI is probably going to be better than humans in relatively short order. So, I mean, that sounds pretty scary to a lot of folks, obviously, especially because we're an economy that is optimized for,

for lack of a better term, IQ, an economy that is optimized toward intelligence. And now you're saying that basically intelligence is no longer the metric for success, because it's going to be available to everybody. In the same way that manufacturing resources have now become unbelievably available to everybody: you don't have to have a manufacturing facility to get anything you want at your front door in a day. It's going to be like that, except with intelligence. And so where the mind goes is two places. Like, okay, so now you're going to be optimizing for

emotional quotient, right? You're going to be optimizing for how well people deal with people. If you're an extrovert, you're going to do really well in this new economy. If you're an introvert, you might be a little bit screwed, right? The engineer types are going to have a real tough time. But if you're somebody who can be a face for the AI to teach kids, then that'll be okay. It also raises serious questions, presumably, about

the educational process itself. What do you even teach kids when... And I see this even with my own kids now, right? I mean, like my son, who's a smart kid, I'll bet his reading was delayed by about a year because he understood how voice-to-text worked. He would just grab my phone, and instead of actually typing in the question the way that he might if he were able to read, he would just...

you know, actually just voice the question and then have the phone read the answer back to him. He never actually had to do any of that. So what exactly are we going to be teaching kids, and how does that affect the brain? Because obviously, I mean, these are all open questions. Like, do you stop teaching kids math, for example, because it's so easy to just do everything using AI? And doesn't that atrophy a part of the brain that you might actually have to use? I mean, there are general skills that are taught via specific skill sets. Yeah.

and then applied more broadly. Yeah, all excellent questions. I don't think anybody has the answer. I think there are two ways to sort of think about it, two data points you might consider. So one is, a lot of this is sort of what happened with the internet. So imagine you're in 1990, and you can see the internet coming. You can see e-commerce coming. You can see video conferencing coming. You can see social media coming. What would you do? That is the question I'd be asking myself if I were a parent right now, if I were a teacher, if I were a white-collar worker.

If you understood that the internet was happening in 1990, what would you have done? And I think probably what you do is you start getting really, really familiar with the technology. You start writing emails. You start buying stuff online. You start buying PCs and sort of staying on top of it so that by the time you're in the year 2002, you've sort of upskilled yourself. You've gotten to a place where you understand what these tools are capable of, and you've reskilled, and so you're able to sort of move into this new world.

You know, could anybody have predicted that in 2010 or 2020 the entire media landscape would look fundamentally different than it did in 1995? I think that was a big leap. But if you understood the tools, you were well positioned to sort of move into that new world as it happened. So I think that's

One mental model you might have is what happened in the 90s and what would you have done? If you knew everything that you knew about the internet today, but you're in 1990, what would you do? And I think that's probably the best you could do is let's just get really, really familiar with the technology. Let's make sure we buy our kids a PC. Let's make sure that they play with it every day. And by the time they're 10 or 12 or 15, they'll have an intuitive sense for how this technology works and then they'll be okay. The second thing I would observe is that all of these transitions are very scary in the moment, right? Whether it's the internet, whether it was PCs, computers,

Whether it was factories, whether it was railroads, whether it was the car, all the way back to the printing press. And there's always this fundamental debate that happens. There's always this fundamental question that happens between two groups of people. One group of people says, you know what, the current system is fine. And actually we don't need this technology, we don't want this technology because it's going to be so disruptive that a bunch of people will be left behind. And there's another group of people who say, no, no, no.

As scary as this thing is, it's a tool that will make life better. And I think every time you look at the tools that humanity creates, if we choose to adopt them, life gets better. So take something like the printing press. It was a big driver in why we have literacy. And I think you would be hard-pressed to argue that people being able to read is a bad thing. There were actually people around that time that said, actually, learning to read is a bad thing. We don't want people to learn how to read.

This creates negative social consequences. What if people can actually read the Bible and understand what it says? That's really dangerous. We don't want that. But I don't think anybody would make that argument today. And so I think AI, we're probably going to look at similarly in 20 years. I think we're going to say, you know what? Yes, it was a transition, but on balance, it was extremely positive.

And it does create some societal questions about how we manage that transition for people so we don't end up with a Rust Belt, the way that we sort of, I think, abandoned a bunch of people in the 80s and 90s and didn't reskill them. And this is a problem that we need to think about. But I think on balance, it's going to be extremely positive. And the best you can really do is just immerse yourself in it. So, for example, at work in our company, what we've done is

Literally every person is required to use an AI. And every day you have to use it. And we share productivity tips. And so we have a Slack channel where people are sharing these tips.

We have an engineering team internally, and we task them with going and actually getting these open source models and running the code and actually trying to build agents that will be able to do the job of a finance person or an operations person so that we can scale our business 10x without hiring more people. And so I think the people that just sort of jump in the deep end today will be at a tremendous advantage in two to five years. All righty, folks, let's talk about dressing sharp without sacrificing comfort.

If you're tired of choosing between looking professional and feeling relaxed, I have great news for you. Collars & Co. is revolutionizing menswear with their famous dress collar polo. Imagine this, the comfort of a polo combined with the sharp look of a dress shirt. It's the best of both worlds, giving you that professional edge without the stuffiness.

Gone are the days of floppy collars that make you look like you just rolled out of bed. These polos feature a firm collar that stands up straight all day long. The four-way stretch fabric means you can move freely and comfortably throughout the day. It's office-approved, so you can look professional without feeling like you're trapped in a suit. And get this, it travels really well, so whether you're commuting to work or jetting off for a business trip, you'll arrive looking crisp and feeling great. But Collars & Co. isn't just about polos. They've expanded their line impressively. They've got merino sweaters, quarter zips, stretch chinos, even a performance blazer they call the Maverick.

It's versatility at its finest. These pieces look great by themselves, under a sweater, or with a blazer. Look at what I'm wearing right now. Don't I look spiffy? I mean, check out this jacket. You see this thing? This thing is great. So if you want to look sharp, you know, the way that I do, feel comfortable, and support a fast-growing American company, head on over to collarsandco.com. Use the code BEN for 20% off your very first order. That's collarsandco.com, code BEN, Collars & Co. Because you shouldn't have to choose between looking good and feeling good.

So a couple of the challenges that people have brought up as possible theoretical challenges: one is called the data wall. The basic idea being, right now, LLMs, large language models, are learning based on the data that's on the internet. What happens when people stop producing the data because the LLMs are actually producing all the data, and then they're really not producing the data, they're synthesizing the data that's already there? So do we hit a point where, because there's no new human-created data actually being input,

basically the internet runs out of data. And there are some people who sort of suggest that that's not going to happen. There are some people who suggest that it will. What do you make of that question? Yeah, it's a very interesting question. I tend to think we will probably not run out of data anytime soon. I think there are large domains that are private data that are not yet fully tapped. And so I think when you think about things like medical records, bank records, financial data, government data, there's a lot of data actually that's not yet been ingested into these systems.

I also think that the AI creating stuff, it does create sort of a self-referential feedback loop, but I think a lot of that data is quite valuable. And so you will have a lot of quote-unquote AI-generated data or potentially even synthetic data. And I think if you can put a layer on top of that to have some sort of filtering that says, actually, this data is good data, you might actually get really interesting feedback loops that actually the AI can sort of learn from the AI.

So I think it's a big open question. I'm sort of on the side of, actually, I think we'll get better and better at using the existing data that we have, and there are still large pockets of data that are untapped. And so I don't think we've hit sort of the top of the S-curve in terms of diminishing returns with data yet. When it comes to another question I've seen raised about this, it's the question of innovation. How much can AI actually innovate, as opposed to sort of

sophisticated recombination? Which I suppose actually goes to, what exactly is innovation? Is it just sophisticated recombination, or is it something else? It's sort of like, you're in the shower and the idea hits you, or the flux capacitor

hits you when you fall off the toilet. Like, well, what exactly is that? How far can AI go in terms of what we would perceive to be innovation? Also because whenever there's a sophisticated technology, human beings tend to then map their own model of the brain onto the sophisticated technology. So for a while it was the brain is a computer, and now it's the brain is an AI. And how accurate is that? Yeah, I think...

It's a tricky question. I think there's going to be a class of innovation that this stuff is really good at, which is when you have large amounts of data and you can interpolate inside those data sets, you can infer a lot. You can learn new things. And so you see, for example, things like protein folding. We have a lot of data and we can start to learn how these things might work and how these systems might work. I think there are some people who think that actually extrapolating beyond the data, creating entirely new frameworks and new ways of thinking is something that humans are uniquely good at.

I tend to be on the side of, you know what, every time humans say that somehow we're special and unique, it turns out to be incorrect. And so whether we're talking about voice, whether we're talking about the Turing test. Alan Turing, one of the pioneers of computer science, created this test that basically said,

if you can talk to a computer, and maybe that's textual, maybe that's voice, but let's say it's text, and you cannot distinguish, let's say you have a black box and you sort of are asking it questions and it gives you answers, if you cannot tell if on the other side of it is a human or a computer, that is intelligence that's indistinguishable from a human. And we've passed the Turing test at this point. You can talk to an LLM and it looks like a human. For all you know, it's a human typing that response and it appears to be self-aware and cognizant.

It's not, but it appears to be that way. And so we actually passed this really important milestone that most people don't talk about, which is the Turing test. And so I look at that and I say, well, there are a whole set of things that humans have claimed for a long time are uniquely human that these things based in silicon are now doing.

why if we have an existence proof for a set of things that we thought were uniquely human that now these computers can do, why would it be the case that there are a whole set of other things that these computers would not be able to do? So I'm kind of on the side of it's just a matter of time that these things are able to replicate all of these things, including the ability to extrapolate and the ability to create new frameworks and to be creative.

I think it may take longer, but I think those things will ultimately fall too. So when we look at the world of AI, and tons of money is pouring into AI, you see sovereign wealth funds all over the planet, people building their own AIs. What exactly does that mean? When we say we're building an AI, what does that mean?

Yeah, that's a good question. There's a lot of pieces to it. The thing that most people ultimately see is some sort of a model that they're interacting with that you can type in text to or maybe speak to. The voice transcription models are getting really good. And the text-to-voice and voice-to-text, all that stuff, all those pipelines are getting really good. That's generally what people

think of. There's a whole bunch of infrastructure behind the scenes that makes that possible. Everything from how do you acquire the data, to how do you clean the data, to how do you get some sort of evaluation on it, to how do you actually train these things on very, very large clusters with lots and lots of GPUs, and doing that at a scale that requires many, many tens of thousands of GPUs and coordinating all of that. There's a lot of infrastructure and computer science that goes into that. And so that's where all the money is going. Ultimately, it's how do you take large, large amounts of data, the entire internet,

and compress it into these very, very small models that hopefully can run on your phone one day. And the entire process of doing that is extremely capital intensive because the sheer amount of computational resource and data centers and power that you need, the GPUs that you need, are so immense to actually produce these models that you then interact with as an AI.

It just requires tens of billions of dollars to be able to actually have all that computational power to create these models. And how do you tell the difference between the models? Are they all using the same data set, or is it just they're using different data sets? Is it weighting? What is the difference? It's an excellent question. I think at the end of the day, they will basically all be using the same data sets. And so a lot of it comes into how do you do the training. And there's a lot of know-how and proprietary knowledge around how you actually train these things. A relatively small number of people in the world know how to do that today.

And out of that come these weights. And those weights effectively let you predict, given a word, given a context of words, what the next word the system would produce. And so those weights, in the case of, let's say, a Llama, are open source; in the case of something like OpenAI or Anthropic, are not open source. But ultimately that's what you're trying to produce. You're trying to sort of take all the data,

compress it, and the compression ultimately is producing this sort of set of weights. And all the weights are really doing is saying, given a word, what is the next most likely word? I'm glossing over a lot of details, obviously. And that's what we call sort of the AI model.
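At a cartoon level, "weights that predict the next word" can be illustrated with a bigram model: training just counts which word follows which, and inference turns those counts into probabilities. Real models learn billions of parameters over long contexts via gradient descent rather than counting single-word pairs, so treat this strictly as intuition.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate".split()

# "Training": count how often each word follows each preceding word.
weights = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    weights[prev_word][next_word] += 1

def next_word_distribution(context: str) -> dict:
    # "Inference": normalize the counts into next-word probabilities.
    counts = weights[context]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

print(next_word_distribution("the"))  # roughly {'cat': 0.67, 'mat': 0.33}
```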

And there's a lot of know-how in how you do the compression. That's sort of all the proprietary stuff, which kind of goes back to the technology problem we've talked about. And that stuff is very opaque, actually. Even though you may be able to see the weights of the final model, you actually have no idea what data it was trained on. That is not disclosed. And so very subtle biases could be entering into the data that we have no idea about, which presents a real challenge, because these are private corporations. They're not subject to our vote, ultimately, at the end of the day.

And so we don't really know what's going in to these data sets. We don't really know what's being kept out of the data sets. That's all opaque. And there's no real accountability at this point in terms of what that means.

And so there's a lot of sort of, I think, open philosophical questions here about, like, who should get to dictate what goes into the data sets and why? And what does that mean downstream for what these models are? And so there's a real push from some in this community to make all of this open source. Not just the weights, but actually open source the entire process. Like, tell me what data you're using. Tell me what data you left out. Tell me why you're using this data. Tell me what your eval scores are. Like, and open source the entire thing. And we'll see if that happens. You know, there's sort of some tricky...

economic questions there, because there's so much money to be made that you kind of don't want to disclose those trade secrets, because you don't want somebody else to be able to replicate this. And so right now, that whole pipeline is effectively closed. And the closedness of it does create the possibility of some pretty dystopian issues. I mean...

Just to take a quick sort of practical example: when you used to use Google, you would Google a term and then you'd get a series of web pages, and they'd be prioritized in a certain way, but you could always scroll the page and sort of figure out what exactly you were looking for. Now you use Google's AI and it spits out

an answer, and it's destroying internet traffic. Nobody has used a click-through link on Google for several months at this point. And by the time my kids are using it a lot, they won't be using links at all. I mean, it'll just be: they type in a prompt and then the answer comes back out. And then how you determine what that answer actually looks like, how they got to that answer, is going to be very difficult. And so you can certainly see the possibility of bias in the informational environment. And obviously,

you know, conservatives are worried about this all the time. Remember when we were first seeing ChatGPT, conservatives were typing in things and it was coming out very left-wing. Yeah. And we were saying, well, what are the parameters that are being used on all of this? Yeah. And there's no visibility into that. And actually, and you know, the argument on the other side is, well, this stuff is really dangerous. We don't want to open source this stuff. What happens if it gets exploited by a rogue government? Or what if bad actors take this technology and do things with it? And so there's this really strong sort of safety movement around some of this. And I don't think it's...

misguided necessarily, but I think the form that it can take is essentially trust us. Like, we are the arbiters of truth. We will do the right thing. But I don't think that's actually the right way to go about doing these things. Just to bring it back to the earlier conversation we were having, I think that there's actually a very interesting historical analog here around religion, which is the sort of

handful of people that really understand how these systems work and ultimately are making decisions about what data goes into these systems, it's sort of like the church in the 1400s, where there's a set of people and they get to dictate what is truth. And

Over time, even with the best of intentions, even with the best of people, if these things are long-lived enough, there is generational transfer. There are new people that come in. And if there's no accountability, those systems have a tendency to become corrupt. And this is what Martin Luther noted when he nailed those 95 Theses to the church door: wait a second, the system has become corrupted in all sorts of ways, things like indulgences.

And there's no accountability. And what people on one side of it said was effectively, hey, look, knowing how to read and read the Bible and understand this stuff is really dangerous. You need an interpreter. You need an intermediary to help you understand how to do this and trust us. We're going to filter it in the right way and we'll do what's right for you and we'll do what's right for society. And there's another argument that said, no, no, no, you should have a direct relationship with God.

You should be able to read the Bible and decide what you do. And actually, the 95 Theses and the printing press happening at the same time was sort of this confluence of philosophy and technology coming together. And I think there's a very similar philosophical and technological tension happening right now. Do a small group of people get to interpret the data and get to decide what goes into these models, and they get to dictate truth, and you have...

very little say over that and whatever they say is true is true? Or do you get to have a direct relationship with the AI? Do you get to have a direct relationship with what data goes into it and how you train it and so on? And there's a lot of people that sort of fall into this world, but it's a scary world because what you have to do if you take that position is you have to believe that humans are fundamentally good. That if you give them this technology and you give them this knowledge, they'll do the right things with it. On balance, humans are good. But that's a big leap for a lot of people. It's a scary thing to hand over that powerful technology to everybody in the world.

So when you look at the risks of AI, you see sort of the catastrophist risks, the idea that it's going to take over the nuclear missiles and just start firing them at people. How seriously ought we to take this sort of World War III end of humanity risk, the Skynet scenario? Yeah, I think it's probably overblown, personally. I think it's...

It's unlikely that this thing runs away from us in a way that we can't control. I mean, literally, you just shut off the data center, right? We have a kill switch. And so that's not to say you shouldn't think about these things and battle-harden your systems and so on. But I think there's a lot of sort of religious fervor around this. And I think the place where it actually comes from is a desire for control.

And it's a fear-based motivation. And it's sort of a degrowth mindset. And I think if you have that kind of a mindset, it's very easy to fall into this trap. And then, of course, the question is, well, then who gets to decide the safety? And the answer is, well, me. And so it's actually sort of a perverse incentive, I think. By and large, the people that are pushing this argument are very self-interested in wanting to push this argument. And hence, I think the argument gets overblown. I think this also sort of feeds from Hollywood. I think for the last

20 to 30 years, the way that the media has really thought of technology has been through this dystopian lens. I think it's actually a really fundamental problem in society. If the interface that most people have with technology is that they use the technology, but then they see media about technology, and by and large that media is negative, then their gut reaction is going to be dystopian.

Whereas if you go back to the 60s, you see these beautiful posters: we're going to have cities on the moon, and we're going to have underwater cities, and we're going to be living on Mars. And so there's this sort of positivity around technology. And so I think a lot of this is actually cultural and societal. It's not really about the technology; it's about where we are as a country and as a society. And I think there are a lot of dystopian undercurrents generally in society right now. It's a fascinating point. You can see where that's coming from, meaning lack of social cohesion leads to mistrust

of one another, and mistrust of one another means we're afraid that the technologies we all control are going to go completely awry and we'll use them to murder one another. If it were just you and your family, and you're like, wow, this is a great new technology that we just adopted in our house, then you're really not super worried that your family is going to blow each other up. Yeah, totally.

But it seems like very often I come back to this: without social cohesion coming back, you will see a sort of revanchist Luddism rise. And I think you do see that on the rise. I think you see people being like, well, this is all moving too fast. This is all too scary. We need to burn the looms. Yeah, that's right. And to bring it back to the crypto conversation we were having earlier, I mean, this is

You see this in the data. You can look at Pew data or Gallup data or any survey data, and you see, do Americans, and frankly, it's all Western democracies, how much do you trust your public schools? And it used to be, in the 1970s, it would be 70%, and now it's down to 30%. How much do you trust newspapers? It was 70%, now it's down to 30%. How much do you trust the banks? It was 70%, now it's down to 30%. So you can go through basically every institution in society that we sort of looked to to make the system work.

And it's just inverted from the 60s and 70s. People trusted these institutions then, and they don't anymore. And I think it's actually warranted. Most of these institutions, if you think about it, were created after World War II. And they've really atrophied.

And it is time for a reboot in a bunch of these institutions. But I think you're right that the root of it is social cohesion. It's like, how do we think about who we are as a people, as a society, and what our values are? Because the institutions are a reflection of that. And the institutions atrophying and the social cohesion being lost over the last 50 years, I think, kind of go hand in hand. Do you think that technology can exacerbate that? Meaning, when you look at crypto, it's actually a great example of this, in the sense that

let's say there are higher levels of social cohesion, and you want to transfer money from, say, San Francisco to London over the weekend, but all the banks are closed. In the old days, or if you had a close-knit community, what you might do is get on the phone with somebody you knew and say, take

a certain amount of money out of your bank this Friday afternoon, and I'll pay you back on Monday. And people would say, okay, well, I know you, I know your family, I know you're good for it. No problem. And as social cohesion breaks down, it's like, okay, we actually need to control for that by creating trust-free systems that allow for that sort of monetary transfer. And the more you rely on the trust-free systems, the less there's a requirement of trust societally in order to get to the same sort of outcome. Yeah. That's an interesting take.

You know, as a backstory, this is how the Rothschild empire started, actually. Exactly. The brothers all trusted each other, and so you could do this kind of stuff. This was their first-mover advantage. It really was. I mean, you had brothers all over the continent, and they would just wire money to one another or send a note to one another in code. Usually, it was actually in Hebrew script. It would be like transliterated English, but in

but in Hebrew. So people think it was Yiddish, but it didn't read like Yiddish. And that's how they had that first-mover advantage. It's still how what Thomas Sowell calls middleman minorities tend to thrive in most societies, specifically because of this. So to take another Jewish example: Hasidic diamond dealers. Everybody's very angry at Hasidic diamond dealers because

it turns out that they'll have a cousin in Jerusalem, and their cousin in Jerusalem will be like, okay, I'm going to call Chaim over in New York and he's just going to do it for me because we're cousins. And so it turns out kinship networks are an amazing source of social cohesion and data transfer. And as that sort of stuff comes apart, as we become more atomized, you have to create tools that control for the atomization, but then

that actually tends to create a spiral of, okay, well, I don't need to trust the person, so why even bother to trust the person? Yeah, I think that's right. I tend to think of these things in different buckets. There's the invention of money as a technology to scale this idea of, I don't need to trust you, because I can pay you money and you'll give me the service that I need, and therefore I don't have to have a barter system or a debt-based system.

And so I think they kind of have to coexist. And the reality is we exist now in a global world. We exist in a highly interdependent world. The internet allows us to talk to anybody in the world and communicate and learn from anybody in the world. And so the technologies that allow us to interact with each other and transact with each other

without knowing each other are, I think, great enablers for productivity and learning and advancement and progress and innovation and all the things that we need at the species level. I don't think they're a substitute for social cohesion. I think it's a separate problem; we need to solve both. We need to solve the problem of what it actually means to be in a society and be in a community, and how we foster those kinds of bonds, because at the end of the day, we're still atoms and we're still humans. We're still social primates, and that's a very important part of society. We also have to solve this problem of:

there are now 7 billion people on earth and we can create tremendous productivity. We can create tremendous growth. We can educate. If you think about it from a meritocratic perspective, or sort of an opportunity perspective: of the 7 billion brains on earth, how many have we really tapped? There's one Elon. There's one Zuck. There's one Bezos. I would argue

maybe we've tapped 7 million people and properly fed them and given them the resources and education and a smartphone in their pocket. And really, the top 10% of humans are actually pretty smart, so that number should be more like 700 million people of that caliber. And so there's like a 100x left in humanity if you can get the resources to these people. But to do that, you have to have these tools that can operate at global scale. So I think you kind of need both; one is not a substitute for the other.

Avichal, this has been awesome, really interesting. And where should folks go to check out more of your work? On Twitter, probably. Just @avichal on Twitter. I publish a lot of thoughts there. But a lot of it, frankly, is in private conversations. Like, the really fun stuff is in private. Well, it's been awesome. Really appreciate the time. Great to see you.

Ben Shapiro's Sunday special is produced by Savannah Morris and Matt Kemp. Associate producers are Jake Pollack and John Crick. Production coordinator is Jessica Kranz. Production assistant is Sarah Steele. Editing is by Olivia Stewart. Audio is mixed by Mike Coromina. Camera and lighting is by Zach Ginta. Hair, makeup, and wardrobe by Fabiola Cristina. Title graphics are by Cynthia Angulo. Executive assistant, Kelly Carvalho.

Executive in charge of production is David Wormis. Executive producer, Justin Siegel. Executive producer, Jeremy Boren. The Ben Shapiro Show Sunday special is a Daily Wire production. Copyright Daily Wire 2024.