
EP 393: When will we achieve AGI? And one secret aspect holding us back

2024/11/1

Everyday AI Podcast – An AI and ChatGPT Podcast

People
Jordan Wilson
An experienced digital strategist and host of the Everyday AI podcast, focused on helping everyday people advance their careers through AI.
Topics
  • We are closer to achieving artificial general intelligence (AGI) than most people think.
  • The definition of AGI keeps shifting, which may be one reason we haven't "officially" achieved it.
  • The fine print of the OpenAI-Microsoft partnership may be a factor holding back AGI development.
  • Traditional AI is built on a set of rules and algorithms; generative AI is different, marking a new era of AI that lowers the barrier to using it.
  • Generative AI lets anyone interact with an AI system by simply speaking or typing and get impressive output.
  • AGI can understand, learn, and apply knowledge across a wide variety of tasks, much like a human; unlike generative AI, it can perform a broad range of tasks at or above human level.
  • Big tech companies now openly pursue or collaborate on AGI research, a dramatic shift from a decade ago.
  • Predictions for when AGI will arrive keep changing, partly because experts underestimate generative AI: before GPT-3, experts predicted AGI was 80 years away; the forecast is now under eight years.
  • OpenAI may intentionally delay declaring AGI in order to preserve its favorable partnership with Microsoft and to support future fundraising.
  • Big tech's open commitment to AGI, plus the falling cost of the technology, is accelerating AGI development.
  • We will achieve AGI soon; by older definitions, we arguably already have.

Deep Dive

Key Insights

Why are the predictions for achieving AGI changing so rapidly?

The predictions are changing because advancements in AI, particularly generative AI and large language models, have significantly accelerated the timeline. Experts initially estimated AGI to be 80 years away, but now, post-ChatGPT and GPT-4, the forecast is less than eight years. This rapid shift reflects the exponential pace of AI development.

What is the role of big tech companies in the development of AGI?

Big tech companies like Microsoft, OpenAI, Meta, and NVIDIA are openly working toward AGI. Microsoft collaborates with OpenAI, Meta is developing an open-source AGI, and NVIDIA provides the hardware and software for AI development. These companies see AGI as the next step in AI evolution and are actively researching and investing in AGI technologies.

How has the definition of AGI evolved over time?

The definition of AGI has evolved as AI technology has advanced. Early definitions from 10-20 years ago focused on a system's ability to learn and solve problems across different areas, similar to a human. However, as large language models and generative AI have become more sophisticated, the goalposts for what constitutes AGI have shifted, making it harder to officially declare AGI has been achieved.

What impact does the OpenAI-Microsoft partnership have on AGI development?

The partnership between OpenAI and Microsoft is pivotal for AGI development. Microsoft has invested $13 billion and holds a 49% stake in OpenAI. However, once OpenAI declares it has achieved AGI, the partnership terms change, potentially altering the dynamics of their collaboration. This relationship is crucial as it provides OpenAI with significant resources and insights, but also complicates the declaration of AGI achievement.

Why might OpenAI delay declaring AGI achievement?

OpenAI might delay declaring AGI achievement because doing so could alter their partnership with Microsoft, which provides significant funding and resources. Additionally, declaring AGI could complicate future fundraising efforts, as investors might see less potential in a company that has already achieved its primary mission. Therefore, OpenAI might continue to move the goalposts to maintain its current advantageous position.

How has the cost of AI development changed, and what does this mean for AGI?

The cost of AI development, particularly compute power, has decreased exponentially in recent years. This has made it feasible for more companies to pursue AGI. Additionally, companies like OpenAI and Google are offering AI capabilities for free, further reducing barriers to entry. This democratization of AI resources is accelerating the development and potential achievement of AGI.

Chapters
This chapter defines artificial intelligence (AI), artificial general intelligence (AGI), and artificial superintelligence (ASI). It highlights the differences between traditional AI and generative AI, emphasizing the democratizing effect of the latter. The chapter concludes by defining AGI's benchmark as the ability to perform any task at a human or higher level.
  • AI performs tasks requiring human intelligence.
  • Generative AI democratizes AI capabilities.
  • AGI performs a broad range of tasks at a human or higher level.
  • ASI surpasses human intelligence in all aspects.

Transcript


This is the Everyday AI Show, the everyday podcast where we simplify AI and bring its power to your fingertips. Listen daily for practical advice to boost your career, business, and everyday life. When will we achieve AGI, Artificial General Intelligence?

It's a question that I think about personally a lot, but I think it's something that we should be talking more about because I think the definition of both artificial intelligence and AGI is changing by the day. And I think that there's maybe one kind of secret thing holding us back from achieving AGI. All right, I'm going to be talking about that today and more on Everyday AI.

What's going on, y'all? My name is Jordan Wilson, and I'm the host, and Everyday AI is for you. It's a daily live stream podcast and free daily newsletter helping us all understand

AI, and who knows, maybe AGI, so we can grow our companies and grow our careers. So if that sounds like you, maybe you are brand new here. Thank you for joining us. Make sure to check out the podcast show notes for more related episodes and to get to our website. You need to get there. If you haven't already, go to youreverydayai.com, sign up for the free daily newsletter. So yeah, this is a podcast and live stream, but

We have a newsletter every single day recapping both the show and literally everything else you need to stay ahead of AI to grow your company and to grow your career. It is a free cheat sheet, so you should be going there. All right, so I'm excited today to talk about

when we will achieve AGI. So let's get into it, y'all. Let's talk about the big thing here. When will we achieve AGI? And I think one kind of secret relationship, one kind of fine print, that might be holding development back. All right. So

I'm super, super excited for today's conversation. So I'd love to hear from our live stream audience. Yeah. Hey, podcast audience, you know, you might get tired of, I don't know, hearing questions from the live stream audience. You might be like, oh, I wish I could get my questions answered. Well, join us. We do this live every single day at 7:30 a.m. Central Standard Time. So yeah, let me know, everyone, what are your questions on AGI? Do you think we're going to get there? What do you think is holding us back right now?

All right. So let's just go ahead and start at the end. I'm not going to make you wait. All right. But I think right now we are much, much, much, much closer to artificial general intelligence than most people think. All right. Don't worry. I'm going to get to what it means, the definitions and the differences between them. Don't worry. One of the reasons why I think we haven't quote unquote officially

achieved AGI or artificial general intelligence is because the goalposts are constantly moving. All right. I'm going to have a little bit, you know, we always bring receipts, y'all. It's Hot Take Tuesday as well. I should have called that out, y'all. So let me know, should I be middle of the road or should I really bring the heat? But here's what I think is holding us back. It's actually some of the fine print and some of the details

of OpenAI's partnership with Microsoft. So that's high level, y'all. And we're going to dive into it now. But I think if we were using old standards, if I'm being honest, I think we would already technically have achieved AGI, but the goalposts are always moving. I think as large language models are getting more advanced, I think the goalposts are moving on what AGI even means, artificial general intelligence. All right.

It looks like Michael said three flame emojis. So we'll see. Maybe we'll keep it tame. It looks like everyone wants a tame show today. That's fine. We can do that. All right. So let's go ahead and put some definitions out there. All right. Because, yeah, maybe if you are new to this whole AI scene, generative AI, maybe you're not sure. Maybe you don't know what AGI even is. So let's go ahead and define it. Okay. Yes, because the definitions...

constantly changing. So let's take a look at artificial intelligence, artificial general intelligence, and then artificial super intelligence. So let's start with AI. And let me just start by saying this, AI is not new.

Right. Artificial intelligence has been used in many different industries for decades. It actually goes back to the forties and fifties, and it's been widely used by many industries since the seventies and eighties. Artificial intelligence is not new. We've had machine learning. We've had kind of this deep learning phase as well. So artificial intelligence by itself is not new, but let's go ahead and define it.

These are my definitions too, all right? So artificial intelligence is when machines or software are designed to perform tasks that typically require human intelligence, such as recognizing images, translating languages, or making decisions. And I will say traditional, quote unquote, traditional artificial intelligence is really based on a set of rules, a set of algorithms, decision trees, right?

It's programmed. There's bits and bytes almost, right? Traditional artificial intelligence. So like I said, an easy example that's been around for decades, right? Is when banks are giving out loans and they have essentially algorithms, right? You go into an office, you probably give someone information, you fill out a form, they enter that form. And then there's an artificial intelligence program.

algorithm that says, okay, is this person really qualified for this loan or not? Maybe if it's a big loan, you might've filled out a ton of paperwork. So there's a ton of different pieces of data that you're essentially giving the bank. And then the bank uses artificial intelligence to essentially assign different values and to see if you are too risky for the loan. So AI has been around for a very long time. It's not new. And you've probably been exposed to
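The rule-based loan decision described above can be sketched in a few lines of code. To be clear, this is a toy illustration of "traditional," rules-based AI, not any real bank's system: every threshold, weight, and function name here is invented purely for illustration.

```python
# A toy illustration of "traditional" rule-based AI: a loan decision made
# from fixed, hand-written rules rather than a learned model. All
# thresholds and point values below are hypothetical.

def loan_risk_score(income: int, debt: int, credit_score: int) -> int:
    """Assign a score to an applicant using fixed, programmed rules."""
    score = 0
    if income >= 50_000:
        score += 2  # higher income lowers perceived risk
    if debt / max(income, 1) < 0.35:
        score += 2  # manageable debt-to-income ratio
    if credit_score >= 700:
        score += 3  # strong credit history
    return score

def approve_loan(income: int, debt: int, credit_score: int, threshold: int = 5) -> bool:
    """Approve only if the rule-based score clears a fixed threshold."""
    return loan_risk_score(income, debt, credit_score) >= threshold

print(approve_loan(income=60_000, debt=15_000, credit_score=720))  # True
print(approve_loan(income=30_000, debt=20_000, credit_score=600))  # False
```

The point of the sketch: every decision path is explicitly programmed in advance, which is exactly what distinguishes this older style of AI from generative models.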

to artificial intelligence even in your daily lives well before ChatGPT. All right. But obviously, I will say this. And I was at the NVIDIA GTC conference a couple months ago. And its CEO, NVIDIA CEO Jensen Huang, kind of said that generative AI and large language models

marks this new era of artificial intelligence. And I absolutely agree, kind of this generative AI, right? So generative AI is a little different than artificial intelligence, kind of, you know, quote unquote, old school. But, you know, generative AI, not to be confused with artificial general intelligence, is different. Generative AI, it brings...

it democratizes, right, AI for everyone. Because before, let's say 2020, right, yes, ChatGPT was, you know, released to the masses in November 2022. And I do think that's the turning point. But, you know, this generative AI technology was available through other providers. You know, OpenAI made their technology available to third-party developers back in 2020. So pre-2020,

I say that's traditional AI, right? Yeah, I'm slapping my own labels on this, right? Now we have generative AI. It is kind of this next advancement of artificial intelligence that lowers the learning curve to like zero, y'all. I mean, to take advantage of artificial intelligence pre-2020,

I mean, if I'm being honest, you either had to be a specialist working at one of these companies that had niche use cases for artificial intelligence, or you had to be a deep learning machine learning expert, right? You had to have a degree in artificial intelligence to essentially take advantage. So generative AI, so let's say, you know, the 2020 to, you know, now range, that

That brings AI to all of us, right, through large language models. And what generative AI is, simply put, is when anyone can simply speak or type to an AI system

and get a pretty impressive output, right? These large language models, these generative AI systems, you know, like ChatGPT, Google Gemini, Anthropic's Claude, you know, and then you look at more image or creative-based tools like Runway, like Midjourney, like DALL-E, Adobe Firefly, et cetera, right, where you can put in a simple text prompt

or speak in many cases and get something visual, right? You can get a photo, you can get a video, you can get an audio track, right? You can get a voiceover. So this new generative AI phase, not to be confused with AGI, has really democratized how all of us work and what we can all accomplish with artificial intelligence. But it is but a small footnote in the larger umbrella of artificial intelligence. All right, so now let's look at what

artificial general intelligence is. All right. So this is a type of AI that can understand, learn, and apply knowledge across a wide range of tasks, similar to how a human would, effectively performing any intellectual task. Okay. That's the difference. For the most part, AI, or generative AI, if you will, performs a more narrow,

Right. Not even going to get into narrow intelligence, but AI, generative AI, performs a narrow base of tasks or a narrow set of tasks. Right. Hey, recap this PDF. Hey, you know, write this blog post. Hey, create this image. Right. You're kind of working on one task at a time. And it's a task that kind of the AI system has clearly been trained on.

So AGI is slightly different. That is when a machine or a system or, you know, who knows a software, right? What shape AGI will eventually take, but that's when it can perform a broad range of tasks at the same or higher level than a human, right? Now let's talk about artificial super intelligence, right? We got to get the whole acronym soup here, y'all.

So artificial super intelligence, or ASI, is more of a hypothetical AI that surpasses human intelligence in all aspects, including creativity, problem solving, and emotional intelligence. ASI would be capable of self-improvement. That's a big thing. And it could potentially outperform humans in any cognitive task, leading to profound implications and potential risks. All right. I simplified it here.

Think of it like this. AI is like, oh, look at what the machines can do. AGI is, oh, the machines are way better at my job than me. And ASI is, oh, be fearful of the machines.

All right. That's a very oversimplified way, but that's the way that I think about it in my mind. Right. And yeah, we've been at this. I think we've been teetering. Right. If I'm being honest, we've been teetering on this AI, AGI line, I'd say for the past six months. Right. Between like, oh, look what the machines can do. And oh, is this.

AI actually better at my job than me, right? And I think that, again, if you are looking at narrow applications, I don't think there's any denying that with certain skill sets, AI is way better, way, way better than a single human, right? So let's just say if you are a single human and all you do is data analysis, right, all day, AI is way better

than the best human out there, right? But that's a single task. That is a narrow focus. That doesn't mean that you can talk with an AI system and it can literally do anything and everything that a human can do. That's kind of when we talk about AGI, that's kind of the benchmark, right? It can perform any task without the need to train it

without the need to necessarily show it examples, right? We're essentially with a zero shot prompt or with little training, with little upfront information, you could type or chat or interact with a system that automatically is going to outperform almost the best human on any task. I think we're getting close. I think we're getting a lot closer. All right. So now let's

And let me give a quick shout out to our partners again from Microsoft WorkLab. So why should you listen to the WorkLab podcast from Microsoft? Well, it explores the questions business leaders are asking. How can they guide their organization's AI adoption journey? How can AI help them maximize value and create new products and business models?

How should they help their teams reskill for this new era of work? And why is it important to be completely transparent about when and how you use AI? So find the answers on WorkLab. That's W-O-R-K-L-A-B, no spaces, available wherever you get your podcasts. Now let's talk about some of the reasons why I think development and the distance between where we are now and whatever that AGI finish line is.

is diminishing quickly, if I'm being honest. And I think one of the biggest reasons, actually, is now you have the tech titans of the world just racing toward AGI, literally. And let me just call this out. The concept of a company pursuing or contributing to artificial general intelligence, AGI, was straight up taboo, right? 10 years ago.

OpenAI was actually, in 2015, one of the first companies that was saying, like, hey, we're working toward AGI. I mean, at the time, they were a little-known startup. OpenAI was not what it is now back in 2015 to 2019. Very few people, even very few people who worked in the tech space, knew what OpenAI was unless you had a

a niche in or around AI or were particularly interested, you didn't know OpenAI or that this small startup was working toward artificial general intelligence. But now it's different. It's different, y'all. And let me just say this. Up until two-ish years ago, you did not have the largest companies in the world openly working toward AGI.

Like I said, 10 years ago, it would have been taboo for a big tech company to say this. It would have been controversial. But I think through this, we'll just call it this experiment over the last 18 months, maybe since ChatGPT was released, now companies and the US economy sees and understands the value of large language models and the value of businesses using AI top to bottom.

So not that it's become trendy to work toward or talk about AGI, but I think that the business world has opened its eyes, I think because they see the dollars, right? The more you talk about AI on your earnings calls, the higher your stock price goes, right? But now you have, not all, but four or five of the top seven largest companies in the U.S.,

openly either chasing AGI or openly collaborating toward achieving AGI. Like I said, this was straight up taboo 10 years ago, right? If a C-suite executive said, yeah, we're working toward AGI, it would have been a red flag.

Right? They would have, the board would have put out a memo. It would have been bad. Now it's almost the opposite. Now, if you're a, you know, a tech conglomerate and you're not openly working toward AGI, the board might just say, why, why not? This is something you should be doing. Whereas before it was, oh no, we shouldn't be doing that. When we achieve AGI, what happens to our jobs? Right? Let's look at the proof. Nvidia, they've been the largest company in the world. Now they're top three.

They provide the hardware and software and tools for AI development. They support AGI research through partnerships and infrastructure. And their CEO, I was literally in the room a couple of feet from Jensen Huang when he said this, he did say that we would achieve AGI within five years. Okay, let's keep going. Microsoft. Microsoft is collaborating with OpenAI on AGI development. And OpenAI has obviously been the leader in the pursuit of AGI.

Or one of the leaders, all right? I'll say they're probably some of the first, okay? And Microsoft sees AGI as the next step in AI evolution, and they're actively researching AGI technologies. Y'all, it's two of the three largest companies in the world. Although they're not, you know, that's not their mission. Achieving AGI is not their mission. They're supporting the very companies that do, and they are collaborating and they are researching it.

Meta, all right, here we go. Top six company in the US. Meta, formerly known as Facebook, right? They are actively working on AGI.

They have a new project just announced to build an open source AGI and their CEO, Mark Zuckerberg, views AGI as a key company goal, right? He's really shifted his focus from, you know, it was social media a while ago, you know, during Facebook's early days. Then it was the metaverse. And now there's been a very hard pivot over the last year and a half toward not just artificial intelligence, but artificial general intelligence. Okay. Yes. The CEO.

of one of the largest and most powerful companies in the world said achieving AGI is a key for his company. Would have been blasphemous 10 years ago, y'all. Anthropic, all right? So Anthropic believes that AGI is imminent within a couple of years. They estimated human level AGI by 2030. And even the chief of staff, this made big headlines a while back, even the chief of staff believes that AI might take her job within three years.

Yeah, the chief of staff, the person in charge of staffing at one of the most influential and powerful companies in the AI space, said that.

Then we obviously have OpenAI. They've been actively pursuing AGI since their inception. Part of their mission is achieving a safe AGI. So they believe AGI is inevitable and desirable, and they aim to develop AGI that outperforms humans on most tasks. And kind of their definition of it is a highly autonomous system that outperforms humans at most economically valuable work. All right.

More receipts, y'all. Let's talk about this. We're going to take a little bit of a historical view here and talk about why these AGI predictions keep changing, right? So I just mentioned there, Anthropic, OpenAI, Meta, NVIDIA, some of the either CEOs or leaders of these companies are saying, we're going to see AGI in a couple of years, maybe five years, but definitely by the end of the decade.

Which, if I'm being honest, can be a scary concept, right? Because it very quickly reshapes, AGI does, reshapes what humans are capable of and what machines are capable of. And that obviously has both swift and longstanding impacts to society, to business, to our worlds, right? I'm not out here...

you know, speaking in hyperbole, that's the truth. So let's talk about why these AGI predictions keep moving. All right. So a very kind of famous graph here, y'all. And hey, for our podcast audience, this is one of the ones where you're going to want to check the show notes and come back and maybe watch this on YouTube or LinkedIn. And you can leave a question too, and I'll do my best to answer it or tag someone that can. All right. But this is one of those charts I think you have to see.

But I'm going to do my best to describe it. So this is a chart from ARK Invest. Okay. And the title of this chart is "Expected years until a general artificial intelligence system becomes available." All right. So essentially these are predictions charted over time from leading experts. So these are averages, essentially how long until leading experts say that we will achieve AGI. Okay.

Okay. So if you look at before GPT-3, so like I said, the GPT-3 technology was actually introduced in 2020 and made available, right? I'd say most of the world didn't know about this technology until ChatGPT in 2022. So if we look at pre-2020, the average expert said it would be at least 80 years. Okay.

Let me repeat this. This is five years ago. The average expert said it would be 80 years until we achieved AGI. Okay. And then this graph here from ARK Invest kind of plots different key milestones and then how those milestones seemingly change.

impact these experts, right? Because ARK Invest, I believe, was doing these studies on an ongoing basis and plotting them year by year, or it looks like actually multiple times a year. All right. So then we look: when GPT-3 was announced, that 80 years went to 50 years. Okay. And then Google kind of got on board, right? With its LaMDA

models. All right. And then it went, the average, from 34 years to 18 years. All right. Then ChatGPT launched, shortly followed by the GPT-4 launch, the premium and more capable version. And you know, I'm going to be interested to see the next time this chart is updated. So essentially in late 2023, now we're at eight years. So again,

post-ChatGPT, post-GPT-4, the average expert forecast for when we will officially achieve AGI is now a little under eight years. Okay. So again, let's talk about this. In five years, y'all, in five years, the smartest people in the world, the biggest, brightest experts, in five years they went from saying we are 80 years from AGI

To now we are eight years. All right. And if I were to guess, I would say within a year, the average on this chart will be three years or less.
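The shrinking forecasts just described can be tabulated with some quick arithmetic. A caveat: the year-and-horizon pairs below are my approximate readings of the ARK Invest chart as recounted in this episode, not the chart's exact data points, so treat them as illustrative only.

```python
# Approximate (survey year, predicted years until AGI) pairs, as described
# for the ARK Invest chart in this episode. Rough illustrations, not exact
# chart data.
surveys = [
    (2019, 80),  # pre-GPT-3
    (2020, 50),  # after GPT-3 was announced
    (2021, 34),  # around Google's LaMDA work
    (2022, 18),  # just before ChatGPT
    (2023, 8),   # post-ChatGPT / GPT-4
]

# The implied calendar year for AGI is simply survey year + horizon.
for year, horizon in surveys:
    print(f"{year}: {horizon} years out -> implied AGI around {year + horizon}")
```

Run it and the implied arrival date collapses from roughly 2099 down to roughly 2031 in just five years of surveys, which is the forecast error the episode extrapolates toward the end of the decade.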

All right. And then you kind of see, you know, if this forecast error continues, that would plot this at about the end of 2026 or early 2027. So in less than three years. Or, if the forecasts continue as they are now, essentially it's the end of the decade, right? We bring receipts here, y'all. We bring receipts. But I will also say so many experts don't understand how

They don't understand generative AI. They don't understand large language models. They are, you know, future-casting as futurists. And, you know, they're talking about it in theory. Well, maybe at this point I'm an expert, right? I put out hundreds of podcasts and live streams, thousands of hours of content, talking to some of the smartest people in AI in the world and bringing them to you all as well, where you can ask them questions and learn from them.

I'm surprised sometimes, right? When I read reports, right? I read people who write these papers on AGI and AI, and I'm like, these people have no clue what they're talking about. I think even the average quote unquote expert is disastrously misinformed or ill-informed when it comes to AGI and AI development. And I think this chart proves that the quote unquote smartest experts in the world

are laughably underestimating the power of artificial intelligence and the pace of its development. I think it is going faster than we understand. All right, y'all. And hey, if you do have questions, let me know. John's asking if you think we can get Sam or Jensen on the show. We'll see. Maybe in the future here. We'll see. That would be great. Yeah, Brian, same. Yeah, if I see one more AI bubble article.

I did a full rant on AI is not in a bubble. All right. So let's keep going here. We're not going to keep this one going forever. This isn't going to be an accidental two-hour episode, but I want to talk again a little bit more about this concept of the goalposts constantly moving. And you know, we bring receipts, y'all. So I went back in the archives, right? So I was reading articles from as far back as Google's caches could find.

So I was reading articles from 30 years ago. Yeah, some of the earlier ones. The website design was good. So reading articles from 30 years ago on what is AGI. There weren't a lot of them, but I read many of them from 30 years ago, 20 years ago, 10 years ago, because I wanted to see something.

I wanted to see something. Has the very definition of what artificial general intelligence is, has it changed? Has it changed as AI becomes more powerful? Are we intentionally moving the goalposts back? Maybe because, I don't know, maybe because once you achieve AGI, it just makes things weird. So instead of saying, okay, hey, it looks like by most definitions, maybe we're there. Instead of saying that, we just keep moving the goalposts and saying, oh, okay, well,

Maybe it means something else, all right? So this was from the Machine Intelligence Research Institute. And this article was from about 10 years ago, all right? And here's essentially a shortened version of what their definition was 10 years ago. So I'm summarizing here, but it said, artificial general intelligence is the ability of a system to learn and solve problems across different areas, much like a human.

AGI can achieve complex goals in various situations while using limited resources. AGI can also apply knowledge from one domain to another rather than just being good at specific tasks. Weird.

I don't know, at least according to this article, which seems like one of the most prominent articles or pieces of research around AGI from 10 years ago. I don't know. If I'm looking at these bullet points, I'm like, yeah, we're there, right? Yeah, we are. All right. Here's another one. This one was from about 11 years ago. Sam Altman. Maybe you've heard of him. He used to blog a lot. All right. So this article,

Sam Altman essentially said, all right, I'm paraphrasing here, but we'll put it in the newsletter so you can go read it for yourself. But he essentially kind of defined artificial general intelligence as something that possesses the ability to understand, learn, and apply knowledge across a wide range of tasks, similar to human cognitive abilities. Aren't we kind of there, right? So the goalposts keep moving. So it's like, have we achieved AGI? I don't know.

Seems like we keep changing what AGI is because at least according to some of these things from 10 years ago, right? You can look at these and be like, yeah, we even, yeah, we're here.

Obviously, as researchers learn more, they start to set maybe more detailed and specific, I guess, boundaries or milestones on what achieving AGI even is and what it means. Sure, I get that. You know, 10, 20 years ago, I think...

AGI was actually, for the most part, a little more theoretical, right? And then as we actually get closer and closer to potentially achieving artificial general intelligence, then we start to set stricter guidelines on what are the hurdles to clear in order to officially say that we've gotten there. All right. We got two more points here, y'all. And these are big ones. So here's, I think, one of the kind of secret reasons

that may be why we haven't technically achieved AGI. And I think it's actually this OpenAI and Microsoft partnership. Okay. So I'll give you the super, super high-level overview here. So this partnership between Microsoft and OpenAI: reportedly, Microsoft invested about $13 billion and holds a 49% ownership stake in OpenAI. However,

Once the OpenAI board says that it has achieved AGI, that agreement changes. All right. And keep in mind that Microsoft did previously have a seat on the board at OpenAI, right? Huh. Interesting there. It no longer does. All right. So interesting, right? Let's keep going. So essentially, Microsoft doesn't have any stake in

in OpenAI's future AGI tech, right? So again, I'm sure they have legal documents, right, that are, I'm sure, hundreds of pages long that aren't publicly available. But what is publicly available is essentially future AGI technology from OpenAI. Microsoft does not have a stake in that, right? So that is when their current partnership

could start to change. And I guess it obviously depends on what's in those documents that the rest of the public does not really have access to. But my question is, how do they separate it? Because my outsider's viewpoint, right? I would consider myself a very informed outsider, but I'm an outsider nonetheless. So take my hot take here on Hot Take Tuesday with a grain of salt, right? But I feel...

You know, and I believe this investment was made about five years ago. I think at the time, neither company could foresee what this partnership would mean and how big artificial intelligence and large language models, how pivotal they would become in our day-to-day operations.

I guess maybe, best case scenario, when Microsoft made this large investment, they said, this is going to power the future of our operating system. Or maybe they were just taking an expensive flyer on very promising technology. But think now: if you're a power Windows user, you're probably using Microsoft 365 Copilot. Maybe your organization

is dependent on this Copilot technology that is baked into the operating system of Windows, which is, guess what? Powered by OpenAI's GPT-4o technology. So I don't know if, when this agreement was originally struck, either company could have seen into the future and fully understood how important this partnership would be, not just to

both of their respective companies, but to the business world and to the U.S. economy. I don't think people understand. And I've gone over this in depth; I'm not going to go over it again. Essentially, I said, hey, if you're a knowledge worker out there in the U.S., you may have no clue, but you are constantly using OpenAI's technology. You just don't know it. In your daily life, in your personal life, in your business life, your company, the software you use, everything,

whether you know it or not, is probably somehow powered by OpenAI. But also, if OpenAI's board says, yes, we have achieved AGI, we have the technology, how do they separate it, right? OpenAI made a pretty big and, I think, smart move when they came out with GPT-4o, where the O means omni, and essentially everything is being done under the hood by a single model, right? And presumably,

OpenAI is working very hard to make that singular model more and more powerful and to give it, in theory, capabilities that would reflect artificial general intelligence. So how do they separate it, right? The partnership is, I would guess, equally important to Microsoft and to OpenAI. So what would be OpenAI's...

reason to come out and say, yes, we have achieved AGI? Therefore, we must somehow restructure our technology. We must separate our technology. We must somehow dilute, or start to dissolve, or rework our partnership with Microsoft. What incentive do they have to say that, to do that, right? And also,

achieving AGI has been mission critical to OpenAI, and they've raised billions of dollars on that mission. I don't know. Maybe I'm speculating here because it's Hot Take Tuesday. But say I'm an investor looking to potentially invest hundreds of millions or billions of dollars into OpenAI, right? There have been reports that Sam Altman is looking to raise up to $7 trillion for future projects centered around intelligence, centered around compute power, etc.

I don't know how eager I'd be if said company, OpenAI, has, quote unquote, achieved its mission. If one of its founding missions is to achieve AGI and the board comes out and says, yes, we achieved AGI, doesn't that make fundraising exponentially harder for Sam Altman and OpenAI? I'd say yes. So what incentive is there, right?

Because if OpenAI comes out and says, yes, we've achieved AGI, it's great, right? Big box checked off. Huge step for humanity. Huge step for technology. Huge step for the future of business. But what does OpenAI actually have to gain? I don't know. I'd say they might have more to lose by saying they've officially achieved AGI, right? So I think, and I wouldn't blame them, OpenAI continues to move the goalposts.

Because OpenAI, I think, also loses in some way, shape, or form. Again, I could be ill-informed, because I don't have access to the documents, and the public doesn't either. But it is clear that the partnership changes. The OpenAI-Microsoft partnership changes. So yes, OpenAI has greatly benefited from the funding and the support and the architecture that Microsoft has provided. But at the same time, I think OpenAI

is benefiting in a huge way as well. If I'm an OpenAI executive, I don't want to lose that partnership. Think of how many insights they stand to gain, right, when they are presumably getting reports back from Copilot users to improve the GPT-4o technology. Some of the most important information for a startup is essentially when we humans tell them what's a good output and what's not.

That saves them years of development time and probably billions of dollars in the end. Okay. So we have to think about that. All right. Let's wrap this thing up, shall we? So why are we closer than ever? Why are we closer than ever to AGI? Well, like I talked about, it has been unprecedented up until this point

that some of the largest companies in the United States are openly working toward achieving AGI, or actively and openly supporting the companies that are. Like I said, a decade, 15, 20 years ago, it would have been straight-up taboo as a big company, as a public company, to say, yeah, we're working toward AGI. People would have called you an apocalyptic crazy.

And your stock, if you were a public company, would have gone in the toilet. Now, if you're a big tech company like Amazon, like Meta, like Microsoft, you almost have to be either explicitly and outwardly working toward AGI or indirectly working toward it, because I think the dollars have started to make sense of what AGI means to business and to the U.S. economy.

Someone write that down. That was good, right? Yeah, this is unscripted and unedited, aside from some bullet points I put up on the screen. All right, so that's number one. Also, two of the most powerful AI startups in the world, OpenAI and Anthropic, are either directly working toward it or indirectly supporting it.

And then last but not least, y'all, and this was kind of referenced at the beginning of the show when we talked about Google Gemini giving away 1.5 billion tokens per day to developers. That's wild, right? And then also OpenAI similarly

said, hey, GPT-4o mini is free. They announced this more than a month ago. They said it is free to fine-tune through essentially the end of September. So these companies have essentially been giving away compute. They've been giving away this technology, this inference power. They've been giving away intelligence for free.

It is now to the point where there's this saying in the AI community: intelligence too cheap to meter. One of the biggest obstacles to achieving AGI 5, 10, 15, 20 years ago was the cost. It was the cost and the availability. If you were a tech company 10 years ago, or if you were a scrappy startup 10 years ago, and you wanted to work toward AGI, it would have been nearly impossible.

Not so much right now. If you want to give AGI-esque capabilities to your business, right? It is essentially free to do right now. Y'all, 10, 15, 20 years ago, you had to have millions or billions of dollars. It's free now,

right? For businesses at least, right? Obviously, for these companies this is a cost they take on to bring people into their platforms and keep them there, and there's a race for this. But also, even for these companies, the cost of compute has gone down exponentially, right? Essentially GPU chips. These chips that big companies need to create next-generation AI

used to be much more expensive and much less powerful 10 years ago. The cost of compute is going down exponentially. Most estimates say that even in the last couple of years, it's gone down tenfold. Tenfold. Same thing with cloud computing: tenfold in a matter of years. So the cost of intelligence, of GPUs, of getting to AGI has gone down

exponentially. And in the last, geez, just the last couple of months, it has turned into a sprint toward giving companies intelligence, giving them compute, essentially for free. All right. We are in unprecedented times, at least when it comes to AI development and if and when AGI is possible. All right. I'll wrap it up by saying this, y'all: I think we're going to be there very soon.

I think we are going to achieve AGI very soon. I just gave you all the receipts. If we're looking at how AGI was commonly defined 10, 15, 20 years ago, we've already achieved it. By today's definition, I think we'll be there in less than five years. It's a given. Will that definition continue to move? I would say so.

But y'all, whether you are just an individual who's interested in AI, a business leader, or maybe someone who's working in the AI space, AGI is inevitable, right? You saw it on that chart: five years ago, they said we were 80 years out. Now they're saying eight.

We've gone from an 80-year projection to an eight-year projection. And like I said, I think the next time these charts are updated, it's going to be like three years. We are going to get there very soon. All right. So what does that mean for the future of work? What does that mean for the future of business? What does that mean for your future career? I don't have those answers yet, but that's why we're going to be here every day helping you figure that out, bringing on

experts from across the world, from all these big tech companies. We're giving you examples of how companies large and small are growing with generative AI, how people are changing the trajectory of their careers. Real use cases. That is what Everyday AI is all about. All right. So when will we achieve AGI? I'd say sooner than we might think. Like I said, by old definitions, I think we're already there. But either way, I think it's going to be a matter of years.

All right. I hope this was helpful, y'all. If so, please repost this, share this with someone in your network. I don't know. Maybe if you do, I'll send you even more spicy takes that were too spicy for the show. All right. So if this was helpful, please let us know. If you're listening on the podcast, please click that follow button. Leave us a rating if this is helpful. We put so much time, energy, and effort into cutting through all the nonsense and giving you what actually matters. Please join us tomorrow and every day for more Everyday AI. Thanks, y'all.

And that's a wrap for today's edition of Everyday AI. Thanks for joining us. If you enjoyed this episode, please subscribe and leave us a rating. It helps keep us going. For a little more AI magic, visit youreverydayai.com and sign up to our daily newsletter so you don't get left behind. Go break some barriers and we'll see you next time.