"AI Is About To Make You Irrelevant" (How To Get Ahead & Future-Proof Yourself)

2025/1/5

The Koe Cast

People
Dan Koe
Topics
Dan Koe: The video explores how the rapid development of artificial intelligence, and AGI (artificial general intelligence) in particular, will affect the future of human work and life. The author argues that while AI's progress may eliminate some jobs, what makes humans unique is creativity and agency. From a cybernetics perspective, he explains the difference between AI and AGI: current AI lacks agency, whereas AGI can discover and pursue unknown goals. Human significance, he argues, lies in creating knowledge, which happens through conjecture, criticism, and trial and error. Humans can understand and create anything consistent with the laws of nature, and can improve themselves along five dimensions: computation, transformation, variation, selection, and attention. Citing David Deutsch, he holds that problems are infinite and that solving problems is a source of human happiness; the arrival of AGI may speed up problem-solving, but it won't change the infinite nature of problems. The author also stresses the human capacities for transformation, variation, selection, and attention, noting that humans can shift perspectives to solve problems, while clinging to a particular ideology limits thinking and creativity. Facing the changes AI brings, becoming a creator is the best strategy: proactively creating value rather than merely producing content. Being a creator is a way of life that means continuous learning, constant improvement, and always maintaining agency.

Deep Dive

Key Insights

What is the significance of OpenAI's new model O3 and its performance on the Arc AGI benchmark?

OpenAI's new model O3 scored 87.5% accuracy on the Arc AGI benchmark, a significant improvement over previous models. This performance indicates exponential progress in AI capabilities, sparking widespread discussions about the future of jobs and human relevance.

What is the difference between AI and AGI, and why is AGI considered more advanced?

AI in its current form, sometimes called ANI (Artificial Narrow Intelligence), is task-specific and requires human input to assign goals and context. AGI (Artificial General Intelligence), on the other hand, is capable of discovering unknown goals and pursuing them autonomously, making it a self-governing system. AGI is considered more advanced because it mimics human-like general intelligence.

Why is creativity considered essential for human survival and happiness?

Creativity is essential because it allows humans to solve problems, innovate, and create meaning in life. Without creativity, there is no progress, purpose, or happiness. It is the foundation of knowledge creation and adaptation to new challenges.

What is the concept of 'cybernetics,' and how does it relate to AI?

Cybernetics, derived from the Greek word for 'helmsman,' refers to self-regulating systems that error-correct toward a goal. It is the foundation of intelligent systems, including AI. AI operates as a cybernetic system but requires external governance, unlike AGI, which can self-govern.

What is the role of 'agency' in distinguishing humans from AI?

Agency refers to the ability to set and pursue goals autonomously. Humans possess high agency, allowing them to navigate the unknown and create new goals. AI lacks agency and must be assigned goals, making it a tool rather than a self-directed entity.

What is the 'hard problem of consciousness,' and why is it relevant to AGI?

The 'hard problem of consciousness' refers to the challenge of understanding how subjective experiences arise from physical processes. It is relevant to AGI because creativity and consciousness are intertwined, and without solving this problem, replicating human-like creativity in machines remains elusive.

What is the significance of 'universal explainers' and 'universal constructors' in human intelligence?

Humans are considered 'universal explainers' and 'universal constructors' because they can understand and create anything within the laws of nature. This ability to generate knowledge and build complex systems, like rockets, sets humans apart from AI and underscores their unique role in the universe.

Why is the concept of 'holons' important in understanding reality and AGI?

Holons are 'whole parts' that exist as both independent entities and components of larger systems. This concept helps explain the interconnectedness of reality, from atoms to galaxies. Understanding holons is crucial for AGI development, as it highlights the complexity of building systems that mimic human creativity and consciousness.

What is the role of 'error correction' in achieving goals, and how does it apply to humans and AI?

Error correction is the process of adjusting actions to achieve a desired goal. Humans and AI both use error correction, but humans can discover new goals and adapt, while AI requires predefined goals. This ability to navigate the unknown is a key distinction between humans and AI.

What is the solution to future-proofing oneself in the age of AI and AGI?

The solution is to become a high-agency creator, focusing on value creation rather than specialization. By embracing creativity, problem-solving, and adaptability, individuals can thrive in a rapidly changing world. This involves continuous learning, innovation, and the willingness to navigate the unknown.

Chapters
This chapter explores the capabilities of AI and AGI, discussing the differences between them and addressing concerns about job displacement. It emphasizes the importance of human agency and creativity in navigating the future of work.
  • OpenAI's O3 model achieved 87.5% accuracy on the Arc AGI benchmark.
  • AI lacks agency, requiring human input to define goals.
  • AGI, possessing self-directed goal discovery, represents a potential paradigm shift.

Shownotes Transcript

Okay, we need to talk about this. Last week, OpenAI announced their new model O3, and tech Twitter went absolutely insane. There's a lot of talk about it online, and if you don't know how to make sense of it all, it can leave you feeling pretty uncertain about the future. So what's so special about O3? First off, it scored 87.5% accuracy on the Arc AGI benchmark, which is a significant improvement over previous models. And as you can see from the graph, it's

going exponential. Now, without the model even being out yet, there's been even more of the same talk online, which is that jobs won't exist in two to five years. There are different time ranges people throw out. Could be two years, could be five years, could be 10 years. It depends on how you view the situation, because

We don't know the future. Now, the main questions I'm interested in are: will we be considered insects to our AGI overlords? If AGI can do everything humans can and more, what do we do? If jobs won't exist, what do we focus on if we want to thrive? So I've been studying this for a few years now, trying to make sense of it in my own head. And of course, you get flooded with a bunch of different perspectives. So I've collected the ones that I believe are

one, the most optimistic, and two, the ones that just make the most sense. So I hope that I can give you a positive viewpoint on this and allow you to make better decisions going into the future, because right now your behavior matters more than ever. Sitting around and doing nothing is going to lead exactly where it would have led anyway. So if you're doing nothing, then please do something. But this is for those that want to do something.

Now there is a difference between AI and AGI. AI is the thing that we have right now. Some people call it ANI, which is artificial narrow intelligence, and AGI is artificial general intelligence, which depending on the definition means that it is on par with human capabilities. It can do the same things that humans can do. So to understand AI,

First, we need to understand the origin of that term if we want to understand what it means for us. Before the term AI, there was the term cybernetics, popularized by Norbert Wiener in 1948 in his book Cybernetics. The term comes from the ancient Greek for helmsman, another word for governor. And it's the idea of automatic, self-regulating control in a system.

Acting, sensing, and comparing to a goal is a fundamental loop to intelligent systems. Now his key insight was that the world should be understood in terms of information, that complex systems like organisms, brains, and societies error correct toward a goal, and if those feedback loops break down, the system breaks down. In other words...

entropy. So the simplest way to understand cybernetics is this: imagine you are the captain of a ship heading toward a lighthouse. You're trying to get to the lighthouse, and the wind blows you off course to the left. So you steer right, and then you veer too far right. So you steer left, and you error correct

to get to the specific goal. If you're an unintelligent system, or an unintelligent person, you're going to get blown off to the left, keep going left, and never reach the lighthouse. We talked about this in my video on how to become more intelligent than 99% of people, and cybernetics alone is illustrative of Naval's quote: the only real test of intelligence is if you get what you want out of life.
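To make that loop concrete, here is a minimal sketch of the sense-compare-act cycle in Python. The gain, wind strength, and step count are invented for illustration; only the error-correction idea itself comes from the episode.

```python
import random

def steer_toward(goal: float, position: float = 0.0, gain: float = 0.5) -> float:
    """Minimal cybernetic loop: sense position, compare it to the goal, act to shrink the error."""
    for step in range(20):
        error = goal - position            # compare: how far off course are we?
        correction = gain * error          # act: steer proportionally to the error
        wind = random.uniform(-1.0, 1.0)   # disturbance blowing the ship off course
        position += correction + wind      # sense the new position on the next pass
        print(f"step {step:2d}: position={position:6.2f}  error={error:6.2f}")
    return position

steer_toward(goal=10.0)
```

Drop the correction term and the ship just drifts wherever the wind pushes it, which is the "unintelligent system" case from the lighthouse example.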

So as we'll discuss, error correcting toward a goal is really just how you get what you want out of life. The problem is the difference between AI and AGI: AI isn't the governor of its own system. You still have to assign it the context, the goal, everything else, in order for it to do what it does. The main mark of AGI, in my mind, is the ability to discover unknown goals and pursue those goals through error correction. In other words, AGI is the governor of its own system.

Now here's the important part. Two years after Wiener's introduction to cybernetics, he published "The Human Use of Human Beings." This book is now out of print, but the central idea relevant to today's world is that we must cease to kiss the whip that lashes us. Wiener knew the danger was not in machines becoming more like humans,

but humans being treated like machines. Does that sound familiar? So this is what AI is. It's a cybernetic system that needs a governor. It needs to be assigned the goal. So what we know as AI today in its current manifestation, which is pretty much chat apps like ChatGPT and

agents and swarms, which are becoming a thing; they're just not very common or in the public eye yet. But for the general use case most people have, which is to search for something they could have found online in maybe 10 more seconds if they understood how to Google, there isn't much of a difference between what AI does now and what you could already do in the past. And if you don't understand how to Google information, which is a skill in itself, then you won't know how to get the right information out of ChatGPT either.

But these chat apps and AI right now lack one crucial trait, which is agency. If you haven't watched the Devin Erickson podcast that did extremely well and was very popular a few videos ago on my channel, I would encourage you to watch that to learn about agency and the importance of it. AI is a specialist that needs a generalist.

A tool that needs a master. AI is useful for achieving a goal it is assigned, like playing chess or Go. As long as AI must be tested on, or assigned, a goal to determine its intelligence, it is not even close to human intelligence. The one thing I'm seeing right now that is making people kind of freak out is that AI is starting to learn how to error correct toward different goals. But if it's still a program that is told

to discover unknown goals, then that's still an assigned goal, and it is still not the governor of its own actions. The system is not complete. It's not universal; it's still incomplete. Now, if you want to know what I mean by this and how powerful AI can actually be, I'm not against AI. I actually think it's

quite powerful in my own workflow, especially if you know how to use it. In Cortex, our writing and note-taking app, Cortex AI will be going in by the end of January. So you can try that out if you just want to see cool ways to use AI, because I think we've cracked the code for social media content, newsletters, etc. The content that Cortex AI spits out with our prompt engineering is better than any I've seen on the market.

But that's the exact problem. Most people have been trained to be specialist tools, not innovative humans in control of their journey in the unknown. It's no wonder they're scared of replacement. They should be. Since the day they were born, they've been assigned goals and have error corrected to fit the mold their parents wanted them to fit in. Go to school, get a job, retire at 65. Entrepreneurship? Yeah, right.

Save your money, get good grades, and listen to authority. Take the safe route. Stay subservient to the dominant system. And before you know it, your mind will scratch and claw its way back to the comfortable life it was sold. You may have the occasional desire to change, the depths of your soul crying for you to break free, but the code in your head is so powerful that it quickly squashes that feature, mistaking it for a bug. Now, do you see why I talk about this in almost every video, whether it's related to AI or not? Just human agency and...

rejecting the conventional path. It may sound like I'm beating a dead horse at this point, but specialist conditioning is arguably the greatest destroyer of humanity. It's the destroyer of creativity, and without creativity, there's no life. There's no meaning. There's no happiness, because happiness comes from solving problems.

So the question is, what's the solution? We'll get to that, but we have a bit more of this story. So around the 1960s, a new perception of technology started to emerge. The perception was that by inventing computers, a universal computer, we had externalized our central nervous system.

We'd externalized our brains or minds or intelligence so that we're now operating within one collective mind. We have all potential information at our fingertips. And now if you want to get spiritual or philosophical about this with idealism or consciousness, I think it's a fun mental exercise to do, but we're not here to dig into that today. So unfortunately, we don't hear much about cybernetics today. And it's interesting because this new perception of technology fueled poor incentives. In other words,

People wanted to profit off of this. And one person in particular, John McCarthy, didn't like Norbert Wiener, the man behind cybernetics, so he coined the term artificial intelligence and became a founding father of what we know today, which I feel is kind of messed up. I like the term cybernetics more. But now let's talk about the good stuff. Because with the meteoric rise in talk about intelligent machines,

It's kind of left us wondering if we're significant or not. That's been a question for a long time. Is there something significant about being human? And for being the only species that has made it to the moon or built rockets or done what we've done, there has to be something there, right?

Now I've been studying David Deutsch a lot recently. I feel like his teachings in philosophy are very relevant to what's happening right now. And David was influenced by Karl Popper, and both of them believe there is something significant about being human. And our significance lies in our ability to create knowledge. He considers us universal explainers and universal constructors. These are certain types of beings in reality.

So it starts with the need for creativity, the process by which all knowledge is created: through conjecture and criticism, trial and error, variation and selection in Darwinian terms. In other words, guessing and correcting your guess is how you accomplish anything you set your mind to. This is how we learn, innovate, make progress, and understand almost anything in the universe. The difference is that a human can set their mind on anything,

not just the goal it was assigned. We can discover new goals that shape our perception of opportunities, allowing our minds to error correct toward them. Along with being universal explainers is the fact that we are capable of understanding anything within the laws of nature, or the laws of physics. We create explanatory theories that reveal the deep structure of reality, allowing us to guess and predict in a more efficient way that breeds faster progress with time. Now, I don't know if this is exactly how Deutsch

thinks about it. But in his book The Beginning of Infinity, he doesn't reject reductionism and holism so much as propose a better option. Reductionism is breaking things down into parts and seeing the parts as significant. Holism, on the other hand, is thinking in wholes and seeing the wholes as more significant. But with David Deutsch, and someone like Ken Wilber with his integral theory, it's about both. It's

a term David probably wouldn't agree with: holons. Arthur Koestler coined the term holon, which stands for whole part. And his insight was that everything in reality is a holon. It is a whole part: atoms to molecules to cells to organisms, and so on and so forth, spanning everywhere.

Or like me, to this room, to this apartment complex, to the city, to this country, to the world, to the cosmos, et cetera. Each thing is a part within a whole and a whole within itself. And these parts and wholes emerge, transcending and including all the ones below them. So the insight here is that we can understand any whole, and we can understand the parts within the whole. And if we can understand the parts within a whole and the whole itself, then we can build that thing.
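As a rough data-structure analogy (my framing, not Koestler's or Deutsch's), a holon can be sketched as a node that is simultaneously a whole containing parts and a part of a larger whole:

```python
from dataclasses import dataclass, field

@dataclass
class Holon:
    """A 'whole part': a whole with respect to its parts, a part with respect to its container."""
    name: str
    parts: list["Holon"] = field(default_factory=list)

    def add(self, part: "Holon") -> "Holon":
        self.parts.append(part)
        return self  # returning self lets us chain whole-building calls

    def lineage(self, target: str, path=()) -> tuple:
        """Walk down the hierarchy, recording each whole the target is a part of."""
        path = path + (self.name,)
        if self.name == target:
            return path
        for part in self.parts:
            found = part.lineage(target, path)
            if found:
                return found
        return ()

# each level is a whole within a larger whole
cosmos = Holon("cosmos").add(Holon("organism").add(Holon("cell").add(Holon("molecule").add(Holon("atom")))))
print(" > ".join(cosmos.lineage("atom")))  # cosmos > organism > cell > molecule > atom
```

The same node answers to both descriptions, whole and part, which is the whole point of the term.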

Now, the big problem with AGI is that we don't know what creativity is yet. We haven't cracked the hard problem of consciousness. We don't understand the parts and the whole of that thing; therefore, we cannot build it. And it's very unlikely that we're just going to mash things together in a way that isn't consistent with how they're built and expect creativity to emerge spontaneously.

It is in us. This knowledge allows us to understand things we've never directly experienced, like stars and galaxies. We can understand a rocket even if we've never built one. And if we can understand it, we can eventually build it if we have the knowledge to do so. There is a logical sequence of steps or order of operations, and each step requires you to have the knowledge for that step.

In your practical life, when you can't achieve something, it's for the simple reason that you either don't understand it or don't have the knowledge to achieve it. For high agency humans, this is liberating. For low agency machines, this is blasphemy. And this is what many people get wrong about AGI. AI, or artificial intelligence, is an incomplete system that must be assigned a goal, like many animals, or like low agency employees are programmed to be.

AGI, or artificial general intelligence, is a complete or universal system, like a human who is not limited to a small subset of things that are possible. AGI may have more computational power or memory, but there's no concept that it can understand that we can't ourselves. And that doesn't rule out the fact

that we can use universal computers or augment ourselves with more computation and memory. The point is that you can achieve anything within the realm of possibility, but only if you have the knowledge to do so. You are not doomed to the default path of society or the rule of AGI, because if AGI is AGI, you are AGI. You're...

Practically the same thing. One just has more computational power. But computation isn't the only difference slash similarity. So we need to understand the five human capabilities and if AGI can become on par with those or surpass those so we understand our place in the world.

Now, the thing with all of this, with AGI and even ASI, which is artificial superintelligence, is that people can't settle on a definition for it. So I'm not going to talk about ASI here because, frankly, I don't know and absolutely nobody knows whether it's going to be a thing or if it can even become a thing or when it's going to become a thing. For AGI, that's kind of within reach and it is possible. So the questions here are, are human capabilities actually limited?

Do we not have the capability to learn and do anything that AGI could do? Ultimately, are there any limits on what we can think and how we think? Mainly, we need to pay attention to computation, transformation, variation, selection and attention, as noted by AGI researcher Carlos de la Guardia. He's also influenced by Deutsch and you can find him on YouTube. It's AGI with Carlos. So all of these ideas are kind of a culmination of

his, Deutsch's, Popper's, Darwin's, and others'. The first is computation. And the question is, is there any limit to what we can compute? The answer is no, because once you have a universal computer in your hands, it's just a matter of time and memory to compute anything.

So if AGIs have that, then they have as much computational power as us and therefore no advantage over us. And even further, if we can augment our brains, which I also don't see outside of the realm of possibility and something that we can build, then we will remain on par with AGI as it accelerates. And I think once AGI becomes a thing, we can't...

tell right now; we don't know. But once it does become a thing, I think we're going to be pretty enlightened as to how underwhelming it seems as it's happening, as things do throughout your life. There's one major spark, then: oh, OK, that happened. Now it's normal life. Our minds adapt to these things. And one thing that I haven't touched on just yet is

David Deutsch's belief that problems are infinite, that there will always be problems to solve, and that problems are soluble. And if happiness stems from solving the right problems, then creating is our source of happiness. Problems are infinite, and knowledge creation is infinite in that respect. So when AGI becomes a thing, things may speed up, things may go exponential. But when you zoom out on the graph

over the course of 50 years, that exponential curve may just look like the normal line that we're going through right now. That happens all the time. It depends on how far you zoom in and out of history to see what's going on. When you zoom super far in right now, yeah, it's an exponential curve and it's scary. But once that thing actually comes along and problems can be solved faster, we're only going to be exposed to a new set of problems, and only then can we decide

what we're going to do and what our role or job here is. Now, after computation is transformation. And this is one that I personally find fascinating. So transformation is creation. We can turn raw materials into rockets given the right knowledge. And the thing with this is that human hands and bodies seem to be particularly good at transforming things.

because they can perform any sequence of operations. Even if we can't do it, we can build the thing that does it. Humans are generalists that build tools to survive in any environment. Nature is a very harsh place. If we didn't have jackets...

or shelter to live in a freezing environment, we would die within three hours. People see nature as this hospitable and sacred thing. And I'm sure there is hospitality and sacredness in it. There is. It's the biosphere. But if that thing gets wiped out and we don't build another one, then the noosphere doesn't have any ground to stand on. Holons, right? Parts and wholes. If the parts get taken out from under the whole, then the whole ceases to exist. So human life ceases to exist if the

biosphere ceases to exist. That was a tangent, but the point is that we've built rocket ships and telescopes. We can build the thing that builds the thing. So the question here is, with transformation, is there a limit to what these basic operations can do when strung together in the right way? And again, the answer is no. Even if humans could only teleoperate a gorilla or a squirrel, as Carlos puts it, there would still be a sequence of steps to build a rocket or a telescope.

Now, it's not that a squirrel in particular can build a rocket, but imagine if Elon Musk were the person teleoperating that squirrel. He has the knowledge to do it. He could find a way to build the rocket, and not with just the squirrel. What would Elon Musk do, right? He'd get the team. He'd get the materials. He'd get the resources. There would be a way to do it, given

time. This is what many people outsource in their lives: if they want to achieve something, they eventually hit a roadblock, they can't error correct toward the goal, and then they quit altogether, when there is a string of basic operations you can do to achieve anything you want in life within the realm of possibility. Now, the thing here is

time. Transformation takes time. It doesn't matter how fast AGI can compute, and a singularity won't change that, just as the Big Bang didn't, or the Enlightenment didn't. The Enlightenment didn't put rockets in the sky. The Big Bang didn't put rockets in the sky. It birthed a new set of problems, a new set of ideas, and new potential for knowledge. So far, the AGI worry seems to stem from a fundamental misunderstanding of what reality is. Now, the last three,

variation, selection, and attention, all have to do with how to create knowledge. In other words, it's about navigating idea space, or the unknown, because we can compute and we can transform. But do we have the knowledge to do so? Can we acquire the knowledge to do so? Knowledge serves two functions. The first is to make specific things happen, preferably good things, of course, rather than bad things.

The second is to capture patterns in reality. This allows us to store information in an efficient way so that we aren't always starting from scratch in our pursuits. We understand big picture concepts like the sun rising and falling each day and seasons changing every so often. And without this understanding, much of our lives would fall apart. We wouldn't make cumulative progress. So capturing patterns in reality allows us to

plan by proximity. We understand that we would freeze to death in a cold environment, so we don't go into the cold environment until we have the tool to survive in it. And something like a jacket or a hotel is just a deposit of knowledge.

The other thing with this pattern recognition and efficiency of understanding is that in the future, I believe we're going to be speaking a slightly different language than we are now. We're going to be speaking in this more abstracted, high-level layer that allows us to understand the things that fall under it.

When everything becomes this natural language type deal, like us speaking natural language to spit out code from an LLM, as language evolves, as it does, we're going to be speaking in a much clearer, more efficient way that allows us to navigate the idea space of the future. Because if a lot of the problems that we have now are solved and we're open to this new set of possibilities and knowledge, and if

language paves the road of thought, as in, thought can only go as far as language allows, then we're going to be speaking a different language so that we can perceive different things. Now, we've talked a lot about idea space, but idea space is what you have to explore in order to create knowledge. Think of idea space as the unknown. You can think of the unknown as a map with light spots on it. Think of a video game

minimap, where the light spots on the map are the areas you've explored. That's the known. The dark areas are the unknown, the places you haven't explored yet. And the dark spots are where your potential lies. If you just operate within your bubble of comfort, or your comfort zone, or the known all the time, then you aren't going to discover anything new. You're not going to be happy. You're not going to find meaning. You're not going to have problems to solve.

The map, or the unknown, is a surface area for ideas that can be discovered and tested against reality to verify their validity. When those results do not move you closer toward your goal, or move you further from it, a problem is revealed and you must error correct toward the goal. So the first

Part of this is variation. Is there a limit to the number of new ideas we can come up with to survive and achieve what we set our minds to? With computation, we can navigate the entire space of ideas. With agency, we can take any step within that space and eventually stumble across a good idea after many bad ones. With creation,

we can move in unique ways, like flying over a forest rather than walking through it. By creating new things and new knowledge, we can explore this unknown, this idea space, in new ways to find new ideas. You find ideas in the sky rather than on the ground, because now you can fly through the sky. So we can understand anything, create anything, and discover an infinite set of new ideas to solve an infinite string of problems.

Again, AGI can do the same and we are both bound by the laws of nature, but any possibility inside of that is within reach. Now after variation comes selection. We can come up with any idea, but can we find the good ones? The potential problem here is that it is difficult to make cumulative progress without learning from mistakes.

It wouldn't be fun to start off from scratch if we wanted to build an electric car after a gas car. We wouldn't be very developed as a species. As universal cybernetic systems, we can become more efficient at navigating idea space to avoid wandering lost. We error-correct.
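Here's a toy sketch of variation plus selection as cumulative error correction, in the spirit of Dawkins' "weasel" demonstration. The target string and mutation scheme are stand-ins of mine, not anything Deutsch or de la Guardia specify:

```python
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def evolve(target: str) -> int:
    """Variation proposes random changes; selection keeps only the ones that move us closer."""
    score = lambda guess: sum(a == b for a, b in zip(guess, target))
    guess = "".join(random.choice(ALPHABET) for _ in target)
    generations = 0
    while score(guess) < len(target):
        i = random.randrange(len(target))                 # variation: mutate one position
        mutant = guess[:i] + random.choice(ALPHABET) + guess[i + 1:]
        if score(mutant) >= score(guess):                 # selection: never discard progress
            guess = mutant
        generations += 1
    return generations

print(evolve("problems are soluble"), "generations")
```

Without the selection line, each mutation would forget the previous ones, which is the "starting from scratch" failure mode described above.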

So again, there's no fundamental difference between AGI and humans here. Now, the last part is attention. This one is specifically from Carlos de la Guardia, and I like it because I have a book called The Art of Focus that actually goes into a lot of the deeper nature of attention.

It's not a very scientific book. I feel like when some people purchase The Art of Focus, they think, oh, I'm going to learn how to do deep work. No, it's more so my life philosophy. So one other aspect that humans take for granted is our ability to change our focus by changing our perspective.

When a problem occurs, where does your attention go? If you want to build a rocket, does it help to ask the old gods to do it for you? Or can you change lenses to view the situation in a way that allows you to perceive opportunities? What I'm saying here is that in the past, and even now, when people encounter a problem, their minds often default

to the worldview or belief system that they were programmed to believe in. When people have money problems, they start praying to God and asking for help. And while that can help orient your attention in a way that allows you to overcome the problem, it usually isn't the best lens to actually solve it.

I'm not saying that it's an invalid lens or an invalid belief system or worldview. I'm just saying that it's not the overarching one that's going to solve all the problems in your life. To me, that's more of a sense-making or meaning mechanism that allows you to find peace and clarity in your place in the world.

Solving a problem like money takes a completely different skill set and belief system to do well and efficiently, without wasting a lot of time. And this presents a massive problem for humans: paradigm lock and ideology. We do have the ability to change our perspective when new problems come up, but by attaching to a specific ideology, you

box yourself into a narrow set of ideas. You can't explore idea space very well or very far. In other words, humans have the ability to put on a spiritual lens to find peace, a scientific lens to find progress, and any other lens in between.

One big problem that I see a lot is identifying with a purely ascending philosophy, because that's no different from being an incomplete system that will fail to solve certain sets of problems. Spirituality is a great lens or tool. It's a very big-

picture lens or tool that you can expand into when you need to, when you're not focused on a problem. It's kind of like saying the solution to everything is to surrender to every problem and just hope that it passes you by, or to release tension. It sounds big picture, but reality isn't that myopic. It's a good solution and a good tool, but a bad master for

many problems in your life. And it also fails to integrate. You're focused on the whole. It's holism, right? The whole is treated as more significant than anything else, but you're neglecting all of the parts that make it up. It's not integral. You're practically rejecting survival, and by doing so, you cause a lot more of the pain and suffering that you're trying to solve, with a solution that, one, isn't a solution, and two, isn't the best way to go about solving the problems. It's very one-dimensional. So with that said...

All of that said, AGI does not seem like it can surpass us very far, especially if we can become very similar to AGI, which is, I would assume, a big part of the future. But the question is, okay, what do we do? This doesn't help the fact that life as we know it will change, jobs will be replaced, and the unknown creeps nearer. So in my opinion, the only option is what it has been the entire time, to dive in.

Let's start with a recent tweet from Naval Ravikant: be a creator and you won't have to worry about jobs, careers, and AI. Now, as I've discussed in like 80% of my videos, and as Naval eloquently put it here, the answer is to become a creator, specifically a value creator. Now, when I say this, many people assume that I'm saying...

become a content creator, like this myopic little job that people want when they grow up because it seems like you just don't do anything, you just talk to a camera all day. But it is definitely a business model with certain skills that you need. It's not like

you just sit around and record videos all day and you're good to go. So no, being a creator isn't about posting content. Being a creator is about digging down to the essence of your being and the reason behind most of your actions, which is to create something positive and

worth using, to be valuable, to feel as if you have a place in the world, to feel a connection to something greater than yourself, to contribute. And all of it runs on creativity, something we don't understand yet. So let me string this together to help it click. The word entrepreneur is of French origin. It comes from the verb

entreprendre (I'm not French), which means to undertake or to do something. In the 16th century, when translated into English, it became a noun referring to a person who undertakes a business project. So in other words, an entrepreneur is someone who is doing something. It is not a title or label, but an

act. It is the commitment to a high agency life, a commitment to doing things without permission from someone else, to setting your own goals and navigating the unknown to achieve them. And therein lies the problem. People want to be told what to do, when that's exactly what will get them replaced. That doesn't mean you should stop learning from other people. It means you shouldn't treat any form of job, skill, or career as an end.

Something that's going to be extremely important in the future is not attaching to one specific way of doing things or one specific outcome for your life. Everything's going to be changing a lot faster. Problems are going to emerge. Knowledge is going to emerge. We're going to be pivoting left and right. Things are going to seem very unstable unless you understand how to navigate that, unless you're a high agency person who can handle it. There's going to be very little certainty in the future, and that is an amazing thing. That is not a bad thing at all. So in this sense, a

creator is someone who is creating something. It's not a title or a fancy new kind of job where you can sit in front of a camera and do minimal work. It is a way of being. And it just so happens that the highest leverage place to create right now, at least, is on the internet. It is the path of high agency.

You don't need permission to create something and post it on the internet. You don't need permission to navigate idea space and find the information you need. This may change in the future, but that only reinforces the point. No matter if it's the internet or intergalactic space, the answer has been and always will be

to become a creator, of course. And as we've learned, that doesn't mean it's easy. That doesn't mean you can skip trial and error. That doesn't mean that one course is going to give you all the answers because that's not how the mind works. That's not how time works. That's not how AGI will work when we're exposed to a completely new set of problems where

our current level of intelligence isn't going to do it. Now, I'm kind of obligated to promote my Two-Hour Writer course and One-Person Business Launchpad course because they're highly relevant, but I just want to make a point here about the content creator thing. Don't go into this to become a writer or a content creator or a one-person business or whatever it may be. Those are concepts to help you understand the parts that compose that thing. They're supposed to equip you with a new way of looking at things

and the skills to achieve certain things in your life. So if you want to achieve those things, then yeah, Two-Hour Writer and One-Person Business Launchpad can help you. But they're not a quick fix. You still have to error correct toward a goal. You're not going to do it for 30 days and gain a billion followers at once. You're going to learn how to write, and you're going to get better at writing. You're going to learn the skills required to start a business, and you're going to get better at those skills. It's a

commitment for life to error correct in a high agency direction. And that's what creators do. They solve the infinite set of problems that life presents. Without problems, there is no creativity. Without problems, there is no purpose. Pain and suffering stem from the inability to understand problems and, more so, from relinquishing your ability to solve them. A world without creativity and purpose is a world without life. Now, the mark of a sovereign individual

is that they learn how to learn. They have an evolving vision for the future. They build a meaningful project as one stepping stone. They identify problems that prevent progress. They generate ideas and test solutions. They become more efficient with time. They deposit their creations and knowledge. If valuable, they are rewarded by the monetary system of that society. If not valuable, they error correct until valuable. And lastly, they never, ever quit

just because someone else's vision trumps their own. Do it all. Write, design, market, sell, film, code. Be the generalist you were born to be. Be the orchestrator of ideas, the governor of thought. AI is simply a tool that now allows you to learn and do all of these. Once it becomes your master, you lose. Nobody can tell you what to do in the future. But has it ever been any different? At this point, we're just talking in circles. You're looking for the quick fix, as always, when the quickest fix is the longest path.

The principles haven't changed, and they're not going to change in the future. You just haven't taken the leap. Thank you for watching. If you want a free life reset planner, just join my newsletter and it'll get sent to you in the welcome email. Links in the description for all of those. Sign up for Cortex: the AI stuff is coming late January, and the mobile app, desktop app, and offline mode are being dropped throughout January. If you sign up for Cortex, you'll receive an email with all of those things, and you can join the Discord as well. All right. Thank you. Like, subscribe. Bye.