EP51: Is AI all hype?

2024/10/31

Geopolitics Unplugged


Summary: We discuss the limitations of artificial intelligence (AI), focusing on its inability to replicate human creativity, empathy, and dexterity. We argue that while AI is useful for specific tasks, its reliance on narrowly defined objective functions makes it unsuitable for activities requiring genuine creativity, emotional intelligence, or complex physical manipulation. We also criticize the tendency to overhype AI's capabilities, emphasizing the importance of separating scientific fact from media sensationalism and understanding the limits of current AI technology.

Questions to consider as you read/listen:
What are the main limitations of current AI technology, and what are the potential consequences of these limitations?
How does AI technology impact human creativity, and what are the future implications for different types of jobs?
How do current media portrayals and public perceptions of AI influence the development and acceptance of AI technology?

Long format: Is AI “all hype”? I was on a forum where a contributor was arguing, in essence, that AI is all fanboy hype, will never amount to anything, and is a big old bubble ready to burst.

He quoted an article that interprets an actual peer-reviewed study, with quotes from its primary author. Here is the peer-reviewed article itself: https://arxiv.org/pdf/2410.03703 The authors' own conclusion, quoted verbatim, is: "Through this work, we sought to understand the impact of LLMs on human creativity. We conducted two parallel experiments on divergent and convergent thinking, two key components of creative thinking. Taken together, these experiments shed light on the complex relationship between human creativity and LLM assistance, suggesting that while AI can augment creativity, the mode of assistance matters greatly and can shape long-term creative abilities. In closing, we hope this work offers a template to experimentally evaluate the impact of generative AI on human cognition and creativity." The authors of the study clearly see a place for AI (not that it is all hype) but warn that if it is relied upon too much, overall creativity may suffer. And that is an interesting thought.

I am not here to defend AI, but even if you don't like it, it cannot be fairly dismissed as "all hype": time will prove that AI is here, it is useful, and it will change things. AI will change things for the better and for the worse, like all tools. Fire has good and bad. It cooks food. It burns down the house.

I don't know of anyone who is a true thought leader in AI who says that AI will supplant human creativity. In fact, quite the opposite. When you look at the mainstream, high-level literature, it consistently holds that AI will never be good at creativity. Even AI's strongest mainstream proponents clearly think AI cannot create, conceptualize, or plan strategically. While AI is great at optimizing for a narrow objective function (see the discussion of narrow objective functions below), it is unable to choose its own goals or to think creatively. Nor can AI think across domains or apply common sense. And it most likely never will.

So yes, relying on AI to be creative is like relying on a hammer to be a great screwdriver. That is not its intended function or purpose. Every tool has a use, and AI's use is not in creativity; no subject matter expert in AI I have read says that it is.

In terms of creativity: yes, jobs that require vision, creativity, outside-of-the-box thinking, and "seeing around a corner" are most likely very safe from AI supplanting them. The jobs most at risk of automation by AI tend to be routine and entry-level jobs. In other words, the poor will become poorer. AI will also displace increasingly complex types of blue-collar work; warehouse pickers will be replaced and displaced. Kai-Fu Lee theorizes that about 40% of jobs could be accomplished mostly by AI and automation technologies by 2033.

What can’t AI do well?

Creativity: AI cannot create, conceptualize, or plan strategically. While AI is great at optimizing for a narrow objective function, it is unable to choose its own goals or to think creatively. Nor can AI think across domains or apply common sense.

Empathy: AI cannot feel, or interact with feelings like empathy and compassion. Therefore, AI cannot make another person feel understood or cared for. Even if AI improves in this area, it will be extremely difficult to get the technology to a place where humans feel comfortable interacting with robots in situations that call for care and empathy, or what we call “human-touch services.”

Dexterity: AI and robotics cannot accomplish complex physical work that requires dexterity or precise hand-eye coordination. AI can’t deal with unknown and unstructured spaces, especially ones it hasn’t observed before.

Jobs that are asocial and routine, such as insurance adjusting, are likely to be taken over in their entirety by AI. For jobs that are highly social but routine, humans and AI will work together, each contributing expertise. For jobs that are creative but asocial, human creativity will be amplified by AI tools. Finally, jobs that require both creativity and social skills are where humans will shine and will survive past the AI revolution. Please see the charts provided in Kai-Fu Lee’s books.


There are many limitations of AI. Chief among them is the difficulty of defining an appropriate objective function so that the model actually produces the most appropriate outcome. The potential for abuse in AI comes from the simplicity of the objective function and the danger of single-mindedly optimizing for it: if the objective function is defined too narrowly, all other considerations will be discarded in pursuit of that single objective. Further, we humans have a good grasp of what we know and what we don’t know; GPT does not. GPT is also weak in causal reasoning, abstract thinking, explanatory statements, common sense, and intentional creativity.
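The danger of a too-narrow objective function can be illustrated with a toy sketch. This is not from the episode; the candidate actions and their scores are invented for illustration. An optimizer scoring only on "engagement" happily picks an option that a broader objective, one that also weighs user wellbeing, would reject:

```python
# Toy illustration (hypothetical data): single-minded optimization of a
# narrow objective function discards every consideration it does not encode.

# Each candidate action: (name, engagement_score, user_wellbeing_score)
candidates = [
    ("balanced article", 0.6, 0.8),
    ("outrage clickbait", 0.9, 0.1),
    ("calming content", 0.3, 0.9),
]

def narrow_objective(action):
    """Objective function defined only over engagement; wellbeing is invisible."""
    _, engagement, _ = action
    return engagement

def broader_objective(action, weight=0.5):
    """A wider objective that also values user wellbeing."""
    _, engagement, wellbeing = action
    return weight * engagement + (1 - weight) * wellbeing

best_narrow = max(candidates, key=narrow_objective)
best_broad = max(candidates, key=broader_objective)

print(best_narrow[0])  # the engagement-only optimizer picks the clickbait
print(best_broad[0])   # the broader objective prefers the balanced article
```

The optimizer itself is doing nothing wrong in either case; the difference lies entirely in what the objective function was told to value, which is the point the passage above makes.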

And I agree with Kai-Fu Lee when he writes: "People often rely on three sources to learn about AI: science fiction, news, and influential people. In science fiction books and TV shows, people see depictions of robots that want to control or outsmart humans, and superintelligence turned to evil. Media reports tend to focus on negative, outlying examples rather than quotidian incremental advances: an autonomous vehicle killing a single pedestrian, technology companies using AI to influence elections, and people using AI to disseminate misinformation and deepfakes. Relying on “thought leaders” ought to be the best option, but unfortunately, most who claim the title are experts in business, physics, or politics, not AI technology. Their predictions often lack scientific rigor. What makes things worse is that journalists tend to quote these leaders out of context to attract eyeballs."