AI companies aim to attract investors by emphasizing AI's potential existential threat, a framing that can justify significant funding. The idea of AI overtaking humanity is also a cinematic concept that captures public interest, and it serves as a distraction from the problems current AI systems actually have.
The primary challenges include AI hallucination, where models make up information, and the general unreliability of current AI systems. Google, for instance, has admitted it does not know how to fix the problem of incorrect AI-generated answers in search results.
AI hallucination, in which a model fabricates information, is a significant issue. ChatGPT, for example, has produced false claims of violent threats and cited nonexistent legal cases, undermining its reliability in critical applications.
AI could increase productivity without necessarily causing job losses. If AI makes software engineers twice as productive, a company might keep both engineers and double its output rather than lay one off.
AI trained on human data adopts human biases, which are difficult to eliminate. Guardrails intended to prevent biased outcomes can create new problems, as seen when Google's Gemini image generator overcorrected and produced historically inaccurate images.
Human intelligence is defined by emotional responses, creative integration of past and new information, and genuine connections with others. AI, while capable of imitation, lacks these core human attributes.
Despite roughly $50 billion invested in AI over the past few years, the resulting revenue is only about $3 billion. This suggests that current investment levels may not be sustainable, especially given the high cost of the hardware needed for further AI improvements.
Will progress in artificial intelligence continue to accelerate, or have we already hit a plateau? Computer scientist Jennifer Golbeck interrogates some of the most high-profile claims about the promises and pitfalls of AI, cutting through the hype to clarify what's worth getting excited about — and what isn't.