The Transformer, introduced in a 2017 paper by researchers at Google, is a neural-network architecture that enables a universal learner capable of extracting underlying order from large bodies of structured data. This technology is crucial because it powers advancements like ChatGPT and has broader applications in fields such as drug discovery, synthetic biology, and robotics.
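At the heart of the Transformer is an operation called scaled dot-product self-attention, in which every token in a sequence weighs its relevance to every other token. The sketch below is a minimal, illustrative NumPy version (not drawn from the episode itself); the function name and toy dimensions are assumptions for demonstration only.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V.
    A minimal sketch of the core Transformer operation."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise similarity of queries and keys
    # Numerically stable softmax over each row
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output row is a weighted mix of the value rows

# Toy example: 3 tokens, each a 4-dimensional embedding (hypothetical sizes)
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out = attention(X, X, X)  # self-attention: Q, K, V all come from the same input
```

Because the attention weights in each row sum to 1, every output vector stays in the span of the input value vectors; stacking many such layers is what lets the model pick up long-range structure in data.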
Companies are using Transformer technology to create new molecules, develop bacteria that can eat plastic, and enhance robotics. For example, a company called Physical Intelligence has developed a robot that can fold laundry, a task previously considered one of the most challenging in robotics.
The primary limitation is the availability of data. Unlike language models, which can leverage vast amounts of text from the internet, other applications like robotics and self-driving cars lack similar large, freely available datasets. This data scarcity is a significant barrier for startups and smaller companies.
While transformers can process and generate data in impressive ways, they lack true intelligence or a world model. They are good at simulating human-like responses but do not possess actual understanding or sentience. This distinction is crucial for understanding their capabilities and limitations, especially in critical applications.
The top concerns with AI include over-reliance on AI without understanding its decision-making processes, the environmental impact due to its energy consumption, and the risk of malinvestment in AI startups and technologies that may not yield expected productivity gains.
While the capabilities of current generative AI models may plateau, the next phase involves integrating these technologies into everyday use. This process, known as the installation phase, can take decades as people and businesses figure out how to effectively incorporate AI into their workflows, similar to the adoption of PCs, mobile phones, and cloud computing.
We’re hearing from our reporters and columnists about some of the biggest companies, trends and people in tech and what could be in store for 2025. The underlying technology behind artificial intelligence chatbots, called a “transformer,” may have profound implications for a range of industries, and could hold the answers to some of science’s toughest questions. WSJ tech columnist Christopher Mims joins us to discuss how some researchers hope to push the boundaries of what’s possible with AI in 2025. James Rundle hosts.
Sign up for the WSJ's free Technology newsletter.