This episode explores how large language models (LLMs) are being taught to use external tools to enhance their problem-solving abilities. The authors of the paper present a standardized paradigm for tool integration, outlining the steps involved: understanding user intent, selecting the appropriate tool, and executing the task. The episode also examines techniques for teaching LLMs to use tools, including fine-tuning and in-context learning, and highlights the challenges and advances in each. Finally, the authors discuss the emerging trend of enabling LLMs to create their own tools, potentially shifting their role from tool users to tool creators.
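To make the paradigm concrete, here is a minimal Python sketch of the loop described above: a user request is interpreted, a tool is chosen from a registry of described tools, and the tool is executed to produce the answer. The `Tool` dataclass, the keyword-based `select_tool` matcher, and the toy calculator/search tools are illustrative assumptions, not anything from the paper, which leaves intent understanding and tool selection to the LLM itself (for example, via an in-context prompt listing tool descriptions).

```python
# Minimal sketch of the tool-integration loop:
# understand intent -> select a tool -> execute the task -> return a result.
# All names here (Tool, TOOLS, select_tool, answer) are hypothetical.

from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Tool:
    name: str
    description: str           # shown to the model (or matcher) for selection
    run: Callable[[str], str]  # executes the task and returns a result


# Hypothetical tool registry with two toy tools.
TOOLS: Dict[str, Tool] = {
    "calculator": Tool(
        name="calculator",
        description="Evaluate arithmetic expressions, e.g. '2 + 3 * 4'.",
        run=lambda expr: str(eval(expr, {"__builtins__": {}}, {})),
    ),
    "search": Tool(
        name="search",
        description="Look up a term in a small local knowledge base.",
        run=lambda q: {"llm": "large language model"}.get(q.lower(), "no result"),
    ),
}


def select_tool(user_request: str) -> Tool:
    """Stand-in for the LLM's intent-understanding / tool-selection step."""
    if any(ch.isdigit() for ch in user_request):
        return TOOLS["calculator"]
    return TOOLS["search"]


def answer(user_request: str, payload: str) -> str:
    tool = select_tool(user_request)   # select the appropriate tool
    result = tool.run(payload)         # execute the task with that tool
    return f"[{tool.name}] {result}"   # compose the final response


if __name__ == "__main__":
    print(answer("What is 2 + 3 * 4?", "2 + 3 * 4"))  # [calculator] 14
    print(answer("What does LLM stand for?", "LLM"))  # [search] large language model
```

In a fine-tuned or in-context setting, the hand-written `select_tool` function would be replaced by the model itself, which reads the tool descriptions and emits a structured call naming the tool and its arguments.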