691: A.I. Accelerators: Hardware Specialized for Deep Learning

2023/6/27
Super Data Science: ML & AI Podcast with Jon Krohn

Show Notes

GPUs vs. CPUs, chip design, and the importance of chips in A.I. research: this highly technical episode is for anyone who wants to learn what goes into chip development and how to break into the competitive industry of accelerator design. With advice from expert guest Ron Diamant, Senior Principal Engineer at AWS, you'll get a breakdown of the need-to-know technical terms, what chip engineers need to think about during the design phase, and what the future holds for processing hardware.

This episode is brought to you by Posit, the open-source data science company, by the AWS Insiders Podcast, and by WithFeeling.ai, the company bringing humanity into A.I. Interested in sponsoring a SuperDataScience Podcast episode? Visit JonKrohn.com/podcast for sponsorship information.

In this episode you will learn:
• What CPUs and GPUs are [05:29]
• The differences between accelerators used for deep learning [14:31]
• Trainium and Inferentia: AWS's A.I. accelerators [22:10]
• Whether model optimizations will lead to lower demand for the hardware that runs them [43:14]
• How a chip designer goes about production [48:34]
• Breaking down the technical terminology for chips (accelerator interconnect, dynamic execution, collective communications) [55:29]
• The importance of AWS Neuron, a software development kit [1:15:42]
• How Ron got his foot in the door with chip design [1:26:40]

Additional materials: www.superdatascience.com/691