
NVIDIA's Jensen Huang on AI Chip Design, Scaling Data Centers, and his 10-Year Bets

2024/11/7

No Priors: Artificial Intelligence | Technology | Startups

People
Jensen Huang
CEO and co-founder who has led NVIDIA from its founding to becoming the world's leading accelerated computing company.
Topics
Jensen Huang believes that over the next decade, NVIDIA will improve AI performance by two to three times per year through hardware-software co-design and data-center-scale computing, while driving down cost and energy consumption. Traditional scaling approaches have run out of steam, so new methods are needed, such as co-design: modifying algorithms to reflect the system architecture, and modifying the system to reflect the architecture of new software. Data-center-scale computing and pushing work into the network fabric are also key to scaling; to that end, NVIDIA acquired Mellanox and developed InfiniBand and NVLink. NVLink will allow hundreds of GPUs to work together as one virtual super-processor, delivering the low latency and high throughput that inference-time scaling demands. A stable foundational architecture is critical for the software ecosystem and productivity, allowing the software above it to keep improving without changes to the underlying architecture.


Chapters
Jensen Huang discusses NVIDIA's long-term bets on computing, focusing on scaling performance and reducing costs and energy consumption.
  • NVIDIA aims to double or triple performance every year at scale.
  • The company is moving beyond Moore's Law to a 'hyper Moore's Law' curve.
  • NVIDIA's approach involves both chip design and data center scale.

Transcript


Hi, listeners, and welcome to No Priors. Today, we're here again, one year since our last discussion, with the one and only Jensen Huang, founder and CEO of NVIDIA. Today, NVIDIA's market cap is over three trillion dollars, and it's, so to speak, literally holding all the chips in the AI revolution. We're excited to hang out at NVIDIA's headquarters and talk all things frontier models, data center scale computing, and the bets NVIDIA is taking on a ten-year basis.

Welcome back, Jensen. Thirty years into NVIDIA and looking ten years out, what are the big bets you think are still to make? Is it all about scale from here? Are we running into limitations in terms of how we can squeeze more compute and memory out of the architectures we have? What are you focused on?

Well, if we take a step back and think about what we have done: we went from coding to machine learning, from writing software tools to creating AIs, and all of that went from running on CPUs that were designed for human coding to now running on GPUs designed for AI coding, basically machine learning. And so the world has changed the way we do computing. The whole stack has changed.

And as a result, the scale of the problems we can address has changed a lot. Because if you can parallelize your software on one GPU, you've set the foundation to parallelize across a whole cluster, or maybe across multiple clusters or multiple data centers. And so I think we've set ourselves up to be able to scale computing, and develop software, at a level that nobody has ever imagined before.

And so the big bet is that over the next ten years, our hope is that we can double or triple performance every year at scale, not at a chip level but at scale, and therefore drive the cost down by a factor of two or three, and drive the energy down by a factor of two or three, every single year. When you do that, when you double or triple every year, in just a few years that adds up. It compounds really, really aggressively. And so I wouldn't be surprised if, compared to the way people think about Moore's Law, which is 2x every couple of years, we're going to be on some kind of hyper Moore's Law curve, and I fully hope that we continue to do that.
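
(Editor's aside: a quick sketch of the compounding math behind that claim. The 2x and 3x annual rates are taken from Huang's statement; Moore's Law is approximated as 2x every two years.)

```python
# Rough compounding comparison: 2-3x per year vs. ~2x every two years.
for years in (2, 5, 10):
    moore = 2 ** (years / 2)            # classic Moore's Law cadence
    low, high = 2 ** years, 3 ** years  # Huang's stated 2x-3x per year
    print(f"{years:>2} yrs: Moore ~{moore:,.0f}x | 2x/yr {low:,.0f}x | 3x/yr {high:,.0f}x")
```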

What do you think is the driver of making that happen even faster? Moore's Law was somewhat self-reflexive, right? It was something that he said and then people

kind of implemented. To me, what happened is you had two fundamental technical pillars. One of them was Dennard scaling and the other one was Carver Mead's VLSI scaling. And both of those were rigorous techniques, but those techniques have really, really run out of steam. And so now we need a new way of doing scaling.

You know, obviously the new ways of doing scaling are all kinds of things associated with co-design. Unless you can modify or change the algorithm to reflect the architecture of the system, and then change the system to reflect the architecture of the new software, and go back and forth, unless you can control both sides of it, you have no hope. But if you can control both sides of it, you can do things like move from FP64 to FP32, to BF16, to FP8, to FP4, to who knows what, right?
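
(Editor's aside: a minimal PyTorch sketch of the precision ladder Huang is describing, not NVIDIA's actual recipe. Running the same matrix multiply in narrower formats moves less data and uses cheaper math units, but the accumulated error is exactly why the algorithm and the hardware have to be co-designed. FP8/FP4 additionally need hardware and library support and per-tensor scaling, so they are omitted here.)

```python
import torch

# Illustrative only: the same matmul at decreasing precision.
device = "cuda" if torch.cuda.is_available() else "cpu"
a = torch.randn(2048, 2048, device=device)
b = torch.randn(2048, 2048, device=device)

reference = a.double() @ b.double()            # FP64 reference
for dtype in (torch.float32, torch.bfloat16):  # FP32, then BF16
    out = (a.to(dtype) @ b.to(dtype)).double()
    err = (out - reference).abs().max().item()
    print(f"{dtype}: max abs error vs FP64 = {err:.4f}")
```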

And so I think co-design is a very big part of that; we call it full stack. The second part of it is data center scale.

You know, unless you can treat the network as a compute fabric and push a lot of the work into the network, push a lot of the work into the fabric, and as a result do that compression, that computing, at very large scales. And so that's the reason why we bought Mellanox and started fusing InfiniBand and NVLink in such an aggressive way.

And now look where NVLink is going. You know, the compute fabric is going to scale out into what appears to be one incredible processor, called a GPU. Now we will get hundreds of GPUs that can be working together.

You know, of the computing challenges we're dealing with now, one of the most exciting ones, of course, is inference-time scaling. It has to do with essentially generating tokens at incredibly low latency, because you're self-reflecting, as you just mentioned. I mean, you're going to be doing tree search.

You're going to be doing chain of thought. You're going to be doing probably some Monte Carlo simulation in your head. You're going to be reflecting on your own answers. You're going to be prompting yourself and generating text, you know, silently, and still responding, hopefully, in a second. Well, the only way to do that is if your latency is extremely low.

Meanwhile, the data center is still about producing high-throughput tokens, because, you know, you still want to keep costs down, you want to keep the output high, you want it to, right, generate a return. And so these two fundamental things about a factory, low latency and high throughput, they're at odds with each other. And so in order for us to create something that is really great at both, we had to go invent something new, and NVLink is really our way of doing that. Now you have a virtual GPU that has an incredible amount of flops, because you need it for the context, you need huge model memory, working memory, and still has incredible bandwidth for token generation, all at the same time.
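
(Editor's aside: a toy model, with made-up numbers, of why per-user latency and aggregate throughput pull against each other in token generation, and why a bigger pool of memory bandwidth, the "virtual GPU" NVLink creates, relaxes the tension. Every parameter below is an assumption for illustration only.)

```python
# Each decode step must stream the model weights from memory, so step time is
# roughly model_bytes / bandwidth plus a per-token compute cost; batching
# shares the weight-streaming cost across users but stretches the step.
MODEL_BYTES = 1.4e11        # assumed: ~70B params at 2 bytes each
BANDWIDTH = 3e12            # assumed aggregate memory bandwidth, bytes/s
COMPUTE_PER_TOKEN = 1.4e-4  # assumed incremental seconds per batched token

for batch in (1, 32, 256, 1024):
    step = MODEL_BYTES / BANDWIDTH + batch * COMPUTE_PER_TOKEN
    per_user = 1 / step   # interactivity: tokens/s each user sees
    total = batch / step  # "factory" output: tokens/s for the whole machine
    print(f"batch {batch:>4}: {per_user:6.1f} tok/s per user, {total:8.1f} tok/s total")
```

Doubling BANDWIDTH in this toy model (a larger NVLink-connected pool acting as one processor) lifts the per-user number without giving up the batch, which is the tension Huang says the new interconnect is meant to resolve.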

At the same time, the people building the models are actually also optimizing things dramatically. Like, David on my team pulled data over the last eighteen months: the cost of a million tokens going into a GPT-4-equivalent model has basically dropped by a couple of orders of magnitude. And so there's just massive optimization and compression happening on that front as well,

just on our layer, just on the layer that we work on. You know, one of the things that we care a lot about, of course, is the ecosystem of our stack and the productivity of our software. You know, people forget that because you have a foundation, and it's a solid foundation, everything above it can change.

If the foundation is changing underneath you, it's hard to build a building on top. It's hard to create something interesting on top. And so CUDA made it possible for us to iterate so quickly. Just in the last year, and we just went back and benchmarked it, since Llama first came out, we've improved the performance of Hopper by a factor of five without the layer on top ever changing. Now, a factor of five in one year is impossible using traditional computing approaches, but with accelerated computing, and using this way of co-design, we're able to get all kinds of new things.

Yeah. How much are your biggest customers thinking about the interchangeability of their infrastructure between large-scale training and inference? Well, you know, infrastructure is

disaggregated these days. Somebody was just telling me that they decommissioned their Voltas recently. They have Pascals, Amperes, all different configurations, with Blackwell coming. Some of it is optimized for air cooling, some of it is optimized for liquid cooling. Your services are going to have to take advantage of all of this.

The advantage that NVIDIA has, of course, is that the infrastructure that you build today for training will just be wonderful for inference tomorrow. And most of ChatGPT, I believe, is inferenced on the same type of systems that it was trained on just recently. And so if you can train on it, you can inference on it.

And so you're leaving a trail of infrastructure that you know is going to be incredibly good at inference, and you have complete confidence that you can take the return on the investment that you've made and put it into new infrastructure to go scale with. You know you're going to leave behind something of use, and you know that NVIDIA and the rest of the ecosystem are going to be working on improving the algorithms so that the rest of your infrastructure improves by a factor of five, you know, in just a year, and so that motion will never change. And so the way that people think about the infrastructure is: yeah, even though I built it for training today and it's got to be great for training, we know it's going to be great for inference.

Now, inference is going to happen at multiple scales. I mean, first of all, in order to distill small models, it's good to have a larger model to distill from. So you're still going to create these incredible frontier models. They are going to be used for, of course, the groundbreaking work. You're going to use them for synthetic data generation.

You're going to use the models, the big models, to teach smaller models and to distill down to smaller models. And so there's a whole bunch of different things you can do, but in the end, you're going to have giant models and little tiny models. The little tiny models are going to be quite effective, you know, not as generalizable, but quite effective.

And so, you know, they are going to perform a very specific task incredibly well, that one task. And we're going to see superhuman performance on one little task from a little tiny model. Maybe it's not a small language model but a tiny language model, TLM, or, you know, whatever. Yeah.

So I think we're going to see all kinds of sizes, and we hope so, right? Just kind of like software today. Yeah.

I think in a lot of ways, artificial intelligence allows us to break new ground in how easy it is to create new applications, but everything else about computing has largely remained the same. For example, maintaining software is extremely expensive.

And once you build it, you would like it to run on as large an installed base as possible. You would like not to write the same software twice. I mean, a lot of people still feel the same way: you'd like to take your engineering and move it forward. And so to the extent that the architecture allows you, on the one hand, to create software today that runs even better tomorrow with new hardware, that's great. Or the software that you create tomorrow, the AI that you create tomorrow, runs on a large installed base, you think that's great. That way of thinking about software is not

going to change. NVIDIA has moved into larger and larger, let's say, units of support for customers. I think about going from a single chip to, you know, a server, to a rack, and the NVL72. How do you think about that progression? Like, what's next, the data center?

Well, in fact, we build data centers today. That's the way that we build everything: if you're developing software, you need the computer in its full manifestation. We don't build PowerPoint slides and ship the chips. We build the whole data center.

And until we get the whole data center built up, how do you know the software works? Until you get the whole data center built up, how do you know your fabric works, and all the things that you expected the efficiencies to be, how do you know it's going to really work at scale? And that's the reason why it's not unusual to see somebody's actual performance be dramatically lower than their peak performance as shown in PowerPoints. Computing is just not what it used to be. You know, I say that the new unit of computing is the data center.

That, to us,

is what you have to deliver. That's what we build now. We build the whole thing like that.

And then we build it in every single configuration: air-cooled, x86, liquid-cooled, Grace, Ethernet, InfiniBand, NVLink, no NVLink. You know, we build every single configuration. We have five supercomputers in our company today.

Next year we're going to build easily five more. So if you're serious about software, you build your own computers. If you're serious about software, then you're going to build your whole computer.

And we build it all at scale. This is the part that is really interesting. We build it at scale, and we build it vertically integrated.

We optimize it full stack, end to end, and then we disaggregate everything and we sell it in parts. That's the part that is completely, utterly remarkable about what we do. The complexity of that is just insane. And the reason for that is we want to be able to graft our infrastructure into GCP, AWS, Azure, OCI. All of their control planes and security planes are different, and all of the ways they think about their cluster sizing are different. But yet we make it possible for them all to accommodate NVIDIA's architecture, so that CUDA could be everywhere.

That's really, in the end, the singular thought: that we would like to have a computing platform that developers can use that's largely consistent, modulo, you know, ten percent here and there because people's infrastructure is slightly optimized differently, modulo ten percent here and there. But everything they build will run everywhere. This is kind of one of the principles of software that should never be given up.

And we protect it quite dearly. It makes it possible for our software engineers to build once and run everywhere. And that's because we recognize that the investment in software is the most expensive investment, and it's easy to test: look at the size of the whole hardware industry and then look at the size of the world's industries. It's a hundred trillion dollars sitting on top of this one trillion dollar industry.

That tells you something. The software that you build, you have to, you know, basically maintain for as long as you shall live. We've never given up on a piece of software.

The reason why CUDA is used is because, you know, we told everybody: we will maintain it for as long as

we shall live. And we're serious. We still maintain it. I just saw a review the other day of NVIDIA SHIELD, our Android TV. It's the best Android TV in the world. We shipped it seven years ago, and it is still the number one Android TV.

For people, you know, anybody who enjoys TV. And we just updated the software just this last week, and people wrote a news story about it. GeForce: we have three hundred million gamers around the world. We've never stranded a single one of them.

And so the fact that our architecture is compatible across all of these different areas makes it possible for us to do it. Otherwise, we would have software teams a hundred times the size of our company today, if not for this architectural compatibility. So we're very serious about that, and it translates to benefits for the developers.

One impressive demonstration recently was how quickly you brought up a cluster for xAI. If you want to, talk about that, because that

was striking in terms of the scale and the speed. With Elon, first of all, deciding to do something, selecting the site, bringing cooling and power to it, and then deciding to build this hundred-thousand-GPU supercluster, which is, you know, the largest of its kind in one unit, and then working backwards: we started planning together the date that he was going to stand everything up, and the date that he was going to stand everything up was determined, you know, quite a few months ago.

And so all of the components, the OEMs, all the systems, all the software integration we did with their team, the network simulation, we simulated all of the new network configurations. I mean, we prepared everything as a digital twin. We represented all of the supply chain, we represented all of the wiring of the networking.

We even set up a small version of it, kind of, you know, just a first instance of it, a ground truth, a reference zero, you know, a system zero, before everything else showed up. So by the time everything showed up, everything was staged, all the practicing was done, all the simulations were done. And then, you know, the massive integration, even then, was a monument of gargantuan teams of humanity crawling over each other, wiring everything up twenty-four seven, and within a few weeks, the clusters were up.

It's really a testament to his willpower and how he's able to think through mechanical things, electrical things, and overcome what are apparently extraordinary obstacles. I mean, what was done there is the first time that a computer of that large a scale has ever been stood up at that speed, and it took our two teams working together, from the networking team to the compute team to the software team, the training team and the infrastructure team, the electrical engineers to the software engineers, all working together. It's really quite a feat to watch.

Was there a challenge that felt most likely to be blocking, from an engineering perspective?

Just the tonnage of electronics that had to come together. I mean, it would probably be worth it just to measure it. I mean, it's tons and tons of equipment. It's just not normal. You know, usually with a supercomputer system like that,

you plan for a couple of years. From the moment that the first systems get delivered to the time that you've probably commissioned everything for some serious work, don't be surprised if it's a year. You know, that happens all the time; it's not abnormal. Now, we couldn't afford to do that.

A few years ago, there was an initiative in our company called data center as a product. We don't sell it as a product, but we have to treat it like a product: everything about planning for it, and then standing it up, optimizing it, tuning it, keeping it operational. The goal is that it should be, you know, kind of like opening up your beautiful new iPhone: you open it up and everything just kind of works.

Now, of course, it's a miracle of technology to make it like that, but we now have the skills to do it. And so if you're interested in a data center, you just have to give me space, some power, and some cooling, and we'll help you set it up within, call it, thirty days. I mean, it's pretty extraordinary.

Wild. If you look ahead to two hundred thousand, five hundred thousand, a million GPUs in a supercluster, if you'd call it that at that point, what do you think is the biggest barrier? Capital, energy, supply in one area?

Everything. Nothing about what you just described, the scales that you're talking about, nothing is normal.

Yeah, but nothing is impossible.

Nothing is impossible, yeah. No laws of physics limit it, but everything is going to be hard. And of course, you know, is it worth it? Like you can't believe. To get to something that we would recognize as a computer that so easily and so ably does what we ask it to do, which is otherwise general intelligence of some kind, and even if we could argue about whether it's really general intelligence, just getting close to it is going to be a miracle. We know that.

And so I think there are five or six endeavors trying to get there, right? Of course OpenAI and Anthropic and X, and, you know, of course Google and Meta and Microsoft. And for this bunch, the next couple of clicks up that mountain are just so vital. Who doesn't want to be the first one up that mountain?

I think the prize for reinventing intelligence altogether is just too consequential not to attempt it. And so I think there are no laws of physics in the way; everything is just going to be hard.

A year ago, when we spoke together, we asked what applications you were most excited about that NVIDIA would serve next, AI and otherwise, and you talked about how you let your most extreme customers lead you there, and about some of the scientific applications. I think that's become a much more mainstream view over the last year. Is it still the science, and AI's application to science, that most excites you?

I love the fact that we have digital, we have AI chip designers here at NVIDIA.

Yeah. I love that we have AI software engineers.

How effective are the AI chip designers today?

Super good. We couldn't have built Hopper without it. And the reason for that is because they can explore a much larger space than we can, and because they have infinite time; they're running on a supercomputer.

We have so little time using human engineers that we don't explore as much of the space as we should. And we also can't explore combinatorially: I can't explore my space while also including your exploration and your exploration.

And so, you know, our chips are so large, it's not like it's designed as one chip; it's designed almost like a thousand chips. And we have to optimize each one of them kind of in isolation. You really want to optimize a lot of them together, do cross-module co-design, and optimize across a much larger space.

But obviously, then we'd be able to find local maxima that are hidden behind local minima somewhere. And so clearly we can find better answers. You can't do that without AI engineers; we just simply can't do it. We just don't have enough time.
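
(Editor's aside: a toy illustration of the "local maxima hidden behind local minima" point, nothing like a real EDA flow. With more search budget, human or machine, even a simple random-restart hill climber is far more likely to land on the better peak of a bumpy objective.)

```python
import math
import random

def objective(x: float) -> float:
    # A bumpy 1-D "design quality" surface with many local optima.
    return math.sin(3 * x) + 0.5 * math.sin(17 * x) - 0.01 * (x - 6) ** 2

def hill_climb(x: float, steps: int = 300, width: float = 0.05) -> float:
    best = objective(x)
    for _ in range(steps):
        cand = x + random.uniform(-width, width)
        if objective(cand) > best:
            x, best = cand, objective(cand)
    return best

random.seed(0)
for restarts in (1, 10, 1000):  # more budget -> more of the space explored
    found = max(hill_climb(random.uniform(0, 12)) for _ in range(restarts))
    print(f"{restarts:>4} restarts -> best found {found:.3f}")
```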

One other thing that has changed since we last spoke, and I looked it up: at the time, NVIDIA's market cap was about five hundred billion. It's now over three trillion. Over the last eighteen months, you've added two and a half trillion plus of market cap, which effectively is a hundred billion dollars plus a month, or two and a half Snowflakes, or a Stripe plus a little bit, or however you want to think about it. A country or two.

Obviously, a lot of things seem consistent in terms of focus on what you're building, and, you know, walking through here earlier today, I felt the buzz like when I was at Google fifteen years ago; you felt the energy of the company in the midst of the excitement. What has changed during that period, if anything? What is different in terms of either how NVIDIA functions, or how you think about the world, or the size of bets you can take?

Well, no company can change as fast as its stock price. That should be clear, right? So in a lot of ways, we haven't changed that much.

I think the thing to do is to take a step back and ask ourselves, what are we doing? I think that's really the big observation, realization, awakening for companies and countries: what's actually happening. As we were talking about earlier, from our industry's perspective, we reinvented computing, and it hadn't been reinvented for sixty years.

That's how big of a deal it is. We've driven down the marginal cost of computing, probably by a million x in the last ten years, to the point that we said, hey, let's just let the computer go exhaustively write the software. That's the big realization. And in a lot of ways, we're kind of saying the same thing about chip design.

We would love for the computer to go discover something about our chips that we otherwise couldn't have discovered ourselves, to explore our chips and optimize them in a way that we couldn't do ourselves, in the way that we would love for it to do for digital biology, or any other field of science. And so I think people are starting to realize: we reinvented computing, but what does that even mean? And at the same time, we created this thing called intelligence.

And what happened to computing? Well, we went from data centers. Data centers are multi-tenant; they store our files.

These new data centers we're creating are not data centers. They're not multi-tenant. They tend to be single tenant.

They're not storing any of our files. They're producing something. They're producing tokens.

And these tokens are reconstituted into what appears to be intelligence, isn't that right? And intelligence of all different kinds: it could be articulation of robotic motion, it could be sequences of amino acids, it could be, you know, chemical chains.

It could be all kinds of interesting things. So what are we really doing? We've created a new instrument, a new machinery, that in a lot of ways is the engine of generative AI.

You know, instead of "generative AI," it's an AI factory. It's a factory that generates AI, and we're doing that at an extremely large scale. And people are starting to realize, you know, maybe this is a new industry. It generates tokens, it generates numbers, but these numbers constitute something that is fairly valuable. And what industry wouldn't benefit from it?

Then you take a step back and you ask yourself again, what's going on at NVIDIA? On the one hand, we reinvented computing as we know it, and so there's a trillion dollars' worth of infrastructure that needs to be modernized.

That's just one layer of it. The bigger layer is that this instrument we're building is not just for data centers, which we are modernizing, but is being used for producing some new commodity. How big this new commodity industry can be is hard to say, but it's probably worth trillions.

And so that, I think, is kind of, if you take a step back: we don't build computers anymore. We build factories. And every country is going to need it. Every company is going to need it.

Give me an example of a company or industry that says, you know what, we don't need to produce intelligence. We've got plenty of it.

And so that's the big idea, I think, and that's kind of an abstracted industrial view. And someday people will realize that, in a lot of ways, the semiconductor industry wasn't about building chips; it was about building the foundational fabric for society. And then they'll all go, ah, I get it. This is a big deal, not just about chips.

How do you think about embodiment?

Well, the thing I am super excited about is, in a lot of ways, we're close to artificial general intelligence, but we're also close to artificial general robotics. Tokens are tokens. I mean, the question is, can you tokenize it? Of course, tokenizing things is not easy, as you guys know, but if you're able to tokenize things and align them with large language models and other modalities...

If I can generate a video that has Jensen reaching out to pick up the coffee cup, why can't I prompt a robot to generate the tokens that will pick it up, you know? And so, intuitively, you would think the problem statement is rather similar for the computer. And so I think that we're that close. That's incredibly exciting.
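
(Editor's aside: a minimal sketch of what "tokenizing" a robot action can mean, in the spirit of published robot-policy work rather than any NVIDIA system: discretize each continuous joint command into one of N bins so an autoregressive model can emit actions as ordinary tokens.)

```python
import numpy as np

N_BINS = 256           # assumed vocabulary size for action tokens
LOW, HIGH = -1.0, 1.0  # assumed normalized range for joint commands

def actions_to_tokens(actions: np.ndarray) -> np.ndarray:
    """Map continuous actions in [LOW, HIGH] to integer token ids."""
    clipped = np.clip(actions, LOW, HIGH)
    return np.round((clipped - LOW) / (HIGH - LOW) * (N_BINS - 1)).astype(int)

def tokens_to_actions(tokens: np.ndarray) -> np.ndarray:
    """Invert the mapping, up to quantization error."""
    return tokens / (N_BINS - 1) * (HIGH - LOW) + LOW

joints = np.array([0.12, -0.53, 0.98, 0.0])  # one timestep, four joints
ids = actions_to_tokens(joints)
print(ids, tokens_to_actions(ids))
```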

Now, the two brownfield robotic systems, brownfield meaning that you don't have to change the environment for them, are self-driving cars, with digital chauffeurs, and humanoid robots. Between the cars and humanoid robots, we can literally bring robotics to the world without changing the world, because we built the world for those two things. It's probably not an accident that Elon is focused on those two forms of robotics, because they are likely to have the largest potential scale. And so I think that's exciting, but the digital version of it is equally exciting. You know, we're talking about digital, or AI, employees.

There is no question we're going to have AI employees of all kinds. Our outlook will be some biological and some artificial intelligence, and we will prompt them in the same way. Isn't that right? Mostly I prompt my employees: you know, provide them context, ask them to perform a mission. They go and recruit other team members, they come back and do the work, going back and forth. How is that going to be any different with digital, AI employees of all kinds? We're going to have AI marketing people, AI chip designers, AI supply chain people, and I'm hoping that NVIDIA is, someday, biologically bigger, but also, from an artificial intelligence perspective, much, much bigger. So that's our future company.

If we came back and talked to you a year from now, what part of the company do you think would be most artificial

and intelligent? I'm hoping it's chip design, and that's right, because I should start where I can move the needle most, also where we can make the biggest impact. You know, it's such an insanely hard problem. I work with Sassine at Synopsys and Anirudh at Cadence, and I can totally imagine them having Synopsys chip designers that I can rent: they know something about a particular module of their tool, and they trained an AI to be incredibly good at it.

And we'll just hire a whole bunch of them whenever we need them, when we're in that phase of the chip design. You know, I might rent a million Synopsys engineers to come help me out, and then go rent a million Cadence engineers to help me out. And what an exciting future for them, that they have all these agents that sit on top of their tools platform, that use the tools platform and collaborate with other platforms.

And they'll do that. Christian will do that at SAP, and Bill will do that at ServiceNow. Now, people say that the SaaS platforms are going to be disrupted.

I actually think the opposite: they're sitting on a gold mine, and they're going to be flourishing with agents that are specialized in Salesforce, specialized in, I think they call it Lightning at Salesforce, and SAP has ABAP. Everybody's got their own language, isn't that right? And we've got CUDA, and we've got OpenUSD for Omniverse.

And who's going to create an AI agent that's awesome at OpenUSD? We are, you know, because nobody cares about it more than we do. And so I think in a lot of ways these platforms are going to be flourishing with agents, and we're going to introduce them to each other, and they're going to collaborate and solve problems.

You see a wealth of different people working in every domain in AI. What do you think is underappreciated, or what do you want more entrepreneurs or engineers or business people to go work on?

Well, first of all, I think what is misunderstood, and maybe underestimated, is the under-the-water activity, the under-the-surface activity, of groundbreaking science, computer science, science and engineering, that is being affected by AI and machine learning. I think you can't walk into a science department anywhere, a theoretical math department anywhere, where AI and machine learning

and the type of work that we're talking about today isn't going to transform tomorrow. If you take all of the engineers in the world, all of the scientists in the world, and you say that the way they're working today is an early indication of the future, because obviously it is, then you're going to see a tidal wave of generative AI, a tidal wave of machine learning, change everything that we do in some short period of time.

Now, remember, I saw the early indications of computer vision with the work of Alex and Ilya and Hinton in Toronto, and Yann LeCun, and of course Andrew Ng at Stanford. I saw the early indications of it, and we fortunately extrapolated from what was observed to be detecting cats into a profound change in computer science and computing altogether. That extrapolation was fortunate for us, and of course we were so excited by it, so inspired by it, that we changed everything about how we did things. But look how long it took: literally six years from observing that toy, AlexNet, which I think by today's standards we would consider a toy, to superhuman levels of capability in object recognition. Well, that was only a few years.

What is happening right now is a groundswell in all of the fields of science, with not one field of science left behind. I mean, just to be very clear: everything from quantum computing to quantum chemistry, every field of science is involved in the approaches that we're talking about. If we give ourselves, and they've been at it for a couple, two, three years, if we give ourselves another couple, two, three years, the world's going to change.

There's not going to be one paper, there's not going to be one breakthrough in science, one breakthrough in engineering, where generative AI isn't at the foundation of it. I'm fairly certain of that. And so, you know, there are a lot of questions about AI, and every so often I hear about whether this is a fad. You just have to go back to first principles and observe what is actually happening.

The computing stack, the way we do computing, has changed. The way you write software has changed. I mean, that is pretty core. Software is how humans encode knowledge.

This is how we encode our algorithms, and we encode it in a very different way now. That's going to affect everything; nothing else will ever be the same.

And so, right, I think I'm talking to the converted here, and we all see the same thing: all the startups that you guys work with, the scientists I work with, the engineers I work with, nothing will be left behind. We're going to take everybody with us.

I think one of the most exciting things, coming from the computer science world and looking at all these other fields of science, is that I can go to a robotics conference now, a materials science conference, a biotech conference, and go, oh, I understand this. You know, not at every level of the science, but in the driving of discovery, it is all the algorithms that are

general, and there are some universal, unifying concepts. Yeah, yeah. And

I think that's incredibly exciting when you see how effective it is

in every domain. Absolutely, yes. And I'm so excited that I'm using it myself every day. I don't know about you guys, but it's my tutor now.

I mean, I don't learn anything without first going to an AI. Why learn it the hard way? Just go directly to an AI, just go directly to ChatGPT.

You know, sometimes I do Perplexity, just depending on the formulation of my questions. And I just start learning from there, and then you can always branch off and go deeper if you like. But holy cow, it's just incredible.

And almost everything I know, I double check, even though I know it to be a fact, what I consider to be ground truth, where I'm the expert. I'll still go to an AI and check, double check.

It's so great. Almost everything I do, I involve it.

I think that's a great note to stop on.

Thanks so much.

Thank you. I really enjoyed it.

Nice to see you guys. Thanks, Jensen. Find us on Twitter at @NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen. That way you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.