
#309 ‒ AI in medicine: its potential to revolutionize disease prediction, diagnosis, and outcomes, causes for concern in medicine and beyond, and more | Isaac Kohane, M.D., Ph.D.

2024/7/15

The Peter Attia Drive

Chapters

Isaac Kohane, a leading physician-scientist, discusses the evolution of AI in medicine, from early iterations to the current third generation. He highlights how AI is transforming medical specialties, enabling early disease diagnosis, and paving the way for advancements like autonomous robotic surgery.
  • AI is revolutionizing image-based medical specialties such as radiology, pathology, and dermatology.
  • AI can recognize retinopathy as accurately as experts, demonstrating its potential for early disease diagnosis.
  • Large language models, like GPT-4, can provide diagnostic insights and assist with tasks like writing prior authorization letters.
  • The integration of AI into healthcare raises ethical concerns and regulatory challenges, especially regarding patient privacy and data security.

Shownotes Transcript

Hey everyone, welcome to The Drive Podcast. I'm your host, Peter Attia. This podcast, my website, and my weekly newsletter all focus on the goal of translating the science of longevity into something accessible for everyone. Our goal is to provide the best content in health and wellness, and we've established a great team of analysts to make this happen.

It is extremely important to me to provide all of this content without relying on paid ads. To do this, our work is made entirely possible by our members. And in return, we offer exclusive member-only content and benefits above and beyond what is available for free.

If you want to take your knowledge of this space to the next level, it's our goal to ensure members get back much more than the price of a subscription. If you want to learn more about the benefits of our premium membership, head over to peterattiamd.com/subscribe.

My guest this week is Isaac Kohane, who goes by Zach. Zach is a physician, scientist, and chair of the Department of Biomedical Informatics at Harvard Medical School, and he's an associate professor of medicine at the Brigham and Women's Hospital. Zach has published several hundred papers in the medical literature and authored the widely used books Microarrays for an Integrative Genomics and The AI Revolution in Medicine: GPT-4 and Beyond. He is also the editor-in-chief of the newly launched New England Journal of Medicine AI.

In this episode, we talk about the evolution of AI. It wasn't really clear to me until we did this interview that we're really in the third generation of AI, and Zach has been a part of both the second and obviously the current generation. We talk about AI's abilities to impact medicine today. In other words, where is it having an impact, and where will it have an impact in the near term? What seems very likely, and of course, we talk about what the future can hold. Obviously, here you're starting to think a little bit about the difference between science fiction and

potentially where we hope it could go. Very interesting podcast for me, really a topic I know so little about, and those tend to be some of my favorite episodes. So without further delay, please enjoy my conversation with Zach Kohane.

Well, Zach, thank you so much for joining me today. This is a topic that's highly relevant and one that I've wanted to talk about for some time, but wasn't sure who to speak with. And we eventually kind of found our way to you. So again, thanks for making the time and sharing your expertise. Give folks a little bit of a sense of your background. What was your path through medical school and training? It was not a very typical path. No. So what happened was this:

I grew up in Switzerland. Nobody in my family was a doctor. I come to the United States, decide to major in biology, and then I get nerd-sniped by computing back in the late 70s. And so I minor in computer science, but I still complete my degree in biology, and I go to medical school. And then in the middle of medical school's first year, I realize, holy smokes.

This is not what I expected. It's a noble profession, but it's an art, not a science. And I thought I was going into science. And so I bail out for a while to do a PhD in computer science. And this is during the early 1980s now. And it's the heyday of

AI. It's actually a second heyday; we're going through the third heyday now. And it was a time of great promise. And with the retrospectoscope, it's very clear that it was not going to be successful. There was a lot of overpromising. There is today, too. But unlike today, we had not released it to the public. It was not actually working in the way that we thought it was going to work. And it was

a very interesting period. And my thesis advisor, Peter Szolovits, a professor at MIT, said, "Zach, you should finish your clinical training, because I'm not getting a lot of respect from clinicians. And so to bring rational decision-making to the clinic, you really want to finish your clinical training." And so I finished medical school, did a residency in pediatrics and then pediatric endocrinology,

which was actually extremely enjoyable. But when I was done, I restarted my research in computing, started a lab at Children's Hospital in Boston, and then a center of biomedical informatics at the medical school. Like in almost every other endeavor, getting money gets attention from the powers that be. And so I was getting a lot of grants. And so they asked me to start the center and then

eventually a new department of biomedical informatics, which I'm the chair of. We now have 16 professors and assistant professors of biomedical informatics. Then

I had been involved in a lot of machine learning projects, but like everybody else, I was taken by surprise, except perhaps a little bit earlier than most, by large language models. I got a call from Peter Lee in October '22. And actually, I didn't get a call. It was an email right out of a Michael Crichton novel. It said, "Zach, if you'll answer the phone, I can't tell you what it's about, but it'd be well worth your while."

And so I get a call from Peter Lee. I knew him from before; he was a professor of computer science at CMU and also department chair there. And then he went to DARPA, and then he went to Microsoft. And he tells me about GPT-4. And this was before any of us had heard about ChatGPT, which was initially GPT-3.5. He tells me about GPT-4, and he gets me early access to it when no one else knows it exists. Only a few people do.

And I start trying it against hard cases. I just remembered one from my training: I get called down to the nursery. There's a child with a small phallus and a hole at the base of the phallus, and they can't palpate testicles, and they want to know what to do because I'm a pediatric endocrinologist.

And so I asked GPT-4, "What would you do? What are you thinking about?" And it runs me through the whole workup of these very rare cases of ambiguous genitalia. In this case, it was congenital adrenal hyperplasia.

where the making of excess androgens during pregnancy, and then subsequently after birth, causes the clitoris to swell and form the glans of the penis, of the phallus, and the labia minora to fuse to form the shaft of what looks like a penis. But there are no testicles; there are ovaries. And so there's a whole endocrine workup,

with genetic tests, hormonal tests, ultrasound, and it does it all. And it blows my mind. It really blows my mind because very few of us in computer science really thought that these large language models would scale up the way they do.

It was just not expected. And I was talking to Bill Gates about this after Peter Lee had introduced me to it. And he told me that a lot of his fanciest computer scientists at Microsoft Research did not expect this, but the line engineers at Microsoft were just watching the scale-up, you know, from GPT-1 onward.

And they just saw it was going to keep on scaling up with the size of the data and with the size of the model. And they said, yeah, of course, it's going to achieve this kind of expertise. But the rest of us, I think because we value our own intellects so much, we couldn't imagine how we would get that kind of conversational expertise just by scaling up the model and the data set.

Well, Zach, that's actually kind of a perfect introduction to how I want to think about this today, which is to say, look, there's nobody listening to us who hasn't heard the term AI, and yet virtually no one really understands what is going on. So if we want to talk about how AI can change medicine, I think we have to first invest in

some serious bandwidth in understanding AI. Now, you alluded to the fact that when you were doing your PhD in the early 80s, you were in the second generation of AI, which leads me to assume that the first generation was shortly following World War II, and that's probably why someone by the name of Alan Turing has his name on something called the Turing Test. So maybe you can talk us through what Alan Turing posited, what the Turing Test was,

and proposed to be, and really what gen one AI was. We don't have to spend too much time on it, but clearly it didn't work. But let's maybe talk a little bit about the postulates around it and what it was. After World War II, we had computing machines. And anybody who was a serious computer scientist could see that you could have

these processes that could generate other processes. And you could see how these processes could take inputs and become more sophisticated. And as a result, shortly after World War II, we actually had artificial neural networks, the perceptron, which was modeled, roughly speaking, on the ideas of a neuron that could take

inputs from the environment and then have certain expectations. And if you updated the neuron as to what was going on, it would update the weights going into that artificial neuron. And so going back to Turing, he just came up with a test that said essentially

If a computational entity could maintain, essentially, its side of the conversation without revealing that it was a computer,

and that others would mistake it for a human, then for all intents and purposes, that would be intelligent behavior. And there's been all sorts of additional constraints put on it. And one of the hallmarks of AI, frankly, is that it keeps on moving the goalposts of what we consider to be

intelligent behavior. If you had told someone in the 60s that the world chess masters were going to be beaten by a computer program, they'd say, well, that's AI. Really, that's AI. And then when Kasparov was beaten by Deep Blue, the IBM machine,

People said, "Well, it's just doing search very well. It's searching through all the possible moves in the future. It also knows all the grandmaster moves. It has a huge encyclopedic store of all the different grandmaster moves." This is not really intelligent behavior.

If you told people it could recognize human faces and find your grandmother in a picture, on any picture on the internet, they'd say, "Well, that's intelligence." And of course, when we did it, no, that was not intelligent. And then when we said it could write a rap poem,

about Peter Attia based on your webpage and it did that, well, that would be intelligent. That would be creative. But then if you said it's doing it based on having created a computational model based on all the text ever generated by human beings, as much as we can gather, which is one to six terabytes of data,

And this computational model basically is predicting what the next word it's going to say is; not just the next word, but of the millions of words it could be, what are the probabilities of that next word? That is what's generating that rap. There are people who are arguing that's not real

intelligence. So the goalposts around the Turing test keep getting moved. So I just have to say that I no longer find that an interesting topic, because what matters is what it's actually doing. And whether you want to call it intelligent or not,

That's up to you. It's like discussing whether a dog is intelligent. Is a baby intelligent before it can recognize the constancy of objects? Initially, for babies, if you hide something from them, it's gone. And when it comes back, it's a surprise. But at some point early on, they learn there's constancy of objects even when they don't see them. There's this spectrum of intelligent behavior. And I'd just like to remind myself that

There's a very simple computational model predicting the next word called a Markov model. And several years ago, people were studying songbirds, and they were able to predict the full song, note by note, of the songbird just using a very simple Markov model. So from that perspective, I know we think that we're all very smart, but the fact that you and I, without thinking too hard about it, can come up with fluid speech suggests something similar may be going on in us.

Okay, so the model is now a trillion parameters. It's not a simple Markov model, but it's still a model. And perhaps later we'll talk about how this plays into, unfortunately, the late Kahneman's notions of thinking fast and thinking slow, and his notion of system one, which is this sort of pattern recognition, which is very much similar to what I think we're seeing here, and system two, which is the more deliberate

and much more conscious kind of thinking that we pride ourselves on. But a lot of what we do is this sort of reflexive, very fast pattern recognition.
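
To make the Markov-model idea concrete, here is a minimal sketch of a first-order, next-word Markov model. The tiny corpus and the word-level framing are illustrative assumptions, not anything from the songbird studies mentioned above.

```python
import random
from collections import defaultdict, Counter

# Train a first-order Markov model: count which word follows which.
corpus = "the quick brown fox jumps over the lazy dog and the quick dog sleeps".split()
follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def next_word(word):
    """Sample the next word in proportion to how often it followed this one."""
    counts = follows[word]
    if not counts:                        # dead end: no observed continuation
        return None
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate text one word at a time, each word depending only on the previous one.
current, output = "the", ["the"]
for _ in range(8):
    current = next_word(current)
    if current is None:
        break
    output.append(current)
print(" ".join(output))
```

A large language model is doing something analogous in spirit, except the counts are replaced by a model with on the order of a trillion learned parameters and the context is far longer than one word.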

So if we go back to World War II, that's to your point where we saw basically rule-based computing come of age. And anybody who's gone back and watched movies about the Manhattan Project or the decoding of all the sorts of things that took place, Enigma, for example, again, that's straight rules-based computational power. And obviously that kind of computing

can only go so far. But it seems that there was a long hiatus before we went from there to maybe what some have called context-based computation, what your Siri or Alexa does, which is a step quite beyond that.

And then, of course, you would go from there to what you've already talked about, Deep Blue or Watson, where you have computers that are probably going even one step further. And then, of course, where we are now, which is...

GPT-4. I want to talk a little bit about the computational side of that, but more what I want to get at is this idea that there seems to be a very nonlinear pace at which this is happening. And I hear your point. I'd never thought of it that way. I hear your point about the goalpost moving, but I think your instinct around

measuring the right thing is also relevant, which is: let's focus less on the fact that we're never quite hitting the asymptote definitionally. Let's look at the actual output, and it is staggeringly different. So what was it that was taking place during the period of your PhD, what you're calling wave two of AI? What was the objective and where was the failure? So in the first era, the objective was that you wrote

computer programs in assembly language or in languages like Fortran. And there was a limit to what you could do. You had to be a real programmer to do something in that mode. In wave two, in the 1970s, we came up with these rule-based systems where we stated rules in what looked like English: if there is a patient who has a fever,

and you get an isolate from the lab, and the bacteria in the isolate are gram-positive, then you might have a streptococcal infection with a probability of so-and-so. And these rule-based systems, which you're now programming at the level of human knowledge, not in computer code, the problem with that was severalfold. A, you're going to generate tens of thousands of these rules, and these rules would interact in ways that you could not anticipate.

And we did not know enough, and we could not pull out of human beings the right probabilities. And what is the right probability if you have a fever and you don't see anything on the blood test? What else is going on? And there's a large set of possibilities, and getting all those rules out of human beings ended up being extremely expensive, and the results were not stable. And for that reason, because we didn't have much data online,

we could not go to the next step, which is to have data to actually drive these models. What were the data sources then? Books, textbooks, and journals as interpreted by human experts. That's why some of these were called expert systems, because they were derived from introspection by experts who would then come up with the rules and the probabilities.

Take some of the early work, for example: there was a program called MYCIN, written by Ted Shortliffe out of Stanford, who developed an antibiotic advisor that was a set of rules based on what he and his colleagues

sussed out from the different infectious disease textbooks and infectious disease experts. And it stayed up to date only as long as they kept on looking at the literature, adding rules, fine-tuning it. If there was an interaction between two rules that was not desirable, then you had to adjust that. Very labor-intensive. And then if there was a new thing, you'd have to

add some new rules. If AIDS happened, you'd have to say, "Oh, there's this new

pathogen, I have to make a bunch of new rules. The probability is going to be different if you're an IV drug user or if you're a male homosexual." And so it was very, very hard to keep up. And in fact, people didn't. What was the language that it was programmed in? Was this Fortran? No, no. These were so-called rule-based systems. And so, as for the languages: for example, the language for the system MYCIN was called EMYCIN, essential MYCIN. So these looked like English. Super labor-intensive.

Super labor-intensive, and there's no way you could keep it up to date. And at that time, there were no electronic medical records. They were all paper records, so not informed by what was going on in the clinic.
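
As a rough illustration of what those English-like rules looked like in spirit, here is a toy, MYCIN-style rule engine. The rules, findings, and certainty numbers are invented for illustration; they are not from Shortliffe's actual system.

```python
# Toy MYCIN-style rules: IF all the conditions match, THEN suggest a conclusion
# with a certainty factor. Real expert systems had thousands of such rules plus
# machinery for chaining them, which is exactly where they became unmanageable.
RULES = [
    {"if": {"fever": True, "gram_stain": "positive", "morphology": "cocci_in_chains"},
     "then": ("streptococcal_infection", 0.7)},
    {"if": {"fever": True, "gram_stain": "negative", "morphology": "rods"},
     "then": ("gram_negative_rod_infection", 0.6)},
]

def evaluate(findings):
    """Fire every rule whose conditions all match the patient's findings."""
    conclusions = []
    for rule in RULES:
        if all(findings.get(key) == value for key, value in rule["if"].items()):
            conclusions.append(rule["then"])
    return conclusions

patient = {"fever": True, "gram_stain": "positive", "morphology": "cocci_in_chains"}
print(evaluate(patient))   # -> [('streptococcal_infection', 0.7)]
```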

Three revolutions had to happen in order for us to have what we have today. And that's why I think we had such a quantum jump recently. Before we get to that, that's the exciting question, but I just want to go back to the Gen 2. Were there other industries that were having more success than medicine? Were there applications in the military? Were there applications elsewhere in government where they got a little closer to utility? Yes. So there's a company which...

Back in the 1970s, there was a whole bunch of computer companies around what we called Route 128 in Boston. And these were companies that were famous back then, like Wang, like Digital Equipment Corporation. And it's a very sad story for Boston, because in the end

Silicon Valley, not Boston, got its pearl of computer companies around it. And one of the companies, Digital Equipment Corporation, built a program called R1. And R1 was an expert in configuring the minicomputers that you ordered. So you wanted some capabilities and it would actually configure

all the individual components, the processors, the disks, and it would know about all the exceptions, what cabling, what memory configuration; all that was done, and it basically replaced several individuals who had that very, very rare knowledge of how to configure these systems. It was also used in several government logistics efforts. But even those efforts, although they were successful and used commercially,

were limited because it turns out human beings, once you got to about three, four, five, six thousand rules, no single human being could keep track of all the ways these rules could work. We used to call this the complexity barrier, that these rules would interact in unexpected ways and you'd get incorrect answers, things that were not commonsensical.

because you had actually not captured everything about the real world. And so it was very narrowly focused. And if the expertise was a little bit outside the area of focus, if let's say it was an infectious disease program, and there was a little bit of influence from the cardiac status of the patient, and you had not accurately modeled that, its performance would degrade rapidly. Similarly, if

there was at Digital Equipment a new model that had a completely different part that had not been included, and there were some dependencies that were not modeled, it would degrade in performance. So these systems were very brittle and did not show common sense. They had expert behavior, but it was very narrowly scoped. There were applications in medicine back then that survived until today. For example,

Already back then, we had these systems doing interpretation of ECGs pretty competently, at least as a first pass until they would be reviewed by an expert cardiologist. There's also a program that interpreted what's called serum protein electrophoresis, where you look at proteins separated out by an electric gradient to make a diagnosis, let's say, of myeloma or other protein disorders. And those

also were deployed clinically, but they only worked very much in narrow areas. They were by no stretch of the imagination, general purpose reasoning machines. So let's get back to the three things. There are three things that have taken the relative failures of first and second attempts at AI and got us to where we are today. I can guess what they are, but let's just have you walk us through them.

The first one was just lots of data and we needed to have a lot of online data to be able to

develop models of interesting performance and quality. So ImageNet was one of the first such data sets, collections of millions of images with annotations, importantly. This has a cat in it. This has a dog in it. This is a blueberry muffin. This has a human in it. And having that was absolutely essential to

allow us to train the first very successful neural network models. And so having those large data sets was extremely important. The other thing, and there's an equivalent in medicine, is that we did not have a lot of textual information about medicine until PubMed went online. So for all the medical literature, at least we have an abstract of it in PubMed,

Plus, for a subset of it that's open access because the government has paid for it through grants, there's something called PubMed Central, which has the full text. So all of a sudden that has opened up over the last 10 years. And then electronic health records: after Obama signed the HITECH Act, electronic health records, which also ruined the lives of many doctors, happened to generate a lot of text for use in these systems.

So that's large amounts of data being generated online. The second was the neural network models themselves. So the perceptron that I mentioned that was developed not too long after World War II was shown by one of the pioneers of AI, Marvin Minsky,

to have fundamental limitations in that it could not do certain mathematical functions, like what's called an exclusive-or gate. Because of that, people said these neural networks are not going to scale. But there were a few true believers who kept on pushing and making more and more advanced architectures, those multi-level, deep neural networks. So instead of having one neural network, we layer on top of

one neural network, another one, another one, and another one, so that the output of the first layer gets propagated up to the second layer of neurons, to the third layer, and fourth layer, and so on. And I'm sorry, was this a theoretical mathematical breakthrough or a technological breakthrough?

Both. It was both, because the insight that we could actually come up with all the mathematical functions that we needed to, that we could simulate them with these multi-level networks, was a theoretical insight. But we would never have made anything out of it if not for sweaty teenagers, mostly teenage boys, playing video games. In order to have first-person shooters

capable of rendering pictures of aliens or monsters in high resolution, 24-bit color, at 60 frames per second, we needed to have processors, very parallel processors,

that would allow you to do the linear algebra that allowed you to calculate what was going to be the intensity of color on every dot of the screen at 60 frames per second. And that's literally just because of the matrix multiplication math that's required to do this. You have N by M matrices that are so big and you're crossing and dotting huge matrices. Huge matrices. And it turns out that's something that can be run in parallel

So you want to have multiple parallel processors capable of rendering those images, again at 60 frames per second. So basically millions of pixels on your screen being rendered at 24- or 32-bit color. And in order to do that, you need to have that linear algebra that you just referred to being run in parallel. And so these parallel processors called graphical processing units, GPUs, were developed.

And the GPUs were developed by several companies, and some of them stayed in business, some didn't, but they were absolutely essential to the success of video games. Now, it then occurred to many smart mathematicians and computer scientists that the same linear algebra that was used to drive that computation for images could also be used to calculate the weights of the edges between the neurons in a neural network.

So the mathematics of updating the weights in response to stimuli, let's say, of a neural network, updating of those weights can be done all in linear algebra. And if you have this processor, so a typical computer has a central processing unit. So that's one processing unit. A GPU

has tens of thousands of processors that do this one very simple thing: linear algebra.
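
Here is a minimal sketch of both points at once: that a second layer of neurons fixes the perceptron's exclusive-or limitation, and that the whole computation is nothing but matrix multiplications plus a simple nonlinearity, exactly the kind of linear algebra GPUs were built for. The weights below are hand-picked for illustration rather than learned.

```python
import numpy as np

# All four XOR input pairs, one row per example.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

step = lambda z: (z > 0).astype(int)   # simple threshold nonlinearity

# Layer 1: two hidden "neurons" computing OR and AND of the inputs.
W1 = np.array([[1, 1],
               [1, 1]])
b1 = np.array([-0.5, -1.5])

# Layer 2: one output neuron computing OR AND (NOT AND), i.e. XOR.
W2 = np.array([[1], [-2]])
b2 = np.array([-0.5])

H = step(X @ W1 + b1)   # hidden layer: just a matrix multiply plus a threshold
Y = step(H @ W2 + b2)   # output layer: another matrix multiply plus a threshold
print(Y.ravel())        # -> [0 1 1 0], which no single-layer perceptron can produce
```

Training, too, comes down to repeated matrix operations on the weights, which is why the same chips that render game frames turned out to be the right hardware for neural networks.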

And so this parallelism that typically only supercomputers would have, now on your simple PC because you needed to show the graphics at 60 frames per second, all of a sudden gave us these commodity chips that allowed us to calculate these multi-level neural networks. So that theoretical breakthrough was the second part, but it would not have happened without the actual...

implementation capability that we had with the GPUs. And so NVIDIA would be the most successful example of this, presumably? It was not the first, but it's definitely the most successful example. And there's a variety of reasons why it was successful and created an ecosystem of implementers who built their neural network deep learning systems on top of the NVIDIA architecture.

Would you go back and look at the calendar and say this was the year or quarter when there was escape velocity achieved there? Yeah. So it's probably around 2012 when there was an ongoing contest every year saying who has the best image recognition software. And these deep neural networks running off GPUs were able to outperform

significantly all their other competitors in image recognition in 2012. That's very clearly when everybody just woke up and said, "Whoa, we knew about neural networks. We didn't realize that these convolutional neural networks were going to be this effective." And it seems that the only thing that's going to stop us is computational speed and the size of our data sets.

That moved things along very fast in the imaging space, with consequences in medicine very soon after. It was only six years later that we saw journal articles about

recognition of retinopathy, the diseases affecting the retina, the back of your eye, in diabetes. And a paper coming out of all places from Google saying we can recognize different stages of retinopathy based on the images of the back of the eye. And that also was a wake-up call because, yes, part of the goalpost moving. It was great that we could recognize cats and dogs in web pages. But now, all of a sudden, this thing that we thought

was specialized human expertise could be done by that same stack of software, just if you gave it enough cases of these retinopathies, it would actually work well. And furthermore, what was wild was that there's something called transfer learning, where you tune up these networks, get them to recognize cats and dogs,

And in the process of recognizing cats and dogs, it learns how to recognize little circles and lines and fuzziness and so on. You did a lot better by training up the neural network first on the entire set of images and then on the retinas than if you just went straight to training only on the retinas.

And so that transfer learning was impressive. And then there was the other thing that, as doctors, was impressive to many of us. I was actually asked to write an editorial for the Journal of the American Medical Association in 2018 when a Google article was published. What was impressive to us was this: what was the main role of doctors in that publication?

It was just twofold. One was to just label the images that were used for training: this is retinopathy, this is not retinopathy. And then to serve as judges of its performance. And that was it. The rest of it was computer scientists working with GPUs and images, tuning it, and that was it. Didn't look anything like medical school, and you were getting expert-level recognition of retinopathy. That was a wake-up call.

You've alluded to the 2017 paper by Google, Attention is All That is Needed, I think is the title of the paper. Attention is All You Need. That's not what I'm referring to. I'm referring to a 2018 paper in JAMA. Ah, I'm sorry. You're talking about the great paper, Attention is All You Need. That was about the invention of the transformer, which is a specific type of neural network architecture.

I was talking about vanilla, fairly vanilla convolutional neural networks, the same ones that can detect dogs and cats. It was a big medical application, retinopathy, 2018. Except for computer scientists, no one noticed the "Attention Is All You Need" paper. And Google had this wonderful paper that said, you know, if we recognize not just

text that collocates together. Previously, so we're going to get away from images for a second, there was this notion that I can recognize a lot of similarities in text if I see which words occur together. I can

infer the meaning of a word by the company it keeps. And so if I see this word and it has around it kingdom, crown, throne, it's about a king. And similarly for queen and so on. That kind of association is how we created what were called embedding vectors, which, just in plain English, means a string of numbers that says,

for any given word, what's the probability? How often do these other words co-occur with it? And just using those embeddings, those vectors, those lists of numbers that describe the co-occurrence of other words, we were able to do a lot of what's called natural language processing, which is looking at text and saying, "This is what it means. This is what's going on." But then in the 2017 paper,

they actually took the next step, which was the insight that where exactly the thing you're focusing on sits in the sentence, what comes before and after, the actual ordering, matters, not just the simple co-occurrence. Knowing what position that word was in the sentence actually made a difference. That paper showed the performance went way up in terms of recognition.
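
Here is a toy version of the co-occurrence embeddings described above: each word's vector is simply how often other words appear near it, and "king" and "queen" end up closer to each other than to "dog". The sentences and window size are made up for illustration; real systems are far more sophisticated, and the transformer's positional insight is not captured here.

```python
import numpy as np

sentences = [
    "the king wore the crown in the throne room",
    "the queen wore the crown beside the throne",
    "the dog chased the ball in the park",
]
tokens = [s.split() for s in sentences]
vocab = sorted({w for sent in tokens for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# Count how often each pair of words appears within a small window of each other.
window = 3
counts = np.zeros((len(vocab), len(vocab)))
for sent in tokens:
    for i, w in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if i != j:
                counts[idx[w], idx[sent[j]]] += 1

def cosine(a, b):
    """Similarity between two embedding vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

# "king" and "queen" keep similar company (crown, throne), so their vectors align more.
print(cosine(counts[idx["king"]], counts[idx["queen"]]))
print(cosine(counts[idx["king"]], counts[idx["dog"]]))
```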

And that transformer architecture that came from that paper made it clear to a number of researchers, not me, that if you scaled that transformer architecture up to a larger model, so that the position dependence and these vectors were learned across many, many more texts, the whole internet, you could train it to do various tasks. This is the transformer model, which is called the pre-trained model. So

I apologize, I find it very boring to talk about because, unless I'm working with fellow nerds, this transformer, this pre-trained model, you can think of it as the equivalent of an equation with multiple variables.

In the case of GPT-4, we think it's about a trillion variables. It's like an equation where you have a number in front of each variable, a list of coefficients that's about a trillion long. And this model can be used for various purposes. One is the chatbot purpose, which is: given this sequence of words, what is the next word that's going to be said? Now, that's not the only thing you could use this model for.

But that turns out to have been the breakthrough application of the transformer model for text.
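
To see the "given this sequence of words, what is the next word" use of a pre-trained model in practice, here is a small sketch that assumes the Hugging Face transformers library and uses GPT-2, a small, openly available predecessor, as a stand-in for a model like GPT-4.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The patient presented with fever and a productive"
inputs = tok(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits            # (1, sequence_length, vocabulary_size)

# Turn the scores for the final position into a probability over every possible next token.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tok.decode(int(token_id))!r:>12}  {p.item():.3f}")
```

A chatbot is, at its core, this step repeated: sample a next word from that distribution, append it, and ask again.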

Just to round out what you said earlier, Zach, would you say that is the third thing that enabled this third wave of AI, the transformer? It was not what I was thinking about. For me, the real breakthrough in data-driven AI I put around the 2012 era. This is yet another one. If you had talked to me in 2018, I would have already told you we were in a new heyday, and everybody would have agreed. There was a lot of excitement about AI just because of the image recognition capabilities.

This was an additional capability that's beyond what many of us were expecting just from the scale up of the neural network. The three, just to make sure I'm consistent, was large data sets, multi-level neural networks, aka deep neural networks, and the GPU infrastructure. That brought us well through the 2012 to 2018 phase.

The 2017 blip that became what we now know to be this whole large language model transformer architecture, that development was unanticipated by many of us, but it was already on the heels of an ascendant AI era. There was already billions of dollars of frothy investment in frothy companies, some of which did well and many of which did not do so well.

The transformer architecture has revolutionized many parts of the human condition, I think. But it was already part of, I think, the third wave. There's something about GPT where I feel like for most people, by the time GPT-3 came out, or certainly by 3.5, this was now outside of the purview of computer scientists and people in the industry who were investing in it.

This was now becoming as much a verb as Google was in probably the early 2000s. There were clearly people who knew what Google was in 96 and 97, but by 2000, everybody knew what Google was, right? Something about GPT 3.5 or 4 was kind of the tipping point where I don't think you can not know what it is at this point.

I don't know if that's relevant to the story, meaning does that sort of speak to what trajectory we're on now? The other thing that I think, Zach, has become so audible in the past year is the elevation in the discussion of how to regulate this thing.

which seems like something you would only argue about if you felt that there were a chance for this thing to be harmful to us in some way that we do not yet perceive. So what can you say about that? Because that's obviously a nod to the technical evolution of AI, that very serious people are having discussions about

pauses, moratoriums, regulations. There was no public discussion of that in the 80s, which may have spoken to the fact that in the 80s, it just wasn't powerful enough to pose a threat. So can you maybe give us a sense of what people are debating now? What is the smart, sensible, reasonable argument on both sides of this? And let's just have you decide what the two sides are. I'm assuming one side says,

pedal to the metal, let's go forth on development, don't regulate this, let's just go nuts. The other side is, no, we need to have some brakes and barriers. Not quite that. So you're absolutely right that chatbots have now become a commonly used noun, and that probably happened with the emergence of GPT-3.5, and that appeared around, I think, December of 2022. But now it's

Yes, because out of the box, that pre-trained model I told you about could tell you things like, how do I kill myself? How do I manufacture a toxin? It could allow you to do a lot of harmful things. So there was that level of concern. We can talk about what's been done about those first order efforts.

Then there's been a group of scientists who interestingly went from saying, "We'll never actually get general intelligence from this particular architecture," to saying, "Oh my gosh, this technology is able to inference in a way that I had not anticipated."

And now I'm so worried that either because it's malevolent or just because it's trying to do something that has bad side effects for humanity, it presents an existential threat. Now, on the other side, I don't believe there is anybody saying, let's just go heads down

and let's see how fast we can get to artificial general intelligence. Or if they do think that, they're not saying it openly. Can you just define AGI, Zach? I think we've all heard the term, but is there a quasi-accepted definition? First of all, there's not, and I hate myself for even bringing it up because it starts- I was going to bring it up before you, anyway, it was inevitable. That was an unfortunate slip because artificial general intelligence-

means a lot of things to a lot of people. And I slipped because I think it's, again, a moving target and it's very much in the eye of the beholder. There's a guy called Eliezer Yudkowsky, one of the so-called doomers. And he comes up with great scenarios of how a sufficiently

intelligent system could figure out how to persuade human beings to do bad things, or take control of our infrastructure to bring down our communications or knock airplanes out of the sky. And we can talk about whether that's relevant or not. And on the other side, we have, let's say, OpenAI and Google.

But what was fascinating to me is that OpenAI, which, working with Microsoft, generated GPT-4, were not saying publicly at all, "Let's not regulate it." In fact, they were saying, "Please regulate me." Sam Altman went on a world tour where he said, "We should be very concerned about this. We should regulate AI." And he was before Congress saying, "We should regulate AI."

And so I feel a bit churlish about saying this because Sam was kind enough to write a foreword to the book I wrote with Peter Lee and Carey Goldberg on GPT-4 and the revolution

in medicine. But I was wondering why were they insisting so much on regulation? And there's two interpretations. One is just a sincere, and it could very well be that, sincere wish that it be regulated so we check these machines, these programs to make sure they don't actually do anything harmful.

The other possibility, unfortunately, is something called regulatory lock-in, which means I'm a very well-funded company and I'm going to create regulations with Congress about what is required, which boxes do you have to check in order to be allowed to run. If you're a small company, you're not going to have a bevy of lawyers with big checks to comply with all the regulatory requirements.

And so, I think Sam is, I don't know him personally, I imagine he's a very well-motivated individual. But whether it's for the reason of regulatory lock-in or for genuine concern, there have not been any statements of, let's go heads down. They do say, let's be regulated. Now, having said that...

Before you even go with the Doomer scenario, I think there is someone just as potentially evil that we have to worry about, another intelligence, and that's human beings. And how do human beings use these great tools? So just as we know for a fact that some of the earliest users of GPT-4 were high schoolers trying to do their homework and solve hard puzzles given to them,

we also know that various parties have been using the amazing text generation and interactive capabilities of these programs to spread misinformation through chatbots.

And there's a variety of malign things that could be done by third parties using these engines. And I think that's, for me, the clear and present danger today, which is how do individuals decide to use these general purpose programs? If you look at what's going on in the Russia-Ukraine war, I see more and more autonomous vehicles flying and carrying

weaponry and dropping bombs. And we see in our own military a lot more autonomous drones with greater and greater autonomous capabilities. Those are purpose-built to actually do dangerous things. And a lot of science fiction fans will refer to Skynet from the Terminator series. But we're literally building it right now.

In the Terminator, Zach, they kind of refer to a moment, I don't remember the year, like 1997 or something. And I think they talk about how Skynet became, quote, self-aware. And somehow when it became self-aware, it just decided to destroy humans. Is self-aware movie speak for AGI? Like, what do you think self-aware means in more technical terms? Or is it super intelligence? There's so many terms here and I don't know what they mean.

Okay, so self-awareness means a process by which the intelligent entity can look back, look inwardly at its own processes and recognize itself. Now, that's very hand-wavy, but Douglas Hofstadter has probably done the most

thoughtful and clear writing about what self-awareness means. I will not do it justice, but if you really want to read a wonderful book that spends a whole book trying to explain it, it's called I Am a Strange Loop.

And in I Am a Strange Loop, he explains how, if you have enough processing power and you can represent the processes that constitute you, essentially models of your own processes, in other words, you're able to look at what you're thinking, you may have some sense of self-awareness. There's a bit of an act of faith in that. Many AI researchers don't buy that definition. There's a difference between self-awareness

and actual raw intelligence. You can imagine a super powerful computer that would predict everything that was going to happen around you and was not aware of itself as an entity. The fact remains, you do need to have a minimal level of intelligence to be able to be self-aware. So a fly may not be self-aware. It just goes and finds good smelling poop and does whatever it's programmed to do on that.

But dogs have some self-awareness and awareness of their surroundings. They don't have perfect self-awareness. They don't recognize themselves in the mirror, and they'll bark at it. Some birds will recognize themselves in mirrors.

We recognize ourselves in many, many ways. So there is some correlation between intelligence and self-awareness, but these are not necessarily dependent functions. So what I'm hearing you say is, look, there are clear and present dangers associated with current best AI tools in that humans can use them for nefarious purposes. It seems to me that the most scalable example of that is still relatively small in that

It's not an existential threat to our species at large, correct? Well, yes and no. If I was trying to do gain-of-function research with a virus... Good point. I could use these tools very effectively. Yeah, that's a great example. There's this disconnect, and perhaps you understand the disconnect better than I do. There are those real existential threats, and then there's this more fuzzy thing that...

we're worried about, correctly: bias, incorrect decisions, hallucinations. We can get into what those might be. And our use of it in the everyday human condition. And there are concerns about mistakes that might be made. There are concerns about displacement of workers, that just as automation displaced a whole other series of workers, now we have something that works in the knowledge industry

automatically: just as we're replacing a lot of copy editors and illustrators with AI, where's that going to stop? It's now much more in the white collar space. And so there is concern around the harm that could be generated there. In the medical domain, are we getting good advice? Are we getting bad advice? Whose interests are being optimized in these various decision procedures? That's another level that doesn't quite rise at all to the level of extinction events,

but a lot of policymakers and the public seem to be concerned about it. Those are fair points. Let's now talk about that state of play within medicine. I liked your first example, almost one we take for granted, but you go and get an EKG at the doctor's office, this was true 30 years ago, just as it is today, you get a pretty darn good readout. It's going to tell you if you have an AV block, it's going to tell you if you have a bundle branch block. Put it this way, they read EKGs better than I do.

That's not saying much anymore, but they do. What was the next area where we could see this? It seems to me that radiology, which is, of course, image- and pixel-based medicine, would be the most logical next place to see AI do good work. What is the current state of AI in radiology?

In all the visual-based medical specialties, it looks like AI can do as well as many experts. So what are the image appreciation subspecialties? Pathology, when you're looking at slices of tissue under the microscope. Radiology, where you're looking at x-rays or MRIs. Dermatology, where you're looking at pictures of the skin.

So in all those visual based specialties, the computer programs are doing by themselves as well as many experts. But they're not replacing the doctors because that image recognition process is only part of their job. Now, to be fair, to your point in radiology, we already today before AI,

in many hospitals, would send x-rays by satellite to Australia or India, where they would be read overnight by a doctor or a specially trained person who had never seen the patient. And then the reports filed back to us because they're 12 hours away from us. Overnight, we'd have the results of those reads. And that same kind of function can be done automatically by AI. So that's replacing a certain kind of

Let me dig into that a little bit more. So let's start with a relatively simple type of image such as a mammogram or a chest X-ray. So it's a single image. I mean, I guess with a chest X-ray, you'll get an AP and a lateral, but let's just say you're looking at an AP or a single mammogram.

A radiologist will look at that. A radiologist will have clinical information as well. So they will know why this patient presented in the case of the chest x-ray, for example, in the ER in the middle of the night. Were they short of breath? Do they have a fever? Do they have a previous x-ray? They can compare it to all sorts of information.

Are we not at the point now where all of that information could be given to the AI to enhance the pretest probability of whatever diagnosis it comes to?

I am delighted when you say pretest probability. Don't talk dirty around me. Love my Bayes theorem over here. Yep. So you just said a lot, because what you just said actually went beyond what the straight convolutional neural networks would do. They actually could not replace radiologists because they could not do a good job of taking into account the previous history of the patient. And it required the emergence of transformers,

where you can have multi-modality of both the image and the text. Now, they're going to do better than many, many radiologists today.
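
For readers who want the pretest-probability idea spelled out, here is a minimal sketch of the Bayes update being alluded to. The sensitivity, specificity, and pretest numbers are invented for illustration.

```python
def posttest_probability(pretest, sensitivity, specificity, positive=True):
    """Return P(disease | test result) given a pretest probability and test characteristics."""
    if positive:
        p_result_given_disease = sensitivity
        p_result_given_healthy = 1 - specificity
    else:
        p_result_given_disease = 1 - sensitivity
        p_result_given_healthy = specificity
    numerator = p_result_given_disease * pretest
    denominator = numerator + p_result_given_healthy * (1 - pretest)
    return numerator / denominator

# Example: 10% pretest probability of pneumonia; an image read with 90% sensitivity
# and 85% specificity comes back positive. The probability rises to roughly 40%.
print(round(posttest_probability(0.10, 0.90, 0.85, positive=True), 3))
```

The point of feeding clinical context to the model is exactly this: the same image means something different depending on the pretest probability it arrives with.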

There is, I don't think, any threat yet to radiologists as a job. One of the predictions most irritating to doctors was by Geoffrey Hinton, one of the intellectual leaders of neural network architecture. He said, I think it was in 2016, and I may have this approximately wrong, that in six years we would have no need for radiologists.

And that was just clearly wrong. And the reason it was wrong is, A, they did not have these capabilities that we just talked about, about understanding about the clinical context. But it's also the fact that we just don't have enough radiologists. Meaning to do the training? To actually do the work. So if you look at American medicine, I'll let you shut me down. But if you look at residency programs, we're not getting enough radiologists out.

We have an overabundance of applicants for interventional radiology. They're making a lot of money. It's high prestige. But straight up radiology readers, not enough of them.

Primary care doctors, I go around medical schools and ask who's becoming a primary care doctor, almost nobody. So, primary care is disappearing in the United States. In fact, Mass General and Brigham announced officially they're not seeing primary care patients. People are still going to dermatology and they're still going to plastic surgery. What I did, pediatric endocrinology, half of the slots nationally are not being filled.

Pediatric developmental disorders like autism: those slots, half of them filled. PID, pediatric infectious disease, the same. There's a huge gap emerging in the available expertise. So it's not what we thought it was going to be, that we had a surplus of doctors who had to be replaced. It's just that we have a surplus in a few focused areas, which are very popular.

And then for all the work of primary care and primary prevention kind of stuff that you're interested in, we have almost no doctors available. Yeah. Let's go back to the radiologist for a second, because again, I'm fixated on this one because it seems like the most, well, the closest one to address. And again, if you're saying, look, we have a dearth of imaging radiologists who are able to work the emergency rooms, urgent care clinics and hospitals.

Wouldn't that be the first place we would want to apply our best image recognition with our super powerful GPUs, and now plug them into our transformers with our language models, so that I can get...

clinical history, past medical history, previous images, current images, and you don't have to send it to a radiologist in Australia to read it, who then has to send it back to a radiologist here to check. Like, if we're just trying to fill a gap, that gap should be fillable, shouldn't it? And that's exactly where it is being filled. And what keeps distracting me in this conversation is

that there's a whole other group of users of these AIs that we're not talking about, which is the patients. And previously, none of these tools were available to patients. With the release of GPT-3.5 and 4, and now Gemini and Claude 3, they're being used by patients all the time in ways that we had not anticipated.

Let me give you an example. So there's a child who was having trouble walking, having trouble chewing, and then started having intractable headaches. Mom brought him to multiple doctors, they did multiple imaging studies, no diagnosis, kept on being in intractable pain. She just typed into GPT-4 all the reports and asked GPT-4, what's the diagnosis? And GPT-4 said,

tethered cord syndrome. She then went with all the imaging studies to a neurosurgeon and said, what is this? He looked at it and said, tethered cord syndrome. And we have such an epidemic of misdiagnosis and undiagnosed patients. Part of my background that I'll just mention briefly: I'm the principal investigator of the coordinating center of something called the Undiagnosed Diseases Network. It's a network of 12 academic hospitals, down the West Coast from the University of Washington,

Stanford, UCLA, to Baylor, up the East Coast, Harvard Hospitals, NIH. And we see a few thousand patients every year.

And these are patients who have been undiagnosed and they're in pain. That's just a small fraction of those who are undiagnosed. And yes, we bring to bear a whole bunch of computational techniques and genomic sequencing to actually be able to help these individuals. But it's very clear that there's a much larger burden out there of misdiagnosed individuals. But the question for you, Zach, is: does it surprise you that in that example, the mother was the one that

went to GPT-4 and inputted that? I mean, she had presumably been to many physicians along the way. Were you surprised that one of the physicians along the way hadn't been the one to say, gee, I don't know, but let's see what this GPT-4 thing can do? Most clinicians I know do not have what I used to call the Google reflex. I remember when I was on the wards and we had a child with dysmorphology. They looked different.

And I said to the fellows, this is after residency, "What is the diagnosis?" And they said, "I don't know, I don't know." I said, "He has this and this and this finding. What's the diagnosis?" And I said, "How would you find out?" They had no idea. I just said, "Let's take what I just said and type it into Google." In the top three responses, there was the diagnosis.

And that reflex, which they do use in civilian life, they did not have in the clinic. And doctors are in a very unhappy position these days. They're really being driven very, very hard.

And they're being told to use certain technological tools. They're being turned into data entry clerks. They don't have the Google reflex; who has the time to look up a journal article? Even less do they have the reflex of, let's look up the patient's history and see what GPT-4 would come up with.

I was gratified to see early on doctors saying, wow, look, I just took the patient history, plunked it into GPT-4 and said, write me a letter of prior authorization. And they were actually tweeting about doing this, which on the one hand, I was very, very pleased for them because it was saving them five minutes to write that letter to the insurance company saying, please authorize my patient for this procedure.

I was not pleased for them because if you use ChatGPT, you're using a program that is run by OpenAI, as opposed to a version of GPT-4 that is run on the protected Azure cloud by Microsoft, which is HIPAA covered. For those of the audience who don't know, HIPAA is the legal framework under which we protect patient privacy. And if you violate it, you can be fined and even go to prison.

So in other words, if a physician wants to put any information into GPT-4, they had better de-identify it. That's right. So if they just plunked a patient note into ChatGPT, that's a HIPAA violation. If there's a Microsoft version of it, which is HIPAA compliant, it's not. So they were using it to improve their lives. The doctors were using it for improving the business, the administrative part of healthcare, which is incredibly important. But by and large,

Only a few doctors use it for diagnostic acumen. And then what about more involved radiology? So obviously a plain film is one of the more straightforward things to do, although it's far from straightforward as anybody knows who's stared at a chest x-ray.

But once we start to look at three-dimensional images, such as cross-sectional images, CT scans, MRIs, or even more complicated images like ultrasound and things of that nature, what is the current state of the art with respect to AI in the assistance of reading these types of images? So that's the very exciting news, which is

Remember how I said it was important to have a lot of data, one of the three ingredients of the breakthrough. So all of a sudden we have a lot of data around, for example, echocardiograms, the ultrasounds of your heart. Normally it takes a lot of training to interpret those images correctly. So there is a recent study from the EchoCLIP group, led, I think, out of UCLA, and they took a million echocardiograms

and a million textual reports and essentially trained the model, both to create those embeddings I talked about of the images and of the text. Just to make sure people understand what we're talking about, this is not, here's a picture of a cat, here's a description, cat.

When you put the image in, you're putting a video in. Now, you're putting a multi-dimensional video because you have timescale, you have Doppler effects. This is a very complicated video that is going in.

It's a very complicated video and it's three-dimensional and it's weird views from different angles. And it's dependent on the user. In other words, the tech, the radiology tech, can be good or bad. If I were the one doing it, it would be awful. The echo tech

does not have medical school debt. They don't have to go to medical school. They don't have to learn calculus. They don't have to learn physical chemistry, all the hoops that you have to go through in medical school. You don't have the attitudinal debt of doctors. So in two years, they get all those skills and they actually do a pretty good job. They do a fantastic job. But my point is their skill is very much an important determinant of the quality of the image. Yes. But what we still require these days is a cardiologist to then read it and interpret it.
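
As a rough sketch of the joint image-and-text embedding idea behind the echocardiogram study described above: encode studies and reports into the same vector space and train so that matching pairs score higher than mismatched ones. The random vectors below stand in for real encoders, so this shows only the shape of the idea, not the published model.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
video_embeddings = rng.normal(size=(3, dim))                              # stand-ins for encoded echo studies
report_embeddings = video_embeddings + 0.1 * rng.normal(size=(3, dim))    # stand-ins for their matching reports

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

V, R = normalize(video_embeddings), normalize(report_embeddings)
similarity = V @ R.T    # cosine similarity between every study and every report

# In training, a contrastive loss pushes the diagonal (true pairs) up and the
# off-diagonal (mismatched pairs) down; at inference, the best-matching report
# for a new study is simply the row-wise argmax.
print(np.argmax(similarity, axis=1))   # -> [0 1 2]: each study matches its own report
```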

Right. That's sort of where I'm going, by the way: we're going to get rid of the cardiologist before we get rid of the technician. We're on the same page. My point in this conversation is that nurse practitioners and physician assistants, with these tools, can replace a lot of expert clinicians. And there is a big open question: what is the real job for doctors 10 years from now? And I don't think we know the answer to that—just fast-forward from the conversation we're having right now.

Excellent. Well, let's think about it. We still haven't come to proceduralists. So we still have to talk about the interventional radiologist, the interventional cardiologist, and the surgeon. We can talk about the role of the surgeon and the da Vinci robot in a moment. But I think what we're doing is we're kind of identifying the pecking order of physicians. And let's not even think about it through the lens of replacement. Let's start with the lens of augmentation.

which is the radiologist can be the most easily augmented, the pathologist, the dermatologist, the cardiologist who's looking at echoes and EKGs and stress tests, people who are interpreting visual data and using visual data

will be the most easily augmented. The second tranche of that will be people who are interpreting language data plus visual data. So now we're talking about your internist, your pediatrician, where you have to interpret symptoms and combine them with laboratory values and combine it with a story and an image.

Is that a fair assessment in terms of tiers? Absolutely a fair assessment. My only quibble—it's not a quibble, I'm going to keep on going back to this—is in a place where we don't have primary care. The Association of American Medical Colleges submits that by 2035, that's only 11 years from now, we'll be missing on the order of 50,000 primary care doctors. As I told you, I can't get primary care at the Brigham or at MGH today. And in the absence of that, you have to ask yourself, how can we replace

these absent primary care practitioners with nurse practitioners, with physician assistants augmented by these AIs. Because there's literally no doctor to replace.

So tell me, Zach, where are we technologically on that augmentation? If NVIDIA never came out with another chip, if they literally said, you know what, we are only interested in building golf simulators and we're done with the progress of this and this is as good as it's going to get. Do we have good enough GPUs, good enough processors?

Good enough multi-layer neural networks, such that all we need is more data and training sets to do the augmentation we've described in the last five minutes? The short answer is yes. Let me make it very concrete. Most concierge services in Boston cost somewhere between $5,000 and $20,000 a year. But you can get a very low-cost concierge-like service called One Medical—which, I'm just amazed, has not done the following. One Medical was acquired by Amazon,

and they have a lot of nurse practitioners in there. And you can make an appointment, you can text with them. I believe that those individuals could be helped in ordering the right imaging studies, the right EKGs, the right medications, in assessing your congestive heart failure, and in deciding, in only a very few cases, that you need to see a specialist cardiologist

or a specialist endocrinologist today. It would just be a matter of making the current models better and evaluating them, because not all models are equal. A big question for us—this is the regulatory question—is which ones do a better job. They're not all equal. I don't think we need technological breakthroughs to just make the current set of

paraprofessionals work at the level of entry-level doctors. Let me quickly say the old very bad joke. What do you call the medical student who graduates at the bottom of his class? Doctor. And so if you could just merely get the bottom 50% of doctors to be as good as the top 50%, that would be transformative for healthcare.

Now, there are other superhuman capabilities that we can go towards, and we can talk about them if we want, that do require the next generation of algorithms, NVIDIA architectures, and data sets. But if everything stopped now, we could already transform medicine. It's just a matter of the sweat equity to create the models,

figure out how to include them in the workflow, how to pay for them, how to create a reimbursement system and a business model that works for our society. But there's no technological barrier. In my mind, everything we've talked about is take the best case example of medicine today

and augment it with AI such that you can raise everyone's level of care to that of the best, no gaps, and it's scaled out. Okay, now let's talk about another problem, which is where do you see the potential for AI in solving problems that we can't even solve on the best day at the best hospitals with the best doctors? So let me give you an example.

We can't really diagnose Alzheimer's disease until it appears to be at a point that for all intents and purposes is irreversible. Maybe on a good day, we can halt progression really, really early in a patient with just a whiff of MCI, mild cognitive impairment.

maybe with an early amyloid detection and an anti-amyloid drug. But is it science fiction to imagine that there will be a day when an AI could listen to a person's voice, watch the movements of their eyes, study the movements of their gait, and predict 20 years in advance when a person is staring down the barrel of a neurodegenerative disease and act at a time when maybe we could actually reverse it? How science fiction-y is that?

I don't believe it's science fiction at all. Do you know that looking at retinas today—images of the retina, with a straightforward convolutional neural network, not even one that involves transformers—can already tell you not just whether you have retinal disease, but whether you have hypertension, whether you're male or female, how old you are, and some estimate of your longevity? And that's just looking at the back of your eye and seeing enough data.
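(A minimal sketch of what such a retinal model looks like structurally: one shared convolutional trunk with separate prediction heads for sex, age, and hypertension. The layer sizes, labels, and losses here are illustrative only, not the published architectures.)

```python
# Toy multi-task CNN over fundus photos, in the spirit of the retinal studies described:
# one shared convolutional trunk, separate heads predicting sex, age, and hypertension.
import torch
import torch.nn as nn

class RetinaMultiTaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.sex_head = nn.Linear(32, 1)   # logit: male vs female
        self.age_head = nn.Linear(32, 1)   # regression: age in years
        self.htn_head = nn.Linear(32, 1)   # logit: hypertension yes/no

    def forward(self, x):
        h = self.trunk(x)
        return self.sex_head(h), self.age_head(h), self.htn_head(h)

model = RetinaMultiTaskNet()
fundus = torch.randn(2, 3, 224, 224)       # two fake fundus photographs
sex_logit, age_pred, htn_logit = model(fundus)

# Training combines a classification loss per binary head with a regression loss for age:
loss = (nn.functional.binary_cross_entropy_with_logits(sex_logit, torch.ones(2, 1))
        + nn.functional.mse_loss(age_pred, torch.full((2, 1), 55.0))
        + nn.functional.binary_cross_entropy_with_logits(htn_logit, torch.zeros(2, 1)))
```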

I was a small player in a study that appeared in Nature in 2005 with Bruce Yankner. We were looking at frontal lobes of individuals who had died for a variety of reasons, often accidents, of various ages. And we saw, bad news for people like me, that after age 40, your transcriptome, the genes that are switched on, fell off a cliff. 30% of your transcriptome went down. And so there seemed to be a big difference

in the expression of genes around age 40, but there was one 90-year-old who looked like the young guy, so maybe there's hope for some of us.

But then I thought about it afterwards, and there were other things that actually have much smoother functions, which don't have quite that falloff, like our skin. So our skin ages. In fact, all our organs age, and they age at different rates. You're saying that in the transcriptome of the skin, you did not see this cliff-like effect at a given age the way you saw it in the frontal cortex. So different organs age at different rates, but having the right data sets,

and the ability to see nuances that we don't notice.

makes it very clear to me that the early detection part, no problem. It can be very straightforward. The treatment part, we can talk about as well. But again, we had early on, from the very famous Framingham Heart Study, a predictor of when you were going to have heart disease based on just a handful of variables. Now we have these artificial intelligence models that, based on hundreds of variables, can predict various other diseases.
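(To make the contrast concrete: a Framingham-style score is essentially a logistic function of a handful of variables, while the newer models fit flexible learners to hundreds of features. The coefficients below are invented for illustration; they are not the published Framingham equation.)

```python
# Sketch of the contrast: a handful-of-variables logistic score versus a many-variable
# learned model. Coefficients and features here are invented for illustration only.
import math

def toy_10yr_risk(age, systolic_bp, total_chol, hdl, smoker):
    # Framingham-style form: risk = 1 / (1 + exp(-(b0 + sum(b_i * x_i))))
    z = (-7.5 + 0.06 * age + 0.015 * systolic_bp
         + 0.005 * total_chol - 0.02 * hdl + 0.6 * smoker)
    return 1.0 / (1.0 + math.exp(-z))

print(f"Toy 10-year risk: {toy_10yr_risk(58, 142, 210, 45, smoker=1):.1%}")

# The "hundreds of variables" version is typically a flexible model fit to a wide feature
# matrix (labs, vitals, imaging-derived features, medications, ...), for example:
# from sklearn.ensemble import GradientBoostingClassifier
# model = GradientBoostingClassifier().fit(X_wide, outcomes)  # X_wide: patients x ~hundreds
```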

And it will do Alzheimer's, I believe, very soon. I think you'll be able to see a combination of gait, speech patterns, pictures of your body, pictures of your skin.

And eye movements, like you said, will be a very accurate predictor. We just published, by the way, recently, speaking about eyes, a very nice study where in a car, just by looking at the driver, it can figure out what your blood sugar is. Because diabetics previously have not been able to get driver licenses sometimes.

because of the worry about them passing out from hypoglycemia. So there was a very nice study that showed that just by looking—with cameras pointed at the eyes—you could actually figure out exactly what the blood sugar is. So that kind of detection is, I think, fairly straightforward.

It's a different question about what you can do about it. Before we go to the what-you-can-do-about-it, I just want to go a little deeper on the predictive side. You brought up the Framingham model, or the Multi-Ethnic Study of Atherosclerosis, the MESA model. These are the two most popular models by far for major adverse cardiac event risk prediction. But you needed something else to build those models, which was enough time to see the outcome. In the Framingham cohort, which was the late 70s and early 80s, you then had the Framingham Offspring cohort.

And then you had to be able to follow these people with their LDL-C and HDL-C and triglycerides. And later, eventually, they incorporated calcium scores. So if today we said, look, we want to be able to predict 30-year mortality,

which is something no model can do today. This is a big pet peeve of mine is we generally talk about cardiovascular disease through the lens of 10 year risk, which I think is ridiculous. We should talk about lifetime risk, but I would settle for 30 year risk, frankly. And if we had a 30 year risk model,

where we could take many more inputs. I would absolutely love to be looking at the retina. I believe, by the way, Zach, that retinal examination should be a part of medicine today for everybody. I would take a retinal exam over a hemoglobin A1c all day, every day. I'd never look at another A1c again if I could see the retina of every one of my patients. But my point is, even if we could make all of this

effective today—we could define the data set, and let's overdo it, we can prune things later, but we want to see these 50 things in everybody to predict every disease—is there any way to get around the fact that we're going to need 30 years to see this come to fruition, in terms of watching how the story plays out? Or are we basically going to say, no, we're going to do this over five years? It won't be that useful, because a five-year predictor basically means you're already catching people in the throes of the disease.

I'll say three words, electronic health records. So that turns out not to be the answer in the United States. Why? Because in the United States, we move around. We don't stay in any given healthcare system that long. So very rarely will I have all the measurements made on you, Peter, all your glycohemoglobin, all your blood pressures, all your clinic visits, all the imaging studies that you've had.

However, that's not the case in Israel, for example. In Israel, they have these HMOs, health maintenance organizations, and one of them, Clalit, I have a good relationship with, because they published all the big COVID studies looking at the efficacy of the vaccine. And why could they do that? Because they had the whole population available.

And they have about 20, 25 years' worth of data on all their patients in detail, and family relationships. So if you have that kind of data—and Kaiser Permanente also has that kind of data—I think you can actually come close. But you're not going to be able to get retina, gait, voice, because we still have to get those prospectively. I'm going to claim that there are proxies, rough proxies:

for gait, falls; and for hearing problems, visits to the audiologist. Now, these are noisier measurements.

And so those of us who are data junkies like I am always keep mumbling to ourselves, perfect is the enemy of good. Waiting 30 years to have the perfect data set is not the right answer to help patients now. And there are things that we could know now, that are knowable today, that we just don't know because we haven't bothered to look. I'll give you a quick example. I did a study of autism

using electronic health records, maybe 15 years ago. And I saw there were a lot of GI problems. And I talked to a pediatric expert, and—it was a little bit dismissive—they said, brain bad, tummy hurt.

I've seen a lot of inflammatory bowel disease. It just doesn't make sense to me that this is somehow an effect of brain function. To make a long story short, we did a massive study, looking at tens of thousands of individuals. And sure enough, we found subgroups of patients who had immunological problems associated with their autism—they had type 1 diabetes, inflammatory bowel disease, lots of infections. Those were knowable, but they were not known. And I had, frankly, parents coming to me more thankful than for anything else I had ever done for them

clinically, because I was telling these parents they weren't hallucinating that these kids had these problems. They just weren't being recognized by medicine because no one had the big wide angle to see these trends. So without knowing the field of Alzheimer's the way I do other fields, I bet you there are trends in Alzheimer's that you can pick up today by looking at enough patients that you'll find some that have more frontotemporal components,

Some that have more affective components, some that have more of the infectious and immunological components. Those are knowable today. Zach, you've already alluded to the fact that we're dealing with a customer—if the physician is the customer—who is not necessarily the most tech-forward customer. And truthfully, like many customers of AI, they run the risk of being marginalized by the technology if the technology gets good enough.

And yet you need the customer to access the patient to make the data system better, to make the training set better. So how do you see the interplay over the next decade of that dynamic?

That's the right question. Because in order for these AI models to work, you need a lot of data, a lot of patients. Where is that data going to come from? So there are some healthcare systems, like the Mayo Clinic, who think they can get enough data in that fashion. There are some data companies that are trying to get relationships with healthcare systems where they can get de-identified data. I'm betting on something else.

There is a trend where consumers are going to have increased access to their own data. The 21st Century Cures Act was passed by Congress, and it said that patients should be given access to their own data programmatically. Now, they're not expecting your grandmother to write a program to access the data programmatically, but by having a right to it,

It enables others to do so. So for example, Apple has something called Apple Health. It has this big heart icon on it. If you're at one of the 800 hospitals that they've already hooked up with—Mass General or Brigham and Women's—and you're a patient there, if you authenticate yourself to it, if you give it your username and password, it will download into your iPhone your labs, your meds, your diagnoses, your procedures, as well as all the wearable stuff, your blood pressures that you get as an outpatient,

and various other forms of data. That's already happening now. There are not a lot of companies taking advantage of that. But right now, that data is available on tens of millions of Americans.
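(What "programmatic access" means in practice: hospital patient portals expose a FHIR API, and an app the patient authorizes can pull structured records such as blood pressure observations. The endpoint, token, and patient id below are placeholders; real access goes through a SMART on FHIR login-and-consent flow.)

```python
# Minimal sketch of patient-authorized FHIR access, the mechanism behind record downloads
# like Apple Health's. Base URL, token, and patient id are hypothetical placeholders.
import requests

FHIR_BASE = "https://fhir.example-hospital.org/api/FHIR/R4"   # placeholder endpoint
TOKEN = "patient-authorized-access-token"
PATIENT_ID = "example-patient-id"

resp = requests.get(
    f"{FHIR_BASE}/Observation",
    params={"patient": PATIENT_ID, "code": "http://loinc.org|85354-9"},  # BP panel LOINC
    headers={"Authorization": f"Bearer {TOKEN}", "Accept": "application/fhir+json"},
    timeout=30,
)
bundle = resp.json()

for entry in bundle.get("entry", []):
    obs = entry["resource"]
    when = obs.get("effectiveDateTime")
    for comp in obs.get("component", []):          # systolic and diastolic components
        name = comp["code"]["coding"][0]["display"]
        value = comp["valueQuantity"]["value"]
        unit = comp["valueQuantity"]["unit"]
        print(when, name, value, unit)
```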

Isn't it interesting, Zach, how unfriendly that data is in its current form? I'll give you just a silly example from our practice. If we send a patient to LabCorp or Boston Heart or pick your favorite lab, and we want to generate our own internal reports based on those—where we want to do some analysis on that, lay out trend sheets, etc.—

we have to use our own internal software. It's almost impossible to scrape those data out of the labs because they're sending you PDF reports. Their APIs are garbage. Nothing about this is user-friendly. So even if you have the MyHeart thing or whatever, the MyHealth thing, come onto your phone, it's not navigable. It's not searchable. It doesn't show you trends over time.

Is there a more user-hostile industry from a data perspective than the health industry right now? No, no. And there's a good reason why: because they're keeping you captive. But Peter, the good news is you're speaking to a real nerd. Let me tell you two ways we could solve your problem. One, if it's in the Apple Health thing, someone can actually write a program, an app on the iPhone, which will take those data as

numbers and not have to scrape it. And it could run it through your own trending programs. You could actually use it directly. Also, Gemini and GPT-4, you can actually give it those PDFs.

And actually, with the right prompting, it'll actually take those data and turn them into tabular spreadsheets. We can't do that because of HIPAA, correct? If the patient gets it from the patient portal, absolutely, you can do that. The patient can do that, but I can't use a patient's data that way. If the patient gives it to you, absolutely. Really? Oh, yes.

But it's not de-identified. It doesn't matter. If a patient says, Peter, you can take my 50 LabCorp reports for the last 10 years and you can run them through ChatGPT to scrape it out and give me an Excel spreadsheet that will perfectly tabularize everything that we can then run into our model to build trends and look for things. I didn't think that was doable actually.

So it's not doable through ChatGPT because your lawyers would say, Peter, you're going to get a million dollars in fines from HIPAA. I'm not a shill for Microsoft. I don't own any stock. But if you do GPT on the Azure cloud that's HIPAA protected, you absolutely can use it with patient consent. 100% you could do it. GPT is being used with patient data out of Stanford right now.
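(A sketch of the workflow being described: patient-provided PDF lab reports turned into a trendable table by a GPT-4 deployment on an Azure endpoint. The endpoint and deployment names are placeholders, this assumes the patient has consented and the deployment sits under the appropriate agreements, and any extracted values should be checked against the original report.)

```python
# Sketch: extract the text of a patient-provided PDF lab report, ask a GPT-4 deployment on
# an Azure endpoint to emit rows, and load them into a DataFrame for trending.
# Deployment, endpoint, and filename are placeholders; verify every extracted value.
import io

import pandas as pd
from openai import AzureOpenAI
from pypdf import PdfReader

client = AzureOpenAI(azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
                     api_key="YOUR_KEY", api_version="2024-02-01")

report_text = "\n".join(page.extract_text() or "" for page in PdfReader("labcorp_2021.pdf").pages)

prompt = (
    "Extract every analyte from this lab report as CSV with columns "
    "date,analyte,value,unit,reference_range. Output CSV only.\n\n" + report_text
)
reply = client.chat.completions.create(
    model="gpt-4-deployment",                     # your named Azure deployment
    messages=[{"role": "user", "content": prompt}],
)
csv_text = reply.choices[0].message.content

df = pd.read_csv(io.StringIO(csv_text))
print(df.head())                                   # trendable, tabular lab values
```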

Epic's using GPT-4, and it's absolutely legitimately usable by you. People don't understand that. We've now just totally bypassed OCR—for people not into the acronyms, optical character recognition—which is what we were trying to use 15 years ago to scrape this data. We do not need to waste our time on that. Peter, let me tell you, there's the New England Journal of Medicine. I'm on the editorial board there, and we just published, three months ago, a picture of the

back of this 72-year-old, and it looks like a bunch of red marks. To me, it looks like someone just scratched themselves. And it says, blah, blah, blah, they had trouble sleeping. This is the image of the week. Image of the week.

And I took that whole thing, and I took out one important fact, and then gave it to GPT-4, the image and the text. And it came up with the two things it thought it would be: either bleomycin toxicity, which I don't know what that looks like, or shiitake mushroom toxicity. What I'd removed

is the fact that the guy had eaten mushrooms the day before. So this thing figured it out just by looking at the picture. GPT-4 spit this out? Yes. I don't think most doctors know this, Zach. I don't think most doctors understand. First of all, I can't tell you how many times I get a rash and I try to send a picture to my doctor, or my kid gets a rash and I'm trying to send a picture to their pediatrician, and they don't know what it is. And it's like

we're rubbing two sticks together and you're telling me about the Zippo lighter. Yes. And that's what I'm saying: patients without primary care doctors—I know I keep repeating myself—waiting three months because of a rash or their symptoms, they understand that they have a Zippo lighter. They say, I'll use this Zippo lighter. It's better than no doctor, for sure. And maybe better. That's now.

Let me quickly illustrate it. I don't know squat about the FDA. And so I pulled down from the FDA the adverse event reporting files. It's a big zip file, a compressed file. And I went and said to GPT-4, please analyze this data. And it says, unzipping it, based on this table, I think this is about the adverse events and these are the locations. What do you want to know? I say, tell me the adverse events for disease-modifying drugs for arthritis. It says, oh,

To do that, I'll have to join these two tables and it just does it. It creates its own Python code. It does it and it gives me a report. Is this a part of medical education now? You're at Harvard, right? You're at one of the three best medical schools in the United States, arguably in the world. Is this an integral part of the education of medical students today? Do they spend as much time on this as they do histology where I spent

a thousand hours looking at slides under a microscope that I've never once tried to understand. Again, I don't want to say there wasn't a value in doing that. There was, and I'm grateful for having done it. But I want to understand the relative balance of education. It's like the stethoscope—

arguably we should be using things other than the stethoscope. Let me make sure I don't get fired, or at least beaten severely, by telling you that George Daley, our dean of the medical school, has said explicitly he wants to change all of medical education so these learnings are infused throughout the four years. But it's going to take some doing.

Let's now move on to the next piece of medicine. So we've gone from purely the recognition image-based to how do I combine image with voice, story, text. You've made a very compelling case that we don't need any more technological breakthroughs to augment those. It's purely a data set problem at this point and a willingness. Let's now move to the procedural. Is there in our lifetimes,

say, Zach, the probability that if you need to have a radical prostatectomy—which currently, by the way, is never done open. This is a procedure that the da Vinci, a robot, has revolutionized. There's no blood loss anymore. When I was a resident, this was one of the bloodiest operations we did.

It was the only operation, by the way, for which we had the patients donate their own blood two months ahead of time. That's how guaranteed it was that they were going to need blood transfusions. So we just said, to hell with it—come in a couple of months before and give your own blood, because you're going to need at least two units following this procedure. Today, it's insane how successful this operation is, in large part because of the robot.

But the surgeon needs to move the robot. Are we getting to the point where that could change? So let me tell you where we are today. Today, there have been studies where they've collected a bunch of YouTube videos of surgery and trained up one of these general models. So it says, oh,

they're putting the scalpel on to cut this ligament, and by the way, that's too close to the blood vessel, they should move it a little bit to the side. That's already happening. Based on what we're seeing with robotics in the general world, I think the da Vinci controlled by a robot in 10 years is a very safe bet. It's a very safe bet.

In some ways, 10 years is nothing. It's nothing, but it's a very safe bet. The fact is, right now—by the way, just to go back to our previous discussion—I can do a better job giving you a genetic diagnosis based on your findings than any primary care provider interpreting a genomic test. So are you using that example, Zach, because...

It's a huge data problem. In other words, that's obvious that you would be able to do that because the amount of data, I mean, there's 3 billion base pairs to be analyzed. So of course you're going to do a better job. But linking it to symptoms. Yeah, yeah. But you're saying surgery is a data problem because if you turn it into a pixel problem. Pixel and movement. And degrees of freedom. Yeah. That's it.

Remember, there's a lot of degrees of freedom in moving a car around traffic. And by the way, lives are on the line there too. Now, medicine is not the only job where lives are at stake. Driving a ton of metal at 60 miles per hour in traffic is also putting lives at stake. And last time I looked, there's several manufacturers

who are doing that, or some appreciable fraction of that effort—they're controlling multiple degrees of freedom with a robot. Yeah. I very recently spoke with somebody—I won't name the company, I suppose, but it's one of the companies that's deep in the space of autonomous vehicles—

and they very boldly stated—and they made a pretty compelling case for it—that if every vehicle on the road was at their level of technology in autonomous driving, you wouldn't have fatalities anymore. But the key was that every vehicle had to be at that level. I don't know if you know enough about that field, but does that sense-check to you? Well, first of all, I'm a terrible driver.

I am a better driver. This is not an ad, but the fact is I'm a better driver because I'm now on a Tesla, because I'm a terrible driver. And there's actually a very good message for medicine here, which I will paraphrase. I know enough to know that I need to jiggle the steering wheel when I'm driving with a Tesla, because otherwise it will assume that I'm just zoning out. But what I didn't realize is this. I'm very bad. I'll pick up my phone and I'll look at it. I didn't realize it was looking at me. It says, basically, Zach, put down the phone. So, okay, I put it down.

three minutes later, I pick it up again and it says, "Okay, that's it. I'm switching off autopilot." So it switches off autopilot and now I have to pay full attention. Then I get home and it says, "All right, that was bad. You do that four more times. I'm switching off autopilot until the next software update." And the reason I mentioned that is it takes a certain amount of confidence to do that to your customer base saying, "I'm switching off the thing that they bought me for."

In medicine, how likely is it that we're going to fall asleep at the wheel if we have an AI thinking for us? It's a real issue. We know for a fact, for example, back in the 90s, that for doses of a drug like ondansetron—where people would talk endlessly about how frequently you should give it and at what dose—the moment you put it in the order entry system, 95% of doctors would just use the default there.

And so how in medicine are we going to keep doctors awake at the wheel? And will we dare to do the kind of challenges that I just described the car doing? So just to get back to it,

I do believe, because of what I've seen with autonomy and robots, that as fancy as we think that is, controlling a da Vinci robot will probably have fewer bad outcomes. Every once in a while, someone nicks something and you have to go into full surgery, or they go home and they die on the way home because they exsanguinate. I think it's just going to be safer.

It's just unbelievable for me to wrap my head around that. But truthfully, it's impossible for me to wrap my head around what's already happened. So I guess I'll try to retain the humility that says I reserve the right to be stupid.

Again, there are certain things that seem much easier than others. Like I have an easier time believing we're going to be able to replace interventional cardiologists where the number of degrees of freedom, the complexity and the relationship between what the image shows, what the cath shows and what the input is, the stent, that gap is much narrower. Yeah, I can see a bridge to that.

But when you talk about doing a Whipple procedure, when you talk about what it means to take a tumor off the superior mesenteric vessels cell by cell, I'm thinking, oh my God. Since we're on record, I'm going to say: I'm talking about your routine prostate removal. Yeah. The first 10 years—I would take that bet today.

Let's go one layer further. Let's talk about mental health. This is a field of medicine today that I would also argue is grossly underserved. Everything you've said to date resonates. I completely agree from my own experience that the resources in pediatrics and primary care, I mean, these things are unfortunate at the moment.

At Harvard, 60% of undergraduates are getting some sort of mental health support, and it's completely outstripping all the resources available to the university health services. And so we have to outsource some of our mental health. And this is a very richly endowed university.

In general, we don't have the resources. So here we live in a world where I think the evidence is very clear that when a person is depressed, when a person is anxious, when a person has any sort of mental or emotional illness, pharmacotherapy plays a role, but it can't displace psychotherapy. You have to be able to put these two things together. And the data would suggest that the knowledge of your psychotherapist

is important, but it's less important than the rapport you can generate with that individual. Now, based on that, do you believe that the most sacred, protected, if you want to use that term, profession within all of medicine will then be psychiatry? I'd like to think that. If I had a psychiatric GPT speaking to me, I wouldn't think that it understood me.

On the other hand, back in the 1960s or 70s, there was a program called ELIZA, and it was a simple pattern-matching program. It would just emulate what's called a Rogerian therapist: "I really hate my mother." "Why do you say you hate your mother?" "Oh, it's because I don't like the way she fed me." "What is it about the way she fed you?" Just very, very simple pattern matching.

And this ELIZA program, which was developed by Joe Weizenbaum at MIT—his own secretary would lock herself in her office to have sessions with this thing, because it's non-judgmental. This was in the 80s? 70s or 60s. Wow. Yeah. And it turns out that there's a large group of patients who actually would rather have a non-human, non-judgmental person who remembers what they've said last time and shows empathy verbally.
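(For a sense of how little machinery was behind ELIZA-style "therapy," here is a toy Rogerian reflector: a few ordered patterns that mirror the user's words back as questions. It is far simpler than Weizenbaum's actual program, which also swapped pronouns and ranked keywords.)

```python
# A toy Rogerian reflector in the spirit of ELIZA (much simpler than Weizenbaum's original):
# ordered regex patterns, each reflecting the user's words back as a question.
import re

RULES = [
    (re.compile(r"\bhate (.+)", re.I), "Why do you say you hate {0}?"),
    (re.compile(r"\bI don't like (.+)", re.I), "What is it about {0} that you don't like?"),
    (re.compile(r"\bmy (mother|father|wife|husband)\b", re.I), "Tell me more about your {0}."),
    (re.compile(r"\bI feel (.+)", re.I), "How long have you felt {0}?"),
]

def reply(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."                      # default, non-judgmental prompt

print(reply("I really hate my mother"))          # -> Why do you say you hate my mother?
print(reply("I don't like the way she fed me"))  # -> What is it about the way she fed me ...?
```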

Again, I wrote this book with Peter Lee, and Peter Lee made a big deal in the book about how GPT-4 was showing empathy. In the book, I argued with him that this is not that big a deal. And I said, I remember from medical school being told that some of the most popular doctors are popular because they're very deep empaths, not necessarily the best doctors. And so I said, for certain things,

That's just me. I could imagine a lot of, for example, cognitive behavioral therapy being done this way and being found acceptable by a subset of human beings. It wouldn't be for me. I'd say I'm just speaking to some stupid program. But if it's giving you insight into yourself, and it's based on the wisdom culled from millions of patients, who's to say that it's worse? And it's certainly not judgmental. And maybe it'll bill less. So Zach, you were born

probably just after the first AI boom; you came of age intellectually and academically in the second.

And now in the mature part of your career, when you're at the height of your esteem, you're riding the wave of this third version, which I don't think anybody would argue is going anywhere. As you look out over the next decade, and we'll start with medicine, what are you most excited about and what are you most afraid of with respect to AI?

Specifically with regard to medicine, what I'm most concerned about is how it could be used by the medical establishment to keep things the way they are, to pour concrete over current practices. What I'm most excited about is alternative

business models, young doctors who create businesses outside the mold of hospitals. Hospitals are these very, very complex entities. They make billions of dollars, some of the bigger ones, but with very small margins, 1% to 2%. When you have huge revenue, but very small margins, you're going to be very risk averse.

and you're not going to want to change. And so what I'm excited about is the opportunity for new businesses and new ways of delivering to patients insights that are data-driven,

What I'm worried about is hospitals doing a bunch of information blocking and regulations that will make it harder for these new businesses to get created. Understandably, they don't want to be disrupted. That's the danger. In that latter case or that case that you're afraid of, Zach, can patients themselves work around the hospitals with these new companies, these disruptive companies and say, look,

We have the legal framework that says, I own my data as a patient. I own my data. Believe me, we know this in our practice. Just because our patients own the data doesn't make it easy to get. There is no aspect of my practice that is more miserable

and more inefficient than data acquisition from hospitals. It's actually comical. Absolutely comical. And I do pay hundreds of dollars to get my data from my patients with rare and unknown diseases in this network extracted from the hospitals because it's worth it to pay someone to do that extraction. Yeah. But now I'm telling you it is doable. So you're saying because of that, are you confident that the legal framework for patients to have their data coupled with

AI and companies, do you think that that will be a sufficient hedge against your biggest fear? I think that unlike my 10-year prostatectomy by robot prediction, I'm not as certain, but I would give better than 50% odds that in the next 10 years, there'll be a company, at least one company that figures out how to use that patient's right to access through dirty APIs and

using AI to clean it up, and providing decision support with human doctors or health professionals to create alternative businesses. I am convinced, because the demand is there. And I think that you'll see companies that are even willing to put themselves at risk. What I mean by that is they're willing to take on the medical risk: if they do better than a certain level of performance, they get paid more. Or if they do worse— They don't get paid.

Yeah. I believe there are companies that could be in that space. But I say that cautiously, because I don't want to underestimate the medical establishment's ability to squish threats. So we'll see.

Okay. Now let's just pivot to AI outside of medicine. Same question. What are you most afraid of over the next decade? So maybe we're not talking about self-awareness and Skynet, but next decade, what are you most afraid of and what are you most excited about?

What I'm most afraid of is a lot of the ills of social networks being magnified by use of these AIs to further accelerate cognitive chaos and vitriol that fills our social experiences on the net. It could be used to accelerate them. So that's my biggest fear.

I saw an article two weeks ago from an individual—I can't remember if they were currently in or formerly part of the FBI—and they stated that they believed somewhere between 75 and 90% of quote-unquote individuals on social media were not, in fact, individuals. I don't know if you spend enough time on social media to have a point of view on that. Unfortunately, I have to admit to the fact that my daughter,

who's now 20 years old—four years ago she bought me a mug that says on it, "Twitter addict." I spend enough time. I would not be surprised if some large fraction are bots, and it could get worse. And it's going to be harder to actually distinguish bots from human beings. Harder and harder and harder. That's the real problem. We are fundamentally social animals. And if we cannot understand our social context

in most of our interactions, it's going to make us crazy. Or I should say crazier. And my most positive aspect is I think that these tools can be used to expand the creative expression of all people. If you're a poor driver like me, I'm going to be a better driver. If you're a lousy musician but have a great ear,

you're going to be able to express yourself musically in ways that you could not before. I think you're going to see filmmakers who were never meant to be filmmakers before express themselves. I think human expression is going to be expanded, because,

just like the printing press allowed all sorts of... In fact, it's a good analogy, because the printing press also created a bunch of wars—because it allowed people to make clear their opposition to the church and so on, it enabled a number of bad things to happen—but it also allowed the expression of all literature in ways that would not have been possible without the printing press. I'm looking forward to human expression and creativity. I can't imagine you haven't played with some of the picture generation or music generation capabilities of AI—or if you haven't, I strongly recommend it.

You're going to be amazed. I have not. I am ashamed, maybe, to admit my interactions with AI are limited to really ChatGPT-4 and basically problem solving. Solve this problem for me. And by the way, I think I'm doing it at a very JV level. I could really up my game there. Just before we started this podcast, I thought of a problem I've been asking my assistant to solve, because, A, I don't have the time to solve it, and I'm not even sure how I would solve it. It would take me a long time. I've been asking her to solve it.

And it's actually pretty hard. And then I realized, oh my God, why am I not asking ChatGPT-4 to do it? So I just started typing in the question. It's a bit of an elaborate question. As soon as we're done with this podcast, I'll probably go right back to it. But I haven't done anything creative with it. What I will say is, what does this mean for human greatness?

So right now, if you look at a book that's been written and someone who's won a Pulitzer Prize, you sort of recognize like, I don't know if you read Sid Mukherjee, right? He's one of my favorite writers when it comes to writing about science and medicine. When I read something that Sid has written, I think to myself, wow.

There's a reason that he is so special. He and he almost alone can do something we can't do. I've written a book. It doesn't matter. I could write a hundred books. I'll never write like Sid and that's okay. I'm no worse a person. I'm no worse a person than Sid, but he has a special gift that I can appreciate just as we could all appreciate watching an exceptional athlete or an exceptional artist or musician.

Does it mean anything if that line becomes blurred?

That's the right question. And yes, Sid writes like poetry. Here's an answer, which I don't like, that I've heard many times. People say, oh, you know that Deep Blue beat Kasparov in chess, but chess is more popular than it ever was, even though we know that the best chess players in the world are computers. So that's one answer. I don't like that answer at all. Because if we created SidGPT and it wrote

Alzheimer's, the Second Greatest Malady, and wrote it in full Sid style—but it was not Sid—with the just-as-empathic family references. Right. The weaving of history with story with science. Yeah. If it did that, and it was just a computer, how would you feel about it, Peter?

I mean, Zach, you are asking the jugular question. I would enjoy it, I think, just as much, but I don't know who I would praise. Maybe I have in me a weakness slash tendency to want to idolize. You know, I'm not a religious person, so my idols aren't religious, but I do tend to love to see greatness. I love to look at someone who wrote something who's amazing and say,

That amazes me. I love to be able to look at the best driver in the history of Formula One and study everything about what they did to make them so great. So I'm not sure what it means in terms of that. I don't know how it would change that. I grew up in Switzerland, in Geneva. And even though I have this American accent, both my parents were from Poland. And so the reason I have an American accent is I went to international school with a lot of Americans. All I read was whatever my dad would get me from England in science fiction. So I'm

a big science fiction fan. So let me go science fiction on you to answer this question. It's not going to be in 10 years, but it could be in 50 years. You'll have idols and the idols will be, yes, Gregorovitch wrote a great novel, but you know, AI 521?

Their understanding of the human condition is wonderful. I cry when I read their novels. They'll be a part of the ecosystem. They'll be entities among us. Whether they are self-aware or not will become a philosophical question. Let's not go down that narrow path, that disgusting rabbit hole where I wonder, does Peter actually have consciousness or not? Does he have the same processes as I do? We won't know that about these, or maybe we will, but will it matter if they're just among us?

And they'll have brands. They'll have companies around them. They'll be superstars. And there'll be Dr. Fubar from Kansas, trained on Ayurvedic medicine—the key person for our alternative medicine, not a human. But we love what they do. Okay, last question.

How long until, from at least an intellectual perspective, we are immortal? So if I died today, my children will not have access to my thoughts and musings any longer. Will there be a point at which during my lifetime an AI can be trained to be identical to me and

at least from a goalpost perspective, to the point where after my death, my children could say, Dad, what should I do about this situation? And it can answer them in a way that I would have.

It's a great question because that was an early business plan that was generated shortly after GPT-4 came out. In fact, I was talking very briefly to Mark Cuban. Because he saw GPT-4, I think he got trademarks or copyrights on his voice, all his work.

and likeness, so that someone could not create a Mark who responded in all the ways he does. And I'll tell you, it sounds crazy, but there's a company called Rewind.ai, and I have it running right now. And everything that appears on my screen,

it's recording. Every sound that it hears, it's recording. And if characters appear on the screen, it'll OCR them. If a voice appears, it captures that too. And then if I have a question, I say, when did I speak with Peter Attia? It'll find it for me. I'll say, what were we talking about? AI and Alzheimer's. And it'll find this voice and

video on a timeline. How many terabytes of data is this, Zach? Amazingly small. It's just gigabytes. How is that possible? Because, A, it compresses it down in real time using Apple silicon. And second of all—you and I, we're old and we don't realize that gigabytes are not big on a standard Mac that has a terabyte.

That's a thousand gigabytes. And so you can compress audio immensely. It's actually not taking video. It's just taking multiple snapshots every time the screen changes by a certain amount. Yeah, it's not trying to get video resolution per se. No, and it's doing it. And I can see a timeline. It's quite remarkable.
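(The mechanism described here—capture a snapshot only when the screen has changed enough, OCR it, and keep a timestamped, searchable log—can be sketched in a few lines. This is an illustration of the idea, not Rewind's actual implementation, and it assumes Pillow and pytesseract, plus the Tesseract binary, are installed.)

```python
# Sketch of the idea behind screen-recall tools: grab a screenshot only when the screen
# has changed by more than a threshold, OCR the text, and keep a searchable timeline.
import time

import pytesseract
from PIL import ImageChops, ImageGrab

def changed_enough(prev, curr, threshold=8.0):
    """Mean per-pixel difference between two frames, compared to a threshold."""
    if prev is None:
        return True
    diff = ImageChops.difference(prev.convert("L"), curr.convert("L"))
    histogram = diff.histogram()
    total = sum(i * count for i, count in enumerate(histogram))
    return total / (curr.width * curr.height) > threshold

log, previous = [], None
for _ in range(5):                                   # demo: five capture cycles
    frame = ImageGrab.grab()
    if changed_enough(previous, frame):
        text = pytesseract.image_to_string(frame)    # OCR whatever is on screen
        log.append({"t": time.time(), "text": text})
        previous = frame
    time.sleep(2)

hits = [e for e in log if "Alzheimer" in e["text"]]  # later: search the timeline
```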

And so that is, in my opinion, enough data so that, with enough conversations like this, someone could create a pretty good approximation of at least the public Zach. So then the next question is: is Zach willing to have Rewind AI on a recording device, his phone, with him 24-7—in his private moments, in his intimate moments,

when he is arguing with his wife, when he's upset at his kids, when he's having the most amazing experience with his postdoc. If you think about the entire range of experiences we have—the good, the bad, the ugly—those are probably necessary if we want to formulate the essence of ourselves. Do you envision a day in which people can say, "Look, I'm willing to take the risks associated with that"—and there are clear risks associated with doing that—

But I'm willing to take those risks in order to have this legacy, this data set to be turned into a legacy. I think it's actually pretty creepy to come back from the dead to talk to your children.

So I actually have other goals. Here's where I take it. We are being monitored all the time. We have iPhones. We have Alexa devices. I don't know what is actually being stored, by whom, and for what. And people are going to use this data in ways that we do or don't know. I feel it helps us, the little guy, if we have our own copy and we can say, well, actually, look, this is what I said then. Yeah, that was taken out of context. Yeah, that was taken out of context. And I can do it. I have an assistant that can just

find it, and find exactly all the times I said it.

I think that's good. I think it's messing with your kids' heads to have you come back from the dead and give advice, even though they might be tempted. Technically, I think it's going to be not that difficult. And again, speaking about Rewind AI—again, I have no stake in them; I think I may have paid them for a license to run on my computer—the microphone is always on. So when I'm talking to students in my office, it's

It's taking that down. So there are some moments in my life where I don't want to be on record. There are big chunks of my life that are actually being stored this way. Well, Zach, this has been a very interesting discussion. I've learned a lot. I probably came into this discussion with about the same level of knowledge, maybe slightly more than the average person, but clearly not much more on just the general principles of AI, the evolution of AI.

I guess if anything surprises me—and a lot does—nothing surprises me more than the timescale that you've painted for the evolution within my particular field and your particular field, which is medicine. I had no clue that we were getting this close to that level of intelligence. Peter, if I were you—this is not an offer, because I'm too busy—but you're a capable guy and you have a great network.

If I was running the clinic that you're running, I would take advantage of this now. I would get those videos and those sounds and get all my patients—with, of course, their consent—to be part of this, and actually follow their progress. Not just the way they report it, but by their gait, by the way they look. You can do great things

in what you're doing and advance the state of the art. You're asking, who's going to do it? You're doing some interesting things. You could be pushing the envelope, using these technologies as just another very smart, comprehensive assistant. Zach, you've given me a lot to think about. I'm grateful for your time and obviously for your insight and years of dedication that have allowed us to be sitting here having this discussion. Thank you very much. It was a great pleasure. Thank you for your time.

Thank you for listening to this week's episode of The Drive. It's extremely important to me to provide all of this content without relying on paid ads. To do this, our work is made entirely possible by our members. And in return, we offer exclusive member-only content and benefits above and beyond what is available for free. So if you want to take your knowledge of this space to the next level, it's our goal to ensure members get back much more than the price of the subscription.

Premium membership includes several benefits. First, comprehensive podcast show notes that detail every topic, paper, person, and thing that we discuss in each episode. And the word on the street is nobody's show notes rival ours.

Second, monthly Ask Me Anything or AMA episodes. These episodes are comprised of detailed responses to subscriber questions, typically focused on a single topic and are designed to offer a great deal of clarity and detail on topics of special interest to our members. You'll also get access to the show notes for these episodes, of course.

Third, delivery of our premium newsletter, which is put together by our dedicated team of research analysts. This newsletter covers a wide range of topics related to longevity and provides much more detail than our free weekly newsletter. Fourth, access to our private podcast feed that provides you with access to every episode, including AMA's sans the spiel you're listening to now and in your regular podcast feed.

Fifth, the Qualies, an additional member-only podcast we put together that serves as a highlight reel featuring the best excerpts from previous episodes of The Drive. This is a great way to catch up on previous episodes without having to go back and listen to each one of them. And finally, other benefits that are added along the way. If you want to learn more and access these member-only benefits, you can head over to peteratiamd.com forward slash subscribe.

You can also find me on YouTube, Instagram, and Twitter, all with the handle PeterAttiaMD. You can also leave us a review on Apple Podcasts or whatever podcast player you use. This podcast is for general informational purposes only and does not constitute the practice of medicine, nursing, or other professional healthcare services, including the giving of medical advice. No doctor-patient relationship is formed.

The use of this information and the materials linked to this podcast is at the user's own risk. The content on this podcast is not intended to be a substitute for professional medical advice, diagnosis, or treatment. Users should not disregard or delay in obtaining medical advice for any medical condition they have, and they should seek the assistance of their healthcare professionals for any such conditions.

Finally, I take all conflicts of interest very seriously. For all of my disclosures and the companies I invest in or advise, please visit peteratiamd.com forward slash about where I keep an up-to-date and active list of all disclosures.