Attosecond pulses allow scientists to capture extremely fast motions, such as electrons moving through materials, chemical bonds forming or breaking, and even the photoelectric effect. These ultra-short light flashes enable 'movies' of these processes, providing detailed insights into quantum mechanics and material behavior.
The photoelectric effect, described by Einstein, involves electrons being emitted from a material when light is shone on it. Attosecond science has revealed that this process is not instantaneous but takes around 100 to 700 attoseconds, providing new insights into quantum mechanics and the behavior of electrons in atoms and molecules.
Attosecond pulses can be used for 'molecular fingerprinting,' where light is shone through a drop of blood. The resulting light wave can reveal specific molecular signatures, allowing for the detection of diseases based on the unique patterns of molecules present in the blood.
Biological molecules at room temperature are highly dynamic, making it difficult to capture their structure without freezing them. However, X-ray free electron lasers (XFELs) allow for real-time imaging of biomolecules at room temperature, providing a more accurate representation of their natural behavior.
Attosecond light pulses could enable petahertz-speed electronics by using light waves to control electron movement, potentially making computers a million times faster than current gigahertz-speed devices. This technology could also pave the way for room-temperature quantum computers, which would be more energy-efficient.
AI is used to optimize experimental setups, interpret logbooks, and analyze massive amounts of data in real-time. For example, AI can predict the structure of biomolecules from diffraction images taken at XFELs, helping researchers quickly identify useful data and streamline their experiments.
Ultrafast electronics could lead to energy-efficient quantum computers that operate at room temperature, eliminating the need for cryogenic cooling. This would significantly reduce energy consumption compared to current quantum computers, which require extremely low temperatures.
Hi, everyone. It's Russ Altman here from the Future of Everything. We're starting our new Q&A segment on the podcast. At the end of an episode, I'll be answering a few questions that come in from viewers and listeners like you.
If you have a question, send it our way either in writing or as a voice memo, and it may be featured in an upcoming episode. Please introduce yourself, tell us where you're from, and give us your question. You can send the questions to thefutureofeverything@stanford.edu. The future of everything, all one word, no spaces, no caps, no nothing, at stanford.edu.
S-T-A-N-F-O-R-D dot E-D-U. Thanks very much.
So this is definitely true for attosecond science. The field has evolved quite a bit in the last 20 years. So in the early days, people were happy to just understand what an attosecond pulse looks like and to be able to produce this very short light. But nowadays, the community is very brave. So we're looking into microelectronics. We're looking into medical applications. Ferenc Krausz, as an example, is looking into using some of the techniques developed in the field for what he calls
molecular fingerprinting. So essentially you shine the light through a drop of blood, you record the sort of light wave that comes out, and then from the exact wave form that you record, you can infer what type of diseases you have.
This is Stanford Engineering's The Future of Everything, and I'm your host, Russ Altman. If you enjoy the podcast, please hit follow on the app that you're listening to right now. That'll guarantee that you never miss an episode and you're fully briefed on the future of everything.
Today, Matthias Kling from Stanford University will tell us that our ability to generate super fast pulses of light is going to revolutionize scientific discovery and computing. It's the future of ultra-fast electronics. Before we get started, please remember to follow the show in whatever app you're listening to. That'll guarantee that you never miss the future of anything.
So we all know that electronics are fast. Computers, fiber optics, they're operating on the scale of nanoseconds, which are one billionth of a second. There are a billion nanoseconds in a second. And that's how our computers and smartphones are working these days. But physicists have recently figured out ways to generate pulses of light that are at the scale of attoseconds, A-T-T-O seconds.
There are a billion attoseconds in one nanosecond. And therefore, if you've ever heard this word, there are a quintillion of them in a second.
So what does this matter? Well, it means that we might be able to make measurements of physical systems that capture motions that are super fast and even make movies of chemical bonds forming or breaking or individual electrons emerging from the surface of a new material.
It'll also lead to faster computers, including both traditional computers and quantum computers. Well, Matthias Kling is a professor of photon science and applied physics at Stanford University and an expert at these very fast pulses of light and their applications. He's going to tell us how all of this technology might revolutionize the future of everything.
- Matthias, one of your areas of research expertise is ultra-fast electronics, ultra-fast photonics. What are the technologies or capabilities that are making these advances possible now? - Yeah, thank you, Russ. First of all, thank you for having me.
I love talking ultra-fast. So when you think about the fastest processes that we're used to, that's, for instance, our computers, right? They run at gigahertz frequencies. So if you convert that to a timescale, it's nanosecond scales.
So that's sort of scales that we're sort of used to, but then there's quite a few orders of magnitude of scales below that where things evolve even faster than electrons zipping around in your electronics. And that is, for instance, molecules rotating,
That takes place on the picosecond timescale, so that's 10 to the minus 12 seconds. So a picosecond is a thousand times faster than a nanosecond. I'm just checking. That's right. That's a thousand times faster than a nanosecond. And then yet another thousand times faster than a picosecond is a femtosecond. That's when molecular bonds break and recombine.
And the timescales that I'm concerned with are even faster, attosecond timescales. So that's 10 to the minus 18 seconds. And it's a billionth of a billionth of a second. And you have to write a lot of zeros on the board to get to that one in the end. Yes. This is ATTO, just for people who are not familiar, A-T-T-O, second, ATTO.
is what we're talking about here. And there is, if I'm keeping track of the numbers, a billion of them in a nanosecond, which is already the timescale our computers are operating at. Okay. Yeah. So, okay. And essentially one attosecond compares to one second,
That's about the heartbeat rate, right? As one second compares to the age of the universe. So just the ability of researchers to be able to look at these incredibly fast timescales is already mind-blowing. And in fact, 2001 was the
magical year where two groups managed to measure the first attosecond pulses, the groups of Ferenc Krausz and Pierre Agostini, based on a technique that Anne L'Huillier developed that is called high harmonic generation, where you generate X-rays from intense laser interactions.
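The unit ladder in this stretch of the conversation can be sanity-checked in a few lines (a minimal sketch; the age-of-the-universe figure is the standard ~13.8-billion-year estimate, not a number from the episode):

```python
# The timescale ladder: nanosecond -> picosecond -> femtosecond -> attosecond,
# each step a factor of a thousand.
ns, ps, fs, atto = 1e-9, 1e-12, 1e-15, 1e-18  # all in seconds

assert round(ns / ps) == 1000    # picosecond: 1000x shorter than a nanosecond
assert round(ps / fs) == 1000    # femtosecond: 1000x shorter still
assert round(fs / atto) == 1000  # attosecond: another 1000x

print(f"attoseconds per nanosecond: {ns / atto:.0e}")  # a billion
print(f"attoseconds per second:     {1 / atto:.0e}")   # a quintillion

# The analogy: one attosecond is to one second roughly as one second
# is to the age of the universe (~13.8 billion years).
age_of_universe = 13.8e9 * 365.25 * 24 * 3600  # seconds, roughly 4.4e17
print(f"age of universe: {age_of_universe:.1e} s")
```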
And they have been awarded the Nobel Prize last year for this discovery. And there's a good reason that they were awarded the prize, because these very short light flashes allow us to take movies. So imagine you're dancing around, right, in a disco or so, and you have that strobe light
illuminating your dance, then you will see yourself in sort of still frame images of dancing around the floor. And if you put them all together, you can sort of assemble the whole dance out of it. Yes. And in a way, if you use these extremely short light flashes that are just attoseconds short,
We can actually flash them at electrons as they undergo motion in molecules or in nanosystems or whatever it is, and we can see how they move from A to B and so on. So they're so incredibly fast that we need these very, very short light flashes to sort of take a still frame image of their motion.
So that's, thank you very much. So that's fantastically exciting. Attoseconds are our new favorite time unit. What kinds of things are happening in physics that you could take a movie of at attosecond timescales? So you mentioned that the heartbeat is every one second or so. So we can get nice pictures of a heartbeat since it takes about a second. I only have to divide that up, say, into 10 or 100 frames.
One hundredth of a second or one tenth of a second, and I can get a nice smooth movie of one or two heartbeats. That's right. We don't need attoseconds to look at hearts. Oh, maybe I'm wrong, you'll tell me, at least for the gross heartbeat. It would just be a very, very, very slow-mo. And many people know about this now because we all have slow-mo on our phones; if you take too many time points, it just takes forever for the heart to beat. So there must be interesting things happening at the attosecond timescale. Paint a picture of what kinds of things we'll be able to see in these movies.
Yeah, so imagine... well, the first movie, in fact, that was ever taken was a movie that Eadweard Muybridge took at Stanford. Stanford University didn't exist at the time, but Stanford was this big horse farm. Yes. And the question there was whether horses have all four hooves in the air as they are galloping, right?
And they couldn't answer that question because the motion was just too fast. And so they were debating whether this is the case or not. And then Eadweard Muybridge had this really nice idea of assembling a set of cameras along the horse track,
and sort of triggering them one by one as the horse was going along the track. And so he recorded these still frame images of the horse along the track and well, he did capture in fact one image where the horse has all four hooves in the air. So they were able to answer a very fundamental question that existed at the time with the first type of movie that was ever recorded. - Great. - And we're taking this basic concept to the extreme
So we're looking at the fastest motions that you can imagine nowadays, and that is electrons
Zipping around atoms, zipping around small molecules. And so one very fundamental process we looked at, and there's actually a paper that just came out yesterday, is photoemission. The photoelectric effect is something that Einstein essentially was able to describe using quantum mechanics. And he was awarded the Nobel Prize
for this description. And it's essentially you shine light on some metal, for instance, and what is emitted then as an action of the light impinging on this structure is an electron.
Okay. And this electron has a certain kinetic energy, and we can measure that. And so you would think... this was about 100 years ago, right? At least his prize was. So you would think that this is well understood, right? 100 years of physics. I mean, we must understand this.
But in fact, what was not understood until very recently, until attosecond science came along, is how long it actually takes between the light being absorbed by the structure and the electron coming out. Yes. And this was assumed to be essentially instantaneous, right? It was so fast that people said, okay, it's instantaneous.
But that's not really true. So if you look at the details, you can find out that this electron actually it really travels and it needs a little bit of time to come out.
And the interesting thing is that the time it takes to come out has a lot to do with what kind of environment it's in, right? Whether it's in a molecule or an atom, how many others are around, how it kicks around other electrons, and things like that. And so this is quantum mechanics at its extreme. We're looking at the timescale it takes an electron to come out of, for instance, an atom as we shine light on it.
And this is something we can measure. And it takes just about, let's say, 100 attoseconds; this latest measurement went all the way up to 700 attoseconds. And it tells us about the system. And so we can compare this to very detailed, very hard calculations. In fact, the data that we just published has existed since 2018. So that's six years ago, right? It took six years for the theory to catch up.
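To get a feel for those numbers, here is a back-of-envelope sketch (the electron speed is an assumed, Fermi-velocity-scale value for illustration, not a figure from the episode):

```python
# How far does a photoemitted electron travel during a 100-700 attosecond delay?
# Assume a typical electron speed inside a material of ~1e6 m/s.
v_electron = 1e6  # m/s (assumed order of magnitude)

for delay_as in (100, 700):
    delay_s = delay_as * 1e-18            # attoseconds -> seconds
    distance_m = v_electron * delay_s
    print(f"{delay_as} as -> {distance_m * 1e10:.1f} angstroms")
```

The result is roughly one to seven angstroms, i.e. atomic dimensions, which is consistent with the idea that these delays encode the electron's immediate environment.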
So I like these experiments where we get the data and then the theorists are scratching their heads, like, oh, how do I explain this? I need to have a theory that matches. Yes. Okay. So that sounds fantastic. Let me just pause and make sure we understand.
Are you taking movies of multiple electrons leaving the surface and then averaging them to get a sense? Or are you able to actually observe individual movies of individual electrons leaving the surface?
Yeah, so this is interesting. And this actually brings me to free electron lasers. Because there have been experiments that these Nobel laureates did, since the invention or the birth of attosecond science, using just normal tabletop setups in their labs. And in that case, they looked at what's called valence electrons. So these are electrons that are sort of the least bound electrons in some sense, and they're the easiest to remove.
Yes. And typically, they're not very localized. Right. So if you imagine you have a solid or you have a molecule, these electrons don't sit just on one particular atom, but have some kind of delocalization over the whole structure. So your question is a very good one, because using these tabletop techniques, it was very hard to answer where exactly that electron came from.
Yes. Now we can use X-rays at much higher energies. And these X-rays penetrate very deep into atoms, very specific atoms. We can tune the energy just to, for instance, a nitrogen atom, an oxygen atom, things like that. And then look at the photoemission from that particular atom in a very large molecule. So we're very specific. Wow. Yeah, it's almost surgical in some sense. So it's very specific in what we can probe.
And it's really great, because the more specifically we can look at where something is happening, the more detailed the movie is that we will record in the end, and the more detailed the comparison can be to the theory that we're trying to push ahead. Right. Because in an ideal world,
Theory can explain everything. So since we're talking about the future of everything, you would hope that one day we have an AI and it uses theoretical calculations that are so advanced that we don't need to do experiments anymore. Right. Right. I think this is actually my goal for biology as well. Yes. So my personal opinion on this is we'll never get there.
But that's also because we're asking more and more complicated questions. So simple questions, probably theory, like if I drop an apple and I let it fall to the ground, I can pretty much well predict when it arrives on the floor.
But for complicated enough questions, theory has to make assumptions. And especially true for quantum mechanics. If I look at very complex systems, I need to make a lot of assumptions to describe what an experiment gives me. And so this interplay between experiment and theory to kind of push each other is
to develop the theory and make it better and to help us use then this sort of basic understanding to make much better materials, for instance, for solar cells, much better catalysts for producing new fuels and things like that. So it's real world impact in this very fundamental understanding that we are reaching.
Yes, I've had guests on the Future of Everything podcast who are doing material science or electronic technology.
I'm sorry, batteries. And it's always impressive to me that there is still a very empirical aspect to these fields. This is not a criticism, but the experimental work is very important because they don't always have the supporting theories. So what I'm understanding from you is that these measurements will give you the basis for the theories where they may be able to have a little bit more of an idea before they go into the laboratory of what to look for or what to build to get the properties that they're seeking.
So this is definitely true for attosecond science. The field has evolved quite a bit in the last 20 years. So in the early days,
people were happy to just understand what an attosecond pulse looks like and to be able to produce this very short light. But nowadays, the community is very brave. So we're looking into microelectronics. We're looking into medical applications. Ferenc Krausz, as an example, is looking into using some of the techniques developed in the field for what he calls
molecular fingerprinting. So essentially you shine the light through a drop of blood, and you record the sort of light wave that comes out.
From that, you can infer what type of diseases you have. Yes, because there are specific molecules that have a kind of, as you said, you used the word signature. These molecules have a signature that is unique and you can detect. Okay, let me ask a few questions about that because now you're getting close to things that I maybe understand fully. Okay.
One of the things about living systems and blood is that it's at room temperature, or body temperature, and the molecules are moving around a lot. And earlier you were saying you could focus on an individual nitrogen. That implies to me that you're going to have to do something to keep these molecules from moving too much, but maybe not. So, the issue of temperature: do you have to freeze everything to make these measurements, or is room temperature or body temperature within range? Yeah.
So this is an excellent question because you already motivated why it's interesting to study systems at the temperature where they're functioning in the body, right? So ideally, we don't want to have to freeze structures out to study their behavior because it will be very particular to that crystalline structure we created, and it might not reflect what we see in the real world, right? This interview would be very different if both of us were frozen.
For sure. Exactly. So that's a good example. And in fact, this is where these X-ray free electron lasers come in. They can produce... and I have to back up a little bit to explain what that is. So a free electron laser essentially starts with a linear accelerator that accelerates electrons, these tiny quantum particles,
to very high energies. For the experts in the audience: with our new superconducting accelerator, we can at the moment go up to four gigaelectronvolts, and in the future it will be eight gigaelectronvolts. So these are very, very high energies. And then we send these very high energy electrons through what's called an undulator. It's essentially a periodic magnetic structure that forces the electrons to wiggle, to oscillate a bit.
And electrons don't like to oscillate. They essentially emit radiation as they do. It's called synchrotron radiation. And using a few tricks in these free electron lasers, we can actually amplify that radiation. We can make it coherent instead of incoherent, as it's called. So it becomes laser-like. And so it really gains a lot in intensity. And these FELs, they have...
an incredible increase in brightness, in intensity, over the synchrotrons that have been generating X-rays for decades. XFELs have been around just since... well, the LCLS has been around since 2009. So that's when the first experiments were conducted, when it saw first light. Our new superconducting accelerator was actually commissioned and taken into operation just last year.
And so it's a relatively young field. And now we can use these extremely bright X-ray pulses. Imagine, you know, this cannot at all be compared to what you have at your doctor's office. First of all, it's laser-like, which helps a lot with detecting, for instance, information that helps you build 3D images instead of just projections. So when you take an image of your tooth, let's say at the dentist, sometimes it's really hard to see the details. Right.
When you use laser-like radiation and take the same image, even I, and I'm not a dentist, can tell what I see, because it's so sharp. And you get that sort of depth information, so you get 3D images. And this is sort of the type of radiation we generate, just a lot brighter.
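The undulator step described above follows the standard free-electron-laser resonance condition; the numbers below are a sketch with the 4 GeV beam energy from the conversation, while the undulator period and strength parameter K are illustrative placeholders, not the facility's actual settings:

```python
# Undulator resonance: lambda = (lambda_u / (2 * gamma**2)) * (1 + K**2 / 2)
E_beam_eV = 4e9        # 4 GeV beam energy, as mentioned in the episode
m_e_eV    = 0.511e6    # electron rest energy in eV
lambda_u  = 0.026      # undulator period in meters (assumed, ~26 mm)
K         = 1.0        # undulator strength parameter (assumed)

gamma = E_beam_eV / m_e_eV                                # Lorentz factor, ~7800
wavelength = lambda_u / (2 * gamma**2) * (1 + K**2 / 2)   # meters
photon_eV = 1239.84e-9 / wavelength                       # hc ~ 1239.84 eV*nm

print(f"photon wavelength: {wavelength * 1e9:.2f} nm")  # sub-nanometer: hard X-rays
print(f"photon energy: {photon_eV / 1e3:.1f} keV")
```

With these placeholder parameters the output lands at a few keV, i.e. in the hard X-ray range, which is why a GeV-scale accelerator plus a centimeter-period undulator yields atomic-resolution light.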
And it's also very short. So the pulses we get from the XFEL, well, we can tune them. We can nowadays generate attosecond light pulses. It's actually something that was invented here at LCLS by Agostino Marinelli and James Cryan, two really fantastic scientists at the lab. And with these extremely short light pulses, we can, for instance, illuminate a biomolecule. And then...
And then take an image, a diffraction image, x-ray diffraction image in one single shot. So one single x-ray illumination gives us the structure. And now imagine in real life, in nature, you don't just have that one structure. You would need to freeze it out to have just that one structure. You have many different...
ways the protein could look, depending on temperature, depending on the environment, and so on. And so what we do at these XFELs is we take all of these images. So we leave the system at room temperature, maybe stimulate some dynamics, like we mix something in and then this thing folds or it catalyzes something. So enzymes are things that we look at.
And we are following all of that at the same time. So essentially, we're taking these multitudes of images of all the things that are going on at the same time. And then we can use that information to really tell what nature is doing.
And one of the really exciting things that nature is doing is to actually generate the oxygen that we breathe. So one of the most investigated systems, and one of the success stories of these XFELs, is the study of photosystem II. It's essentially a system that sits in plants and uses just water and sunlight to generate oxygen.
Somewhat key for life on Earth. Exactly. Exactly. And everyone knows that. But the exact way this is happening... it's a cycle, and in total this catalytic cycle is very complicated. It has a couple of steps that people knew about, but they didn't quite know what the structure looks like and how it functions in reality. It was really only possible at these XFELs to study that for the first time.
So now we have a sort of very fundamental understanding of how photosystem II is generating oxygen by breaking up water with just sunlight. And this is, of course, amazing, because if you want to generate oxygen from water using our human technology, right, not what nature is doing, we would need to use very high electric currents or whatever it is. It would be very, very expensive and
inefficient as a process. So we're learning nature's secrets by looking at and taking movies as it happens. That's right. This is The Future of Everything with Russ Altman. More with Matthias Kling next. Welcome back to The Future of Everything. I'm Russ Altman, and I'm speaking with Professor Matthias Kling from Stanford University. In the last segment, Matthias explained to us some of the new capabilities in generating very rapid bursts of light
that can control electrons and can make measurements on physical materials that allow us to see them move and change in real time. They're making movies that are super, super slow-mo. In this segment, Matthias will tell us a little bit about how all of this will lead to faster computers and electronics. He'll also tell us what the role of AI and machine learning is in all of these endeavors.
So, Matthias, I wanted to ask you about an area that you're an expert in, which is ultra-fast electronics. Very early in the conversation, you referred to the fact that our current electronics are operating at a nanosecond timescale, but we've now been talking about attoseconds for quite a while. Is there a possibility of using that speed for our next generation of electronics?
Yes, thank you for this excellent question. So I'm very passionate myself about advancing the speed of electronics. And one of the ways we dream of doing this is to use the
the waves themselves, the light waves themselves, and the electric field that the light wave has, to steer electrons in circuitry. So at the moment, this is done just by applying a voltage, and then you shift electrons around in these wires, and they are typically...
There is some resistance that limits the speed. There are also other sorts of limits. And so, in fact, the transistors nowadays have reached gigahertz-level frequencies. So they're still operating at that nanosecond scale.
But we dream about pushing that all the way to the attosecond scale, which in frequency space would not be gigahertz. It would be not even terahertz, but the next level, which is petahertz. And so you might wonder, how far could we ever go? Well, there is a limit. Moore's law is going to have a limit. And the limit is essentially the speed of light. So you can't be faster than the speed of light. This is a very fundamental law.
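The frequency-to-period arithmetic behind that gigahertz-to-petahertz jump is simple; a quick sketch:

```python
# Clock period is 1/f: gigahertz means nanosecond cycles,
# petahertz means femtosecond cycles.
for name, f_hz in [("gigahertz", 1e9), ("terahertz", 1e12), ("petahertz", 1e15)]:
    print(f"1 {name}: period = {1 / f_hz:.0e} s")

# A 1 PHz optical cycle lasts about a femtosecond, so its half-cycle is
# roughly 500 as; steering electrons within a single cycle is what pushes
# switching toward attosecond timescales.
```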
And essentially, as soon as we start moving electrons around at almost the speed of light, that's how fast we can go. But we're very, very far away from that. So we're about a million times below that speed. And so we want to use these light waves. The attosecond community has actually done a lot to produce light waves that are very well controlled, controlled to a minute detail
on the sub-cycle evolution. Imagine a light wave with multiple cycles that you could draw on the board, showing how the light wave propagates. And now I really look at the very fine details of how that light wave interacts with electrons. So this principle was, in fact, demonstrated in 2013 in a first prototype device, where they were shining light on a very simple
device that has just a dielectric and then two metal contacts. And with that intense light, it was possible to turn the dielectric into a metal. So it was possible to essentially make it conducting. It was a change of 10 to the 18 in conductivity. So this is pretty much what a transistor is doing. Right. So you go from electrons cannot pass to things can pass, but now it's light. Yeah.
And now it's really fast, because you use the light wave itself to switch it on and off. So for the switching speed, we already demonstrated in the community that we can go to these petahertz timescales. So that's great. To make a real transistor is still a challenge, because we need to integrate essentially this very concept into something that has a billion or a trillion transistors
and then make that all work in parallel. And of course, think about the light source that we would need. And there's many challenges, but it's a field that is, in my view, it's exponentially growing. There's more and more people jumping on it. And it's also really nice because this is something that, yes, with every sort of vision that you have and where you want to go, even if we might never actually reach that point where we make that sort of light speed electronic device, there's a lot of,
great discoveries that will happen along the way. And that is sort of the fun of it, right? That as a researcher... Yeah, I mean, you said there's a million, there's an opportunity to be a million times faster, but people might be very happy to be a thousand times faster for a little while while we're working out the details. Now, when you think about... I know this is very early and I know this is far off, but that doesn't stop me from asking these questions.
When you think about these potential computers, are they going to be very energy efficient, or are they likely to be, at least initially, very energy consumptive? Because as you know, in the world these days, people are now thinking not just about compute, but about
compute per power requirement because the power requirement is starting to scale to things that affect, you know, the temperature of the globe. So I know it's early and I know, but at least theoretically, are these things going to be low or are they going to be high energy consumers?
I mean, we all aim, of course, to produce low-energy consumer electronics. There's sort of one way that I can imagine we're getting there, and it's essentially to enable quantum computing at room temperature. Because at the moment, for quantum computers, in fact, you don't need a billion transistors. Right. You actually double the computing
power with every single qubit you're adding. And so it's a very limited number that we need to have a huge sort of computing power in these quantum computers. And quantum computers are based on having coherence, having sort of, let's imagine you have a wave type
a thing that goes into a quantum computer, and we need to preserve that wave nature. So we make these calculations with these waves, and then we need to preserve that nature. And that is essentially what light wave electronics, as we call it, or petahertz electronics, is also doing. We're using that very coherent nature of light and we're preserving the sort of quantum nature of the process.
And since we're investigating all of this at room temperature, we're hoping one day we will have the right recipe to maybe not build a billion transistors on a circuitry, but to have enough of these nodes to essentially build a quantum computer that could run at room temperature. And that would be very energy efficient, very energy efficient.
Because we don't need to cool it down. Imagine cooling something down to cryogenic temperatures. That's a huge plant that you need. In fact, we have such a plant here at LCLS. So we're running the superconducting accelerator here
at two Kelvin. So we need to cool it down to two Kelvin to reach that superconductivity where there's no resistance. So essentially we can crank up the fields and we can generate these massive fields without generating a lot of heat. And so this becomes efficient in terms of the electron acceleration process, but it's very inefficient in terms of having to produce that cold helium in the first place. And so the same is true for quantum computers. At the moment, they're all using essentially cryogenic temperatures
And this is something that I'm dreaming of together with the community we could sort of move away from, develop either superconductors at room temperature that would enable us to do so or advanced light wave electronics I just talked about to do so.
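The earlier point about doubling computing power with every added qubit can be made concrete (a minimal sketch of state-space counting, not of any actual device):

```python
# An n-qubit register spans 2**n basis states; the state space doubles
# with every qubit added, unlike classical bits, which add capacity linearly.
for n in (1, 2, 10, 50):
    print(f"{n:>2} qubits -> {2**n:.3e} basis states")

# Simulating 50 qubits classically would mean storing 2**50 complex
# amplitudes, on the order of petabytes: hence the leverage per qubit.
```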
Very exciting. And so it's very interesting to hear that quantum computing might be the first easier application than a kind of traditional computer because of these considerations of room temperature and also the power of just adding individual qubits. Well, in the last couple of minutes, I wanted to ask you about the role of AI and machine learning, because I know it's important in the field and it's popping up in everybody's life. And my understanding is it's even popping up in your work.
It absolutely is. In fact, we have an ML/AI program here at SLAC, and it's a growing program that essentially has impact on all of our individual science programs. We're also very strongly connected, of course, with the Stanford community on this and with what's happening in Silicon Valley. And so these are extremely exciting times, I have to say.
There are many applications. The simple ones are, for instance, looking at the logbooks that we create. When we take experimental data, the people doing the experiments enter information in logbooks, and it's typically very cryptic. It's difficult, let's say, five years later to understand what someone had in mind when they wrote the logbook, right? Right. You can use these large language models nowadays to really help you interpret what you find in these logbooks.
And so instead of scratching your head and wondering what the heck they meant at the time, you can ask AI for its interpretation. And I think this can really help us. And that's a very simple application. We have even better ones in some sense. So imagine we have this really complicated machine that we use to produce the X-rays. It starts with an injector, and then we have a two-mile accelerator, and there are many, many different units that need to play together.
And in the past, we used to have expert operators. The facility runs 24/7. And the very best ones could align this machine within a brief amount of time, but you needed a lot of training to do so.
Nowadays, we can use AI to essentially help us with this alignment. So it's a lot faster. We save money doing so, we make the machine more efficient, and we make it more suitable for all of these applications we're after, because the more stable the machine is, the better data we get.
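The tuning loop described here can be sketched in miniature. Below is a toy greedy optimizer over hypothetical "magnet knobs" with a made-up pulse-energy diagnostic; the real systems at LCLS use far more sophisticated ML-based optimizers, so every name and number in this sketch is an illustration only.

```python
import random

def pulse_energy(settings):
    # Toy stand-in for a beam diagnostic: the (hypothetical) pulse
    # energy peaks when every knob sits at its ideal value of 0.0.
    return -sum(s * s for s in settings)

def tune(n_knobs=4, n_iters=200, step=0.1, seed=42):
    """Greedy random-walk tuner: perturb the knobs and accept the
    change only if the measured pulse energy improves. A crude
    stand-in for the Bayesian-optimization tools used in practice."""
    rng = random.Random(seed)
    best = [rng.uniform(-1, 1) for _ in range(n_knobs)]
    best_e = pulse_energy(best)
    for _ in range(n_iters):
        trial = [s + rng.gauss(0, step) for s in best]
        e = pulse_energy(trial)
        if e > best_e:
            best, best_e = trial, e
    return best, best_e

settings, energy = tune()
print(round(energy, 4))  # close to the optimum of 0.0
```

An automated loop like this can run continuously, which is part of why machine stability improves: it never gets tired the way a human operator does.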
I find these two examples really surprising because you haven't talked about all the data that I'm sure you're collecting. I mean, of course, at the end of the day, you've talked about actually ways up front to get the data collection process to be more efficient. And on the back end, hey, you can say to yourself, well, five years ago, we did something and we didn't think it was that important.
But all of a sudden now it becomes something very important. And can I use AI to help understand that? So I guess my final question is, what about AI to understand what you're observing? You were talking about three-dimensional movies and things like this. Is the AI going to play a role there? And of course, we're hoping it's true and not hallucinated.
Almost certainly, right? So at the moment, I can tell you the new superconducting accelerator produces up to a million pulses per second. So imagine you record up to a million images per second.
And each of these images has, let's say, megapixels. So it's a huge amount of data. In fact, it's so much data that we will struggle to store it somewhere. But the most important thing is, people come here with their favorite biomolecule, and all they want is the structure.
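Back-of-the-envelope arithmetic shows why storage is a struggle. Assuming, purely for illustration, a one-megapixel detector at 16 bits per pixel running at the full million frames per second:

```python
frames_per_sec = 1_000_000      # up to a million X-ray pulses per second
pixels_per_frame = 1_000_000    # assume a ~1 megapixel detector
bytes_per_pixel = 2             # assume 16-bit pixel depth

raw_rate = frames_per_sec * pixels_per_frame * bytes_per_pixel
print(raw_rate / 1e12, "TB/s")            # 2.0 TB/s
print(raw_rate * 3600 / 1e15, "PB/hour")  # 7.2 PB/hour
```

Even with generous rounding, a raw stream of terabytes per second cannot simply be written to disk, which is why the data must be filtered in flight.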
Right. So if we produce this huge amount of data, it becomes very difficult for researchers to go through it, interpret what they see, and sort it into "this is good data, this maybe is not so interesting," storing only what's really interesting and analyzing it. And this we can absolutely streamline with ML. So this is being used as we speak.
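The sort-and-discard step can be illustrated with a toy "hit finder." Real XFEL pipelines use ML classifiers on the detector images; here a simple intensity threshold on simulated frames stands in for that, so all names and numbers below are hypothetical:

```python
import random

def is_hit(frame, threshold=48):
    """Keep a frame only if its total scattered intensity exceeds a
    threshold -- a crude stand-in for the ML-based classifiers used
    to veto empty shots at XFELs."""
    return sum(frame) > threshold

def stream_filter(frames, threshold=48):
    # Retain only the frames worth storing and analyzing further.
    return [f for f in frames if is_hit(f, threshold)]

rng = random.Random(0)
# Simulated 32-pixel detector frames: mostly blanks, a few real hits.
blanks = [[rng.randint(0, 1) for _ in range(32)] for _ in range(90)]  # sum <= 32
hits = [[rng.randint(2, 5) for _ in range(32)] for _ in range(10)]    # sum >= 64
kept = stream_filter(blanks + hits)
print(len(kept))  # -> 10: only the hits survive the veto
```

Discarding the 90% of frames that carry no signal is what makes the downstream storage and structure analysis tractable.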
We have collaborations with exascale computing centers to look at this data in real time and to send the data off to a supercomputer, and then essentially
analyze the structure within just minutes. So we can put a sample in, we record this huge amount of data, and the ML algorithm essentially predicts the structure from that recorded data, something a human being could never do because it's just too much data to go through.
And we get a sort of first indication: is this experiment working? Should we spend more time on a particular substance, or should we maybe go to the next one, right? These are very important questions; we don't want to waste any photons we're sending to the experiments. So the faster we can analyze the data, and the more comprehensive the information we can get out of it, the better. And this is just one example, but essentially ML is
so important these days in running these very complex machines and in analyzing the huge amount of data that I cannot think of a world where I would separate the two again. So I do think also that facilities like the ones we operate here, LCLS at SLAC,
they can help the ML/AI community, because we are data producers, right? Right. So we produce a huge amount of data, and they can test whether the models and the different algorithms they are developing are applicable to these types of problems. And I think there is a lot to be learned
not just on our side from the ML community, but I think also on the other end, right? So the other way around. We're essentially the data providers, and they try different algorithms on this data. Well, that's great. And I think that's where we'll leave it. Thank you for this introduction to attosecond physics, the way it will help build better computers, and also use and help AI in the analysis of data across a wide range of applications.
Thanks to Matthias Kling. That was the future of ultra-fast electronics. Thanks for tuning into this episode. You know, we have more than 250 episodes in the back catalog, so you have access to a wide range of discussions on a diversity of topics that will give you a picture of the future of everything. Meanwhile, if you're enjoying the show, please consider telling your friends, family, and colleagues about it, because that's the best way to grow our audience and get feedback about how we're doing.
You can connect with me on X @RBAltman or with Stanford Engineering @StanfordENG.
If you'd like to ask a question about this episode or a previous episode, please email us a written question or a voice memo question. We might feature it in a future episode. You can send it to thefutureofeverything@stanford.edu. All one word, the future of everything. No spaces, no underscores, no dashes. Thanks again for tuning in. We hope you're enjoying the podcast.