
#376 – Stephen Wolfram: ChatGPT and the Nature of Truth, Reality & Computation

2023/5/9

Lex Fridman Podcast

People
Lex Fridman
An American podcast host and research scientist widely recognized in technology and science through his podcast and research work.
Stephen Wolfram
Topics
Lex Fridman: Explores ChatGPT, artificial intelligence, and the nature of reality and truth with Stephen Wolfram, and discusses the risks of AI and the future of education. Stephen Wolfram: Details the technical differences between ChatGPT and the Wolfram Language: the former makes shallow statistical predictions over vast text data, while the latter aims to build a deeper computational system that makes the world's knowledge computable. He argues that even simple programs can produce complex phenomena, much as nature does. He discusses computational irreducibility, the idea that to obtain the result of a computation you must perform the computation itself, which is crucial for understanding the universe and scientific discovery. As computationally bounded observers, humans can perceive only the computationally reducible parts of the world, so our perception is simplified, and temporal consistency is just a narrative we tell about ourselves. He argues consciousness is not the highest-level phenomenon in the universe but a specialization characterized by, among other things, a single thread of experience. He also discusses the importance of the observer in the computational universe and how to turn natural language into computational language. He suggests large language models may already have discovered underlying laws of language and thought, and explores what those laws are and how to make them explicit. Deep computation is not what large language models are good at; they are better at the things humans find easy in everyday life. He also covers the limitations of large language models, using them for code generation and debugging, natural language as a medium for interaction between systems, the challenge large language models face in defining "truth," and how computational language can improve accuracy. He argues the Second Law of Thermodynamics is a manifestation of computational irreducibility, the result of computationally bounded observers watching computationally irreducible systems. The existence of the universe is necessary while human existence is contingent, and human experience is a simplification of complex systems. He also discusses the possibility of alien intelligence and how to understand the computational universe.


Chapters
This chapter compares ChatGPT and Wolfram Alpha, highlighting their key differences in approach and capabilities. Wolfram Alpha focuses on deep computation using formal structures, while ChatGPT uses a shallower approach based on statistical analysis of existing text.
  • ChatGPT focuses on language generation based on patterns in vast text data.
  • Wolfram Alpha aims to make as much of the world computable, using deep computations and formal structures.
  • Wolfram Alpha's approach is viewed as 'deep and broad,' while ChatGPT's is 'wide and shallow'.

Transcript


The following is a conversation with Stephen Wolfram, his fourth time on this podcast. He is a computer scientist, mathematician, theoretical physicist, and the founder of Wolfram Research, the company behind Mathematica, Wolfram|Alpha, Wolfram Language, and the Wolfram Physics and Metamathematics projects. He has been a pioneer in exploring the computational nature of reality.

And so he is the perfect person to explore together the new, quickly evolving landscape of large language models, as human civilization journeys toward building superintelligent AGI. And now a quick few-second mention of each sponsor. Check them out in the description.

It's the best way to support this podcast. We've got MasterClass for learning, BetterHelp for mental health, and InsideTracker for tracking your biological data. Choose wisely, my friends. Also, if you want to work with our amazing team, we're always hiring; go to lexfridman.com/hiring.

And now, on to the full ad reads. As always, no ads in the middle. I try to make this interesting, but if you must skip them, friends, please still check out the sponsors. I enjoy their stuff; maybe you will too. This show is brought to you by MasterClass: $180 a year gets you an all-access pass to watch courses from the best people in the world in their respective disciplines. There are several components to effective learning.

I think learning the foundations is really important, and the best way to do that, depending on the field, is probably some kind of material that encapsulates the foundations. That could be a textbook, that could be a really good YouTube video, a really good tutorial in written or video form. Then there's the actual practice of those foundations by building something; again, it depends on the field. But I think a component of learning that is often not utilized is to learn from the best people in the world who did the thing you're trying to learn.

I think even if they don't cover the entirety of the foundations, even if they don't cover the kind of hands-on tutorial-type description you can get elsewhere, through their words you can get the wisdom of the details that mastery, I feel, requires, and you can also see and take in the mode of being required to achieve mastery in that field. I think it's so powerful that MasterClass allows you to look in on some of these world experts in a structured context and really intensely learn from them, not just the content, but the way of being. I want to absorb something of them.

There are too many great instructors to list, but Carlos Santana, Neil deGrasse Tyson, who's been on the podcast... I mean, these are just really excellent. If you want to check it out, go to masterclass.com/lex to get up to 35% off through Mother's Day. That's masterclass.com/lex for the 35% off. This episode is also brought to you by BetterHelp, spelled H-E-L-P, help. I posted this meme on Twitter recently, in that meme format with a car swerving off at an exit: going straight means going to a therapist, and swerving off on the exit is labeled, in quotes, "it is what it is."

And then the car is labeled as "most men." It's true. I think a lot of us face hardship in life, and I think there's a dance there, because being open to the richness of the experience of that hardship can really break you.

So there's some usefulness to "it is what it is." But afterwards, or during it, there has to be some component where you're raw and honest with your feelings, and you bring them to the surface with yourself and you introspect: what you think, what you feel, what you fear, what you hope.

It is so simple, but so many of us are afraid of the simplicity, of the intense feeling that our mind is capable of, that roller coaster that our mind takes us on. So I think bringing stuff to the surface with a licensed professional is definitely something I recommend.

Mental health is at the core of what it means to be a healthy human being. And BetterHelp is easy, discreet, affordable, and available everywhere.

Check them out at betterhelp.com/lex and save on your first month. That's betterhelp.com/lex. This show is also brought to you by InsideTracker, a service I use to track biomarkers from my biology. From the blood tests they take, it looks at the blood data, DNA data, and fitness-tracker data, all that kind of data coming from my body, to help me make decisions about my lifestyle.

The more conversations I've had with biologists, computational biologists, biochemists, bioengineers, neurobiologists, people specializing in particular systems within the body, virologists, immunologists, all of that, the more I realize how incredible the human body is, how incredible its machinery is, and how many signals it provides internally for that large-scale hierarchical system to maintain equilibrium, to maintain health, to maintain life in the full definition of those words. And I think it's a really exciting possibility that in the future we can get as much signal as possible, richly temporal signal, every second of every moment from every system within the body, to help us make predictions about where stuff might go wrong and give us advice on what we should do. And so I think services like InsideTracker are a really important step in that direction.

Get special savings for a limited time when you go to insidetracker.com/lex. This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here's Stephen Wolfram.

You announced the integration of ChatGPT and Wolfram|Alpha and Wolfram Language. So let's talk about that integration. What are the key differences, from the high philosophical level, maybe down to the technical level, between the capabilities of, broadly speaking, the two kinds of systems: large language models, and this computational, gigantic computational system infrastructure

that is Wolfram|Alpha. Yeah. So what does something like ChatGPT do? It's mostly focused on making language, like the language that humans have made and put on the web and so on. Its primary underlying technical thing is: you've given it a prompt, and it's trying to continue that prompt in a way that's somehow typical of what it has seen, based on a trillion words of text that humans have written on the web.

And the way it's doing that is with something which is probably quite similar to the way we humans do the first stages of that, using a neural net. So it's just saying: given this piece of text, let's ripple it through the neural net one word at a time and get one word at a time of output. And it's sort of a shallow computation on a large amount of training data, which is what we humans have put on the web. That's a different thing from the computational stack that I've spent the last 40 years or so building, which has to do with what you can compute in many steps, potentially a very deep computation.
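ChatGPT itself uses a large transformer neural net, but the core idea of "continue the prompt in a way typical of what's been seen" can be caricatured with a toy bigram model. The corpus and code below are invented purely for illustration:

```python
import random
from collections import Counter, defaultdict

# Toy corpus standing in for "a trillion words of text from the web".
corpus = ("the cat sat on the mat . the cat ate the fish . "
          "the dog sat on the rug .").split()

# Count which word follows which: the crudest possible version of
# "statistics of what humans have written".
follows = defaultdict(Counter)
for w, nxt in zip(corpus, corpus[1:]):
    follows[w][nxt] += 1

def continue_prompt(word, n=4, seed=0):
    """Generate n more words, each sampled according to the
    statistics of what followed the previous word in the corpus."""
    rng = random.Random(seed)
    out = [word]
    for _ in range(n):
        options = follows[out[-1]]
        if not options:
            break
        words, counts = zip(*options.items())
        out.append(rng.choices(words, weights=counts)[0])
    return " ".join(out)

print(continue_prompt("the"))
```

Every continuation this produces is, by construction, "like what's already in the corpus," which is exactly the shallow-but-wide character being described.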

It's not taking the statistics of what we humans have produced and trying to continue things based on those statistics. Instead, it's trying to take the formal structure that we've created in our civilization, whether from mathematics or from systematic knowledge of all kinds, and use that to do arbitrarily deep computations, to figure out things that aren't just "let's match what's already been said on the web," but "let's potentially compute something new and different that's never been computed before." So as a practical matter, our goal is to make as much of the world as possible computable, in the sense that if there's a question that in principle is answerable from some sort of expert knowledge that's been accumulated, we can compute the answer to that question, and we can do it in a reliable way.

That's the best one can do given the expertise that our civilization has accumulated. It's a much more labor-intensive thing on the side of creating the computational system to do that. Obviously, the ChatGPT world is more like: take things which were produced for quite other purposes, namely all the things we've written out on the web and so on, and forage from that things which are like what's been written on the web.

So from a practical point of view, I view the ChatGPT thing as being wide and shallow, and what we're trying to do with building out computation as being deep, also broad, but most importantly deep. I think another way to think about this is: you go back in human history, I don't know, a thousand years or something, and you ask, what can the typical person figure out? The answer is that there are certain kinds of things we humans can quickly figure out, the kinds of things our neural architecture and the things we learn in our lives let us do.

But then there's this whole layer of formalization that got developed, which is, you know, the whole story of intellectual history and the whole depth of learning. That formalization turned into things like logic, mathematics, science, and so on. And that's the kind of thing that allows one to build these towers of things you work out. It's not just "I can immediately figure this out."

It's "no, I can use this kind of formalism to go step by step and work out something which was not immediately obvious to me." And that's the story of what we're trying to do computationally: to be able to build those kinds of tall towers of what implies what implies what, and so on. As opposed to the "yes, I can immediately figure that out; it's just like what I saw somewhere else, something I heard or remember," something like that.

What can you say about the kind of formal structure, the kind of formal foundation you can build such a formal structure on, about the kinds of things you would start with in order to build these kinds of deep computable knowledge trees?

So the question is how you think about computation, and there are a couple of points here. One is what computation intrinsically is like, and the other is what aspects of computation we humans, with our minds and with the kinds of things we've learned, can relate to in that computational universe.

So if we start on "what can computation be like," it's something I've spent some big chunk of my life studying. Usually, we write programs where we know what we want the program to do, and we carefully write many lines of code, and we hope that the program does what we intended it to do. But the thing I've been interested in is the kind of natural science of programs: you just say, "I'm going to make this program, and it's a really tiny program." Maybe I even pick the pieces of the program at random, but it's really tiny, and by really tiny I mean a less-than-a-line-of-code type of thing.

You say, "what does this program do?" And you run it. And the big discovery that I made in the early eighties is that even extremely simple programs, when you run them, can do really complicated things.

That really surprised me; it took me several years to realize that that was a thing, so to speak. But that realization, that even very simple programs can do incredibly complicated things that we very much don't expect, that discovery, I realized, is very much, I think, how nature works.

That is, nature has simple rules, and yet does all sorts of complicated things that we might not expect. You know, the big thing of the last few years has been understanding that that may be how the whole universe and physics works, but that's a quite separate topic. But so there's this whole world of programs and what they do, and very rich, sophisticated things that these programs can do.
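The canonical example Wolfram points to elsewhere is the Rule 30 cellular automaton: an update rule small enough to fit in one line, whose output nonetheless looks endlessly intricate. A minimal sketch:

```python
def rule30_step(cells):
    """One update of the Rule 30 cellular automaton. Each cell's
    new value depends only on itself and its two neighbors:
    new = left XOR (center OR right). Wraparound boundary."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

# Start from a single black cell and watch complexity emerge.
cells = [0] * 31
cells[15] = 1
for _ in range(15):
    print("".join("#" if c else " " for c in cells))
    cells = rule30_step(cells)
```

Running this prints a triangular pattern whose left side is regular and whose interior looks effectively random, despite the one-line rule.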

But when we look at many of these programs, we say to ourselves, "I don't really know what that's doing; it's not a very human kind of thing." So on the one hand, we have what's possible in the computational universe.

On the other hand, we have the kinds of things that we humans think about, the kinds of things that were developed in our intellectual history. And that's really the challenge of making things computational: to connect what's computationally possible out in the computational universe with the things that we humans typically think about with our minds. Now, that's a complicated kind of moving target, because the things that we think about change over time as we learn more stuff.

We've invented mathematics, we've invented kinds of ideas and structures and so on. So it's gradually expanding; we're gradually colonizing more and more of this intellectual space of possibilities. But the real challenge is: how do you take what is computationally possible, how do you encapsulate the kinds of things that we think about, in a way that plugs into what's computationally possible? And actually, the big idea there is this idea of symbolic programming, symbolic representations of things.

And so the question is, when you look at everything in the world, take some visual scene you're looking at, how do I turn that into something like the kind of stuff in my mind? There are lots of pixels in my visual scene, but the things that I remember from that visual scene, "there's a chair in this place," that's a kind of symbolic representation of the visual scene: there are two chairs and a table or something, rather than all these pixels arranged in all these detailed ways.

And so the question then is, how do you take all the things in the world and make some kind of representation that corresponds to the ways that we think about things? And human language is one form of representation that we have. We talk about chairs; that's a word in human language and so on.

But human language is not, in and of itself, something that plugs in very well to computation. It's not something from which you can immediately compute consequences and so on.

So you have to find a way to take the stuff we understand from human language and make it more precise. And that's really the story of symbolic programming. And, you know, what that turns into is something which I didn't know at the time was going to work as well as it has.

But back in 1979 or so, I was trying to build my first big computer system and trying to figure out, you know, how should I represent computations at a high level? And I invented this idea of using symbolic expressions, structured as kind of like a function and a bunch of arguments. But that function doesn't necessarily evaluate to anything.

It's just a thing that sits there representing a structure. And it's turned out that that structure has been a good match for the way that we humans seem to conceptualize higher-level things. And for the last, I don't know, 45 years or something, it has served me remarkably well.
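As a rough illustration of the idea (this is not the Wolfram Language implementation; the names and rules below are invented): an expression is just a head plus arguments, and it sits there inert unless some transformation rule applies to it.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Expr:
    """A symbolic expression: a head and a tuple of arguments.
    It does not evaluate to anything by itself; it just represents
    structure, like f[x, y] in Wolfram Language."""
    head: str
    args: tuple = ()

    def __repr__(self):
        if not self.args:
            return self.head
        return f"{self.head}[{', '.join(map(repr, self.args))}]"

def evaluate(e):
    """Apply transformation rules where we have them; otherwise
    leave the expression symbolic and inert."""
    if not isinstance(e, Expr):
        return e
    args = [evaluate(a) for a in e.args]
    # One illustrative rule: Plus of plain integers reduces to a sum.
    if e.head == "Plus" and all(isinstance(a, int) for a in args):
        return sum(args)
    return Expr(e.head, tuple(args))

print(evaluate(Expr("Plus", (1, 2))))              # 3
print(evaluate(Expr("Chair", (Expr("x"), "red"))))  # Chair[x, 'red']
```

The second expression has no rule attached, so it simply persists as structure, which is the point: symbolic expressions can represent things without having to mean anything operationally yet.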

So, building up that structure using this kind of symbolic representation... but let's talk about abstractions here, because you could just start with your physics project. You could start at a hypergraph, at a very, very low level, and build up everything from there. But you don't.

You take shortcuts, right? You take the highest level of abstraction, whatever abstraction is convertible to something computable using symbolic representation, and then that's your new foundation for that little piece of knowledge, and somehow all of that is integrated.

Right. So there's a very important phenomenon, one of these things that I've realized is going to become more and more important in the future of just about everything: this phenomenon of computational irreducibility.

And the question is: if you know the rules for something, you have a program, you're going to run it. You might say, "I know the rules, great, I know everything about what's going to happen."

Well, in principle you do, because you can just run those rules and see what they do. You might run them a million steps and see what happens at the end.

The question is, can you immediately jump ahead and say, "I know what's going to happen after a million steps, and the answer is 13," or something? And one of the very critical things to realize is: if you could reduce that computation, there is, in a sense, no point in doing the computation.

The place where you really get value out of doing computation is when you have to do the computation to find out the answer. And this phenomenon, that you have to do the computation to find out the answer, this phenomenon of computational irreducibility, seems to be tremendously important for thinking about lots of kinds of things. So one of the things that happens is: okay, you've got a model of the universe at the low level, in terms of atoms of space and hypergraphs and rewriting hypergraphs and so on.
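The contrast can be sketched with two toy rules, chosen here purely for illustration: one has a closed-form shortcut, so you can jump a million steps ahead; the other, the center column of Rule 30, has no known shortcut, and the only known way to get step n is to run all n steps.

```python
# Reducible: repeated doubling has a shortcut.
def double_n_times_by_running(x, n):
    for _ in range(n):          # do every step...
        x *= 2
    return x

def double_n_times_shortcut(x, n):
    return x * 2 ** n           # ...or jump straight to the answer

assert double_n_times_by_running(3, 20) == double_n_times_shortcut(3, 20)

# (Apparently) irreducible: the center column of Rule 30.
# No known formula gives entry n without running steps 1..n.
def rule30_center_column(steps):
    cells = {0: 1}              # sparse row: position -> 0 or 1
    column = []
    for _ in range(steps):
        column.append(cells.get(0, 0))
        lo, hi = min(cells) - 1, max(cells) + 1
        cells = {i: cells.get(i - 1, 0) ^ (cells.get(i, 0) | cells.get(i + 1, 0))
                 for i in range(lo, hi + 1)}
    return column

print(rule30_center_column(16))
```

The doubling rule is a "pocket of reducibility"; the Rule 30 column is the kind of process where, as far as anyone knows, you simply have to do the computation.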

And it's happening, you know, 10^100 times every second, let's say. Well, you say, great, then we've nailed it; we know how the universe works.

Well, the problem is, the universe can figure out what it's going to do; it does those 10^100 steps. But for us to work out what it's going to do, we have no way to reduce that computation.

The only way to receive the result of the computation is to do it. And if we are operating within the universe, there's no opportunity to do that, because the universe is doing it as fast as the universe can do it. And that's what's happening.

So what we're trying to do, and a lot of the story of science and a lot of other kinds of things, is finding pockets of reducibility. That is, you could have a situation where everything in the world is full of computational irreducibility, and we never know what's going to happen next.

The only way we can figure out what's going to happen next is to just let the system run and see what happens. So in a sense, the story of most kinds of science, invention, lots of kinds of things, is the story of finding these places where we can locally jump ahead. And one of the features of computational irreducibility is that there are always pockets of reducibility.

There are always places, always an infinite number of places, where you can jump ahead. There's no way to jump completely ahead, but there are little patches, little places where you can jump ahead a bit. We can talk about the physics project and so on, but I think the thing we realize is that we exist in a slice of all the possible computational irreducibility in the universe.

We exist in a slice with a reasonable amount of predictability. And in a sense, as we try to construct these higher levels of abstraction, symbolic representations and so on, what we're doing is finding these lumps of reducibility that we can attach ourselves to, and about which we can have fairly simple narrative things to say. Because in principle, you know, if I ask what's going to happen in the next few seconds: well, there are these molecules moving around in the air in this room, and oh gosh, it's an incredibly complicated story, a whole computationally irreducible thing, most of which I don't care about. And most of it is, well, the air is still going to be here and nothing much is going to be different about it, and that's a kind of reducible fact about what is ultimately, at an underlying level, a computationally irreducible

process. And life would not be possible if we didn't have a large number of such reducible pockets, pockets amenable to reduction into something symbolic.

Yes, I think so. I mean, life in the way that we experience it, depending on what we mean by life, so to speak: the experience that we have of consistent things happening in the world. The idea of space, for example, where we can just say you're here, you move there, and it's kind of the same thing.

It's still you in that different place, even though you're made of different atoms of space and so on. This idea that there's this level of predictability in what's going on, that's us finding a slice of reducibility in what is, underneath, a computationally irreducible kind of system. And the thing which is actually my favorite discovery of the last few years is the realization of this interaction between the underlying computational irreducibility and our nature as observers, who have to key into computational reducibility.

That fact leads to the main laws of physics that we discovered in the twentieth century. We can talk about this in more detail, but to me it's our nature as observers: the fact that we are computationally bounded observers, we don't get to follow all those little pieces of computational irreducibility. To stuff what's out there in the world into our minds requires that we are looking at things that are reducible, that we are compressing, tracing just some essence, some kind of symbolic essence, of the detail of what's going on in the world. That, together with one other condition that at first seems trivial but isn't, which is that we believe we are persistent in time.

Yeah, the continuity.

This is the thing: at every moment, according to our theories, we are made of different atoms of space. At every moment, the microscopic detail of what the universe is made of is being rewritten.

And in fact, the very coherence between different parts of space is a consequence of the fact that there are all these little processes going on, knitting together the structure of space. It's like if you wanted to have a fluid with a bunch of molecules in it: if those molecules weren't interacting, you wouldn't have this fluid that can pour and do all these kinds of things; it would just be a free-floating collection of molecules. It's similar with space: the fact that space is knitted together is a consequence of all this activity in space.

And the fact is, what we consist of is continually being rewritten. And the question is, why is it the case that we think of ourselves as being the same "us" through time? That's a key assumption. I think it's a key aspect of what we see as our consciousness, so to speak: that we have this consistent thread of experience.

But isn't that just another limitation of our mind, that we want to reduce reality into something? That kind of temporal consistency is just a nice narrative we write about ourselves.

Well, the fact is, I think it's critical to the way we humans typically operate that we have a single thread of experience. You know, if you imagine a mind, and maybe this is what's happening in various kinds of minds that aren't working the way our minds work, you could be splitting into multiple threads of experience.

It's also something where, when you look at, I don't know, quantum mechanics, for example, the insides of quantum mechanics are splitting into many threads of experience. But in order for us humans to interact with it, you have to knit all those different threads together, so that we say, "oh yes, a definite thing happened, and now the next definite thing happens," and so on. And I think it's interesting to try to imagine what it's like to have these fundamentally multiple threads of experience going on.

I mean, right now, different human minds have different threads of experience; we just have a bunch of minds that are interacting with each other. But within each mind there's a single thread, and that is indeed a simplification.

I think it's a thing; you know, the general computational system does not have that simplification. And it's one of these things: people often seem to think that consciousness is the highest level of thing that can happen in the universe, so to speak. But I think that's not true. I think it's actually a specialization in which, among other things, you have this idea of a single thread of experience, which is not a general feature of anything that can happen in the universe.

So it's a feature of a computationally limited system that's only able to observe reducible pockets.

So yes.

So this word "observer," that means something in quantum mechanics, that means something in a lot of places, and it means something to us humans, right, as conscious beings.

So what's the importance of the observer? What is the observer? What's the importance of the observer in the computational universe?

So this question of what an observer is, what the general idea of an observer is, is actually one of my next projects, which got somewhat derailed by the current AI mania.

But is there a connection there? Or do you think of observers as primarily physical?

Is it related to the whole AI thing? Yes, it is related. So one question is: what is a general observer? We know we have an idea.

We know what a general computational system is: we think about Turing machines, we think about other models of computation. Then there's the question: what is a general model of an observer? And there are observers like us, which are the observers we're interested in. We could imagine an alien observer that deals with computational irreducibility and has a mind utterly different from ours, completely incoherent with what we are like.

But the fact is, if we are talking about observers like us, one of the key things is this idea of taking all the detail of the world and being able to stuff it into a mind, being able to take all the detail and extract out of it a smaller set of degrees of freedom, a smaller number of elements, that will fit in our minds. So I've been interested in trying to characterize what a general observer is.

And the general observer is, I think... well, in part, there are many aspects. Let me give an example here.

If you have a gas, it's got a bunch of molecules bouncing around, and the thing you're measuring about the gas is its pressure. The only thing you as an observer care about is pressure.

And that means you have a piston on the side of this box, and the piston is being pushed by the gas. There are many, many different ways that molecules can hit that piston, but all that matters is the aggregate of all those molecular impacts, because that's what determines pressure.
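That aggregation can be sketched numerically. The model below is an idealized one-dimensional unit-mass gas, with the formula and parameters chosen purely for illustration: many completely different microstates give essentially the same pressure reading.

```python
import random

def wall_pressure(velocities, box_length=1.0):
    """Idealized 1D gas of unit-mass molecules: the time-averaged
    momentum transfer per unit time on one wall is sum(v^2) / L."""
    return sum(v * v for v in velocities) / box_length

rng = random.Random(42)
pressures = []
for trial in range(5):
    # A fresh random microstate: 100,000 molecular velocities.
    # No two of these microstates are remotely alike.
    vs = [rng.gauss(0.0, 1.0) for _ in range(100_000)]
    pressures.append(wall_pressure(vs))

# Yet the aggregate the observer measures agrees to a fraction
# of a percent across all of them.
print([round(p) for p in pressures])
```

This is the "equivalencing" being described: the observer collapses a huge number of distinct configurations into one number.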

So there is a huge number of different configurations of the gas which are all equivalent. And I think one key aspect of observers is this equivalencing of many different configurations of a system, saying, "all I care about is this aggregate feature, all I care about is this overall thing." That's one aspect. And we see that in lots of different places. Again, it's the same story over and over again: there's a lot of detail in the world, but what we're extracting from it is a sort of thin summary of the detail.

Is that summary nevertheless true? Or can it be a crappy approximation, where the average isn't correct? If we look at the observer as the human mind, as represented by natural language, for example, it seems like there's a lot of really crappy approximation. Sure. And that could maybe be a feature of it, the ambiguity, right?

Right. It could be the case that you're just measuring the aggregate effects of these molecules, but there is some tiny, tiny probability that the molecules will arrange themselves in some really funky way, and just measuring that average is going to miss the main point. And by the way, an awful lot of science is very confused about this, because, you know, you look at papers, and people are really keen:

they draw this curve, and they have these error bars on the curve and so on, and it's just this curve, this one thing, and it's supposed to represent some system that has all kinds of details in it. And this is a way that lots of science has gone wrong. I remember, years ago, I was studying snowflake growth: you have a snowflake, and it's growing, it has all these arms, it's doing complicated things.

But there was a literature on this stuff, and it talked about, you know, what's the rate of snowflake growth? And it got pretty good answers for the rate of growth of the snowflake; they had these nice curves of snowflake growth rates and so on. Then I looked at it more carefully.

And I realized that, according to their models, the snowflakes would be spherical. So they got the growth rate right, but the detail was just utterly wrong. And not only the detail: the whole thing was capturing one aspect of the system while, in a sense, missing the main point of what was going on.

And what is the geometric shape of a snowflake?

Snowflakes start... in the phase of water that's relevant to the formation of snowflakes, it's a phase of ice with a hexagonal arrangement of water molecules, and so it starts off growing as a hexagonal plate. And then what happens is...

A plate. Oh, so it's just flat then? But it's...

It's much more than that. I mean, snowflakes are fluffy; typical snowflakes have little dendritic arms. And what actually happens is kind of cool, because you can make these very simple discrete models, with cellular automata and things, that figure this out. You start off with this hexagonal thing, and then at certain places it starts to grow little arms.

And every time a little piece of ice adds itself to the snowflake, the fact that that ice condensed from the water vapor heats the snowflake up locally, and so it makes it less likely for another piece of ice to accumulate right nearby. This leads to a kind of growth inhibition.

So you grow an arm, and it is a separated arm, because right around the arm it got a little bit hot, and it didn't add more ice there. So what happens as it grows? You have a hexagon.

It grows out arms, the arms grow arms, and then the arms grow arms grow arms, and eventually, this is actually kind of cool, it fills in another hexagon, a bigger hexagon. When I first looked at this, I had a very simple model for it, and I realized that when it fills in that hexagon, it actually leaves some holes behind.

So I thought, well, is that really right? So I looked at these pictures of snowflakes, and sure enough, they have these holes in them. They're kind of scars of the way that these arms...

...grew out. So it can backfill the holes?

Can it grow arbitrarily large?

I'm not sure. I mean, the thing falls through the air; it hits the ground at some point. But I think you can grow them in the lab, I think you can grow pretty big ones, many, many iterations of this kind: it goes from a hexagon, it grows out arms, it turns back, it fills back into a hexagon, it grows more arms again.
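The simple discrete models Wolfram mentions can be sketched as a cellular automaton on a hexagonal grid. This is a toy Packard-style rule chosen for illustration, not Wolfram's actual model: a cell freezes when exactly one of its six neighbors is frozen, a crude stand-in for the latent-heat growth inhibition he describes. From a single seed it grows a six-fold symmetric pattern, leaving unfrozen gaps behind.

```python
# Toy snowflake cellular automaton on a hexagonal grid, using axial
# coordinates (q, r). Rule (Packard-style, an assumption for the sake
# of the sketch): a cell freezes iff exactly one of its six neighbors
# is already frozen, mimicking local growth inhibition.

HEX_NEIGHBORS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def step(ice):
    """One growth step: freeze cells with exactly one frozen neighbor."""
    candidates = {(q + dq, r + dr) for (q, r) in ice for (dq, dr) in HEX_NEIGHBORS}
    new = {c for c in candidates - ice
           if sum((c[0] + dq, c[1] + dr) in ice
                  for (dq, dr) in HEX_NEIGHBORS) == 1}
    return ice | new

ice = {(0, 0)}          # start from a single frozen seed cell
for _ in range(8):
    ice = step(ice)

print(len(ice))          # number of frozen cells after 8 steps
```

Plotting `ice` with hexagonal offsets shows arms growing off the central hexagon, with holes left behind as they merge, qualitatively the behavior described above.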

And it stays flat, usually. Why is it flat? Why isn't it spread out in three dimensions? Okay, wait: you said it's fluffy. Fluffiness is a three-dimensional property.

No, no, it's the snow that's fluffy. Okay, so what makes... we're really talking about multiple snowflakes...

...becoming fluffy together. A single snowflake is not fluffy.

No, a single snowflake is not fluffy. What happens is: if you have snow that is just pure hexagonal plates, they fit together pretty well. It doesn't have a lot of air in it, and they can also slide against each other pretty easily.

And so the snow can be pretty... I think avalanches sometimes happen when the flakes tend to be these hexagonal plates, and that kind slides. But when the flakes have all these arms that have grown out, they don't fit together very well, and that's why the snow has lots of air in it. And if you look at one of the snowflakes, if you catch one, you'll see it has little arms.

And people often say, you know, no two snowflakes are alike. That's mostly because, as snowflakes grow, they actually grow pretty consistently, with these different arms and so on, but you capture them at different times. They fell through the air in different ways, so you'll catch this one at this stage, and as it goes through different stages it looks really different. And that's why no two snowflakes look alike: because you caught them at different times.

So the rules under which they grow are the same; it's just the timing that's different. Yes. Okay, so the point is that science is not able to describe the full complexity of snowflake growth?

Well, science, if you do what people might often do, which is to say, okay, let's make it "science-y", let's turn it into one number, and that one number is the growth rate of the arms or some such thing, that fails to capture the detail of what's going on inside the system.

And that, in a sense, is the big challenge for science: how do you extract from the natural world those aspects of it that you're interested in talking about? Now, you might just say, I don't really care about the fluffiness of the snowflakes; all I care about is the growth rate of the arms, in which case you can have a good model without knowing anything about the fluffiness.

But the fact is, as a practical matter, if you ask: what's the most obvious feature of a snowflake? Oh, that it has this complicated shape. Well, then you've got a different story about what you model. I mean, this is one of the features of modeling in science: what is a model?

A model is some way of reducing the actuality of the world to something where you can readily give a narrative of what's happening, where you can make some kind of abstraction of what's happening and answer the questions that you care about answering. If you wanted to answer all possible questions about the system, you'd have to have the whole system, because you might care about this particular molecule: where did it go? And your model, which is some big abstraction of that, has nothing to say about it.

So one of the things that's often confusing in science is: people will say, I've got a model, and somebody else will say, I don't believe your model, because it doesn't capture the feature of the system that I care about. There's always this controversy about: is that a correct model? Well, no model, except for the actual system itself, is a correct model in the sense that it captures everything.

The question is: does it capture what you care about capturing? Sometimes that's ultimately defined by what you're going to build technology out of, things like this. The one counterexample is if you think you're modeling the whole universe all the way down; then there is a notion of a correct model. But even that is more complicated, because it depends on how observers sample things and so on.

That's a separate story. But at least at the first level, there's this thing of: oh, it's an approximation, you're capturing one aspect, you're not capturing other aspects. When you really think you have a complete model for the whole universe, you'd better be capturing ultimately everything, even though actually running that model is impossible because of computational irreducibility. The only thing that successfully runs that model is the actual running of the universe.

...is the universe itself. But okay, so "what you care about" is an interesting concept. That's a human concept.

So that's what you're doing with Wolfram Alpha and Wolfram Language: trying to come up with symbolic representations. Yes. As simple as possible. So a model that is as simple as possible but fully captures the stuff we care about. Yes.

So, for example, we could have a thing with data about movies, let's say. We could be describing every individual pixel in every movie, but that's not the level that people care about. And the level that people care about is somewhat related to what's described in natural language.

But what we're trying to do is to find a way to represent things precisely, so you can compute with them. See, one thing: you take a piece of natural language, a question, and you feed it to a computer. Does the computer understand this natural language? Well, the computer processes it in some way.

Maybe it can make a continuation of the natural language; maybe it can go on from the prompt and say what comes next. Does it really understand it? Hard to know. But in this computational world, there is a very definite definition of "does it understand", which is: could it be turned into the symbolic computational thing from which you can compute all kinds of consequences? That's the sense in which one has a target for understanding natural language.

And that's kind of our goal: to have as much as possible about the world that can be computed in a reasonable way, so to speak, be captured by this kind of computational language. And I think for us humans, the main thing that's important is that, as we formalize what we're talking about, it gives us a way of building a structure where we can build a tower of consequences. If we're just talking in natural language, it doesn't really give us a hard foundation that lets us build, step by step, to work something out. It's kind of like what happens in math: if we were just vaguely talking about math, and didn't have the full structure of math and all that kind of thing, we wouldn't be able to build this big tower of consequences. In a sense, what we're trying to do with the whole computational language effort is to make a formalism for describing the world that makes it possible to build...

...this tower of consequences. Well, can you talk about this gap between natural language and Wolfram Language? There is this gigantic thing we call the internet, where people post memes and very important scientific articles and all of that, and that makes up the training data for GPT. And then there's Wolfram Language. How can you map from the natural language of the internet to Wolfram Language? Is there a manual way, is there an automated way of doing that, as we look into the future?

Well, so Wolfram Alpha: what it does as its kind of front end is turn natural language into computational language.

What do you mean by that? There's a prompt, you ask a question: what is the capital of... Yeah, right.

And it turns into... you say, "what's the distance between Chicago and London" or something, and that will turn into GeoDistance of the entity for the city Chicago and the entity for the city London. Each one of those things is very well defined. Given that it's the entity for the city Chicago, Illinois, United States, we know the geolocation of that, we know its population, we know all kinds of things about it, for which we have curated the data so we can know it with some degree of certainty, so to speak. And then we can compute things from this. That's the idea.
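The kind of computation a symbolic GeoDistance call performs can be sketched in plain Python with the haversine formula. The city coordinates below are approximate values filled in for illustration; Wolfram Alpha's curated entities carry their own, more carefully sourced data.

```python
# Sketch of a GeoDistance-style computation: great-circle distance
# between two city entities, here reduced to (lat, lon) pairs.
import math

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance between two points given in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(a))

chicago = (41.88, -87.63)   # approximate city-center coordinates
london = (51.51, -0.13)
print(round(haversine_km(*chicago, *london)))  # on the order of 6,300-6,400 km
```

The point of the symbolic representation is that once "Chicago" is resolved to a well-defined entity, any downstream computation like this becomes mechanical.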

But then, does something like GPT, do large language models, allow you to make that conversion much more powerful?

That's an interesting thing which we still don't know everything about. Okay, this question of going from natural language to computational language: Wolfram Alpha has now been out for about thirteen and a half years, and we've achieved, I don't know, ninety-eight, ninety-nine percent success on the queries that get put into it.

Now, obviously, there's a sort of feedback loop, because the things that work are the things people go on putting into it. But we've gotten to a very high success rate on the little fragments of natural language that people put in: questions, math calculations, chemistry calculations, whatever it is. We do very well at turning those things into computational language. Now, from the very beginning of Wolfram Alpha, I thought about, for example, writing code with natural language.

In fact, I was just looking at this recently: I had a post that I wrote in 2010 or 2011, called something like "Programming with Natural Language Is Actually Going to Work". And we had done a bunch of experiments using methods, some of them a little bit machine-learning-like, but certainly not with the same kind of idea of vast training data and so on that is the story of large language models.

Actually, a little piece of utter trivia: Steve Jobs forwarded that post around to all kinds of people at Apple. That was because he never really liked programming languages, so he was very happy to see the idea that you could get rid of this layer of engineering-like structure. He would have liked, I think, what's happening now. Because it really is the case that this idea, that you have to learn how the computer works to use a programming language, is like how you used to have to learn the details of the opcodes to know how assembly language worked.

It's a thing with a limited time horizon. But this question of how elaborate you can make the prompt, how elaborate you can make the natural language and abstract from it computational language, is a very interesting one. And what ChatGPT, GPT-4 and so on can do is pretty good. It's a very interesting process; we're still trying to understand the workflow. We've been building a lot of tooling around this workflow.

From natural language to computational language, right. And the process is better if it's conversational, like a dialogue, with multiple queries?

Right. There are so many things here that are really interesting. The first thing is: can you just walk up to the computer and expect to specify a computation? What one realizes is that humans have to have some idea of this way of thinking about things computationally. Without that, you're kind of out of luck, because you just have no idea what to say when you walk up to a computer.

I remember, and I should tell a silly story about myself: the very first computer I saw, when I was ten years old, was a big mainframe computer and so on. And I didn't really understand what computers did. Someone was showing me this computer, and it was like: can the computer work out the weight of a dinosaur? And that isn't a sensible thing to ask.

You have to give it... that's not what computers do. In Wolfram Alpha, for example, you could ask for the typical weight of a stegosaurus and it will give you some answer, but that's a very different kind of thing from what one thinks of computers as doing. So the first thing is: people have to have an idea of what computation is about.

And I think for education, that is the key thing: this notion, not of computer science, not the details of programming, but just this idea of how you think about the world computationally. Computational thinking about the world is this formal way of thinking about the world. We've had other ones: logic was a way of abstracting and formalizing some aspects of the world.

Mathematics is another one. Computation is this very broad way of formalizing the way we think about the world. And the thing that's cool about computation is that, if we can successfully formalize things in terms of computation, computers can help us figure out what the consequences are.

It's not like when you formalize it with math: well, that's nice, but if you're not using a computer to do the math, you have to go work out all that stuff yourself. So with this idea... I mean, we're talking about natural language and its relationship to computational language.

The typical workflow, I think, is: first, the human has to have some kind of idea of what they're trying to do, if it's something they want to build a tower of capabilities on, something they want to formalize and make computational. Then the human can type something into some LLM system and say vaguely what they want in computational terms. Then it does pretty well at synthesizing Wolfram Language.

Code. And it will probably do better in the future, because we've got a huge number of examples of natural language input together with the Wolfram Language translation of that. So it's a thing where extrapolating from all those examples makes it easier. So that's the prompting...

...task. Could it also do kind of debugging of the Wolfram Language code? Or is your hope to not do that?

Debugging? No, no, no, no. I mean, there are many steps here. Okay, so first, the first thing is: you type natural language, it generates Wolfram Language.

Give an example. Maybe the dinosaur example, or another example that jumps to mind that we should be thinking about.

Some dumb example: it's like, take my heart rate data and, you know, figure out a moving average every seven days or something, and make a plot of the result. Okay, so that's a thing which is about two-thirds of a line of Wolfram Language code; I mean, a plot of a moving average of some data, something like that. And then you'll get the result. And the vague natural language I was just saying would almost certainly correctly turn into that very simple piece of Wolfram Language code.
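In Python terms (standing in for the roughly one-line MovingAverage-plus-plot pipeline Wolfram describes), a seven-day moving average over heart-rate readings might look like the following; the data values are made up for illustration:

```python
# Sketch of the heart-rate example: a seven-day trailing moving
# average of daily readings. The readings below are fabricated.

def moving_average(values, window):
    """Trailing moving average over a fixed window size."""
    return [sum(values[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(values))]

daily_bpm = [62, 65, 61, 70, 68, 64, 66, 72, 69, 63]  # made-up readings
smoothed = moving_average(daily_bpm, 7)
print(smoothed)  # one averaged value per full seven-day window
```

Feeding `smoothed` to any plotting library would complete the "make a plot of the result" step.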

Even if we start mumbling about heart rate.

Yeah, you know, you'd...

...arrive at the moving average kind of idea, right? You say "average over seven..."

Seven days. Maybe it'll figure out that that can be encapsulated as this MovingAverage idea; I'm not sure. But the typical workflow that I'm seeing is: you generate this piece of Wolfram Language code, and it's pretty small. Usually, if it isn't small, the problem isn't right.

But it's pretty small. And one of the ideas of Wolfram Language is that it's a language that humans can read.

Programming languages tend to be this one-way story: humans write them and computers execute them. Wolfram Language is intended to be something which is more like math notation, something where humans write it and humans are supposed to read it as well. And so the workflow that's emerging is: the human mumbles some things, the large language model produces a fragment of Wolfram Language code.

Then you look at that and say, yeah, that looks right. Or, typically, you just run it first and see whether it produces the right thing.

You look at what it produces. You might say, that's obviously crazy. You look at the code and you see, ah, I see why it's crazy. You fix it.

If you really care about the result, if you really want to make sure it's right, you'd better look at that code and understand it, because that's the way you have the checkpoint of: did it really do what I expected it to do?

Now, you can go beyond that. What we find is, for example, let's say the code does the wrong thing. Then you can often say to the large language model, "can you adjust this to do this?", and it's pretty good at doing that.

Interesting. So you're using the output of the code to give you hints about the...

...function of...

...the code. You're debugging based on...

...the output, and the code, that code itself, right. The plug-in that we have for ChatGPT does that routinely. It will send the thing in, it will get a result, and the LLM will discover by itself that the result is not plausible, and it will go back and say, "oh, I'm sorry", it's very polite, and it goes back and says, "I'll rewrite that piece of code", and then it will try again and get the result. Things get pretty interesting when you're just running...

So, one of the new concepts that we have: we invented this whole idea of notebooks back thirty-six years ago now. And now there's the question of how you combine this idea of a notebook, where you have text and code and output, with the notion of chat and so on. And there are some really interesting things there. For example, a very typical thing now: we have these notebooks where, if you run some code and it produces errors, produces messages and so on, the LLM automatically not only looks at those messages, it can also see all kinds of internal information, stack traces and things like this.

And it can then... it does a remarkably good job of guessing what's wrong and telling you. In other words, it's kind of a typical AI thing: it's able to have more sensory data than we humans are able to have. It's able to look at a bunch of stuff that we humans would glaze over, and it's able to then come up with: oh, this is the explanation of what's happening.

and what is the data of the stack trace, the code you were previously the natural language of.

What's also happening is, for example, when there are these messages, there's documentation about these messages, there are examples of where the messages have occurred, and all these other things. What's really amusing is when it makes a mistake: one of the things that's in our prompt, for when the code doesn't work, is "read the documentation". We have that in the plug-in: it can read the documentation.

And that again is very, very useful, because it will figure out... sometimes it will make up the name of some option for some function, an option that doesn't really exist: read the documentation! Or it'll have some wrong structure for the function, and so on. That's a powerful thing. I mean, the thing that I've realized is, we built this language over the course of all these years to be nice and coherent and consistent, and so it's easy for humans to understand. It turns out there was a side effect that I didn't anticipate, which is that it makes it easy for an AI...

...to understand, like another natural language. But yes, so Wolfram Language is a kind of foreign language. Yes. You line up English, French, Japanese, Wolfram Language, and then, I don't know, Spanish, and the system is not going to notice.

Well, yes, maybe. That's an interesting question, because it really depends on what I see as an important piece of fundamental science that basically just jumped out at us with ChatGPT. I think the real question is: why does ChatGPT work? How is it possible to encapsulate, to successfully reproduce, all these kinds of things in natural language with a comparatively small, say a couple hundred billion, weights of neural net and so on?

And I think that relates to a kind of fundamental fact about language. The main thing is, I think there's a structure to language that we haven't really explored very well, which is what I've been calling the semantic grammar of language. I mean, we know that human language has certain regularities. We know that it has a certain grammatical structure: noun followed by verb, followed by noun, adjectives, et cetera.

That's its grammatical structure. But I think the thing that ChatGPT is showing us is that there's an additional kind of regularity to language, which has to do with the meaning of the language, beyond just this pure part-of-speech combination type of thing. And I think the one example of that we've had in the past is logic.

And my sort of picture of how logic was invented, how logic was discovered... it really was a thing that was discovered. In its original conception, it was discovered, presumably, by Aristotle, who kind of listened to a bunch of people, orators, giving speeches.

And this one made sense, that one doesn't make sense. And you see these patterns: you know, if the Persians do this, then that happens, et cetera. What Aristotle realized is that there's a structure to those sentences.

There's a structure to that rhetoric. It doesn't matter whether it's the Persians and the Greeks or whether it's the cats and the dogs; it's just P and Q. You can abstract away the details of those particular sentences; you can lift out this formal structure. And that's what logic is.

That's a heck of a discovery, by the way: logic. You make me realize, yes, it's not obvious.

The fact that there is an abstraction from natural language where you can fill in any word you want, yeah, is a very interesting discovery. Now, it took a long time to mature. I mean, Aristotle had this idea of syllogistic logic, where there were these particular patterns of how you could argue things.

And in the Middle Ages, part of education was that you memorized the syllogisms. I forget how many there were, fifteen of them or something, and they all had mnemonic names, like Barbara and Celarent.

Those were two of the mnemonics for the syllogisms, and people would say: this is a valid argument because it follows the Barbara syllogism, so to speak. And it took until the mid-1800s, with George Boole, to get beyond that, and to see that there was a level of abstraction beyond these particular templates of sentences, so to speak. And what's interesting there is that, in a sense, ChatGPT is operating at the Aristotle level: it's essentially dealing with templates of sentences. By the time you get to Boole and Boolean algebra, and the idea that you can have arbitrarily deeply nested collections of ands and ors and nots and resolve what they mean, that's a different kind of thing.

That's a computation story. You've gone beyond the sort of templates of natural language to something which is arbitrarily deep computation. But the thing that I think we realize from ChatGPT is that Aristotle stopped too quickly: there was more that you could have lifted out of language as formal structures. And, in a sense, ChatGPT has captured some of that. Some of what is in language... there are a lot of kind of little calculi, little algebras, of what you can say, of what language talks about. I mean, whether it's, I don't know...

If you say, I go from place...

...A to place B, and from place B to place C, then I know I've gone from place A to place C. But if A is a friend of B, and B is a friend of C, it doesn't necessarily follow that A is a friend of C. These are the kinds of things...

Right, and if you go from place A to place B, and from place B to place C, it doesn't matter how you went; like logic, it doesn't matter whether you flew there, walked there, swam there. This transitivity of where you go is still valid. And there are many kinds of features of the way the world works that are captured in these aspects of language, so to speak. And I think what ChatGPT has effectively found, just like it discovered logic... People are really surprised that it can do these logical inferences. It discovered logic the same way Aristotle discovered logic: by looking at a lot of sentences, effectively, and noticing...

...the patterns in those sentences. But it feels like it discovered something much more complicated than logic. So, this kind of semantic grammar, I think you've written about this. Maybe we can call it the laws of language, or, as I like, the laws of thought.

Yes, "The Laws of Thought" was the title that George Boole gave his book on Boolean algebra, back in the 1850s. But yes, the laws of thought: that's what he said. He thought he'd nailed it with logic, but there's more to it.
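The Boolean-algebra step discussed above, arbitrarily deeply nested ands, ors, and nots whose meaning can be resolved mechanically, can be illustrated with a tiny recursive evaluator. The tuple encoding is an illustration of mine, not anything from the conversation:

```python
# A minimal evaluator for nested boolean expressions, going beyond
# fixed syllogism templates to arbitrary nesting depth. Expressions
# are tuples like ("and", "p", ("not", "q")); variables are strings.

def evaluate(expr, env):
    """Evaluate a nested boolean expression under a truth assignment."""
    if isinstance(expr, str):          # a variable such as "p" or "q"
        return env[expr]
    op, *args = expr
    if op == "not":
        return not evaluate(args[0], env)
    if op == "and":
        return all(evaluate(a, env) for a in args)
    if op == "or":
        return any(evaluate(a, env) for a in args)
    raise ValueError(f"unknown operator: {op}")

# (p and not q) or (not p and q): exclusive or, nested two levels deep
xor = ("or", ("and", "p", ("not", "q")), ("and", ("not", "p"), "q"))
print(evaluate(xor, {"p": True, "q": False}))  # True
```

The recursion handles any nesting depth, which is exactly what the fixed sentence-template view of logic could not.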

It is a good question: how much more is there to it? And it seems like one of the reasons, as you imply, that GPT works, that ChatGPT works, is that there's a finite number of things to...

...it. Yeah, I mean, it's kind of...

...discovering the laws. In some sense, GPT is discovering the laws of semantic grammar that underlie language. Yes.

And what's sort of interesting is that, in the computational universe, there are a lot of other kinds of computations that you could do. They're just not ones that we humans have cared about and operate with. And that's probably because our brains are built in a certain way; the neural nets of our brains are not that different, in some sense, from the neural nets of a large language model. And so when we think about, and maybe we can talk about this more, what AIs will ultimately do, the answer is: insofar as they're just doing computations, they can run off and do all these kinds of crazy computations. But the ones that we've decided we care about are this kind of very limited set.

That's where the, uh, reinforcement learning with human feedback seems to come in. The more the AI says stuff that kind of interests us, the more we're impressed by it. So it can do a lot of interesting, intelligent things, but we're only interested in AI systems when they communicate in a human-like way, about human-like topics.

Yes. But it's like technology. I mean, in a sense, the physical world provides all kinds of things, all kinds of processes going on in physics, and only a limited set of those are ones that we capture and use for technology, because there's only a limited set where we say: this is a thing that we can apply to the human purposes we currently care about.

You might have said, okay, you pick up a piece of rock. You say, okay, it's a silicate, it contains all kinds of silicon; I don't care. Then you realized, oh, we could actually turn this into a semiconductor wafer and make a microprocessor out of it.

And then we care a lot about it. It's this thing about what we do in the evolution of our civilization: what things do we identify as being things we care about? I mean, it's like when there was the announcement recently of the possibility of a high-temperature superconductor involving the element lutetium, which generally nobody has cared about. But suddenly, if there's an application that relates to human purposes, we start to care a lot.

So, given your thinking that GPT may have discovered a kind of laws of thought: do you think such laws exist? Can we figure them out? What's your intuition here?

Definitely. I mean, the fact is that logic is just the first step. There are many other kinds of calculi about things we care about, sort of things that happen in the world, or things that are meaningful.

How do you know logic is not the last step?

Well, because we can plainly see that it isn't. I mean, if you say, here's a sentence that is syntactically correct, okay, you look at it, and it's like "the happy electron ate...", I don't know what, something where you just look at it and it's meaningless. It's just a bunch of words.

It's syntactically correct, the nouns and the verbs are in the right places, but it just doesn't mean anything. So there clearly are rules that determine when a sentence has the potential to be meaningful, rules that go beyond the pure parts-of-speech syntax. And the question is: what are those rules, and are they a fairly finite set?

My guess is that there's a fairly finite set of those rules. And once you have those rules, you have a kind of construction kit. Just like the rules of syntactic grammar give you a construction kit for making syntactically correct sentences, you can also have a construction kit for making semantically correct sentences.

Those sentences may not be realized in the world. I mean, "the elephant flew to the moon": syntactically and semantically, we have an idea of it. If I say that to you, you kind of know what it means. But the fact is, it hasn't been realized in the world.

So semantically correct, perhaps, as things that can be imagined by the human mind? Or things that are consistent with both our imagination and our understanding of physical reality?

Yeah, good question. I think, given the way we have constructed language, it's the things which fit with the things we describe in language.

It's a bit circular in the end, because you run up against the boundaries of what is physically realizable. Okay, let's take the example of motion. Motion is a complicated concept. It might seem like a concept that should have been figured out by the Greeks long ago, but it's actually a really pretty complicated concept.

Because what is motion? Motion is: you can go from place A to place B, and it's still you when you get to the other end. You take an object, you move it, and it's still the same object, but it's in a different place. Now, even in ordinary physics, it doesn't always work that way.

If you're near a spacetime singularity in a black hole, for example, and you take your teapot or something, you don't have much of a teapot by the time it's near the spacetime singularity; it's been completely deformed beyond recognition. So that's a case where pure motion doesn't really work: you can't have a thing stay the same.

So this idea of motion is a slightly complicated idea. But once you have the idea of motion, once you have the idea that you're going to describe things as being the same thing but in a different place, that sort of abstracted idea has all sorts of consequences, like the transitivity of motion.

Go from A to B and from B to C, and you've gone from A to C. At that level of description, there are sort of inevitable consequences, inevitable features of the way you've set things up. And I think that's what this sort of semantic grammar is capturing: things like that.

And I think there is a question of what the words mean when you say "I move from here to there." It's complicated to say what that means; this is the whole issue of whether pure motion is possible, et cetera. But once you've got an idea of what that means, there are inevitable consequences of that idea.

But the very idea of meaning: it seems like some words have latent ambiguities to them. I mean, emotionally loaded words like hate and love, right? What do they mean, exactly? When you have relationships between complicated objects, we seem to take a kind of descriptive shortcut: object A hates object B. What does that really mean, right?

Well, words are defined by our social use of them. It's not like a word in computational language. There, for example, when we say we have a construct, we expect that construct to be a building block from which we can construct an arbitrarily tall tower.

So we have to have a very solid building block, and we have to turn it into a piece of code that has documentation; it's a whole thing. But the word "hate": the documentation for that word...

Well, there isn't a standard documentation for that word, so to speak. It's a complicated thing, defined by how we use it. I mean, what is language? At some level, language is a way of packaging thoughts so that we can communicate them to another mind.

Can these complicated words be converted into something a computational agent can use?

I think the answer is that what one can do in computational language is make a specific definition. If you have a complicated word, let's say the word "eat": you think that's a simple word, animals eat things, whatever else. But in programming, you say this function eats its arguments, which is sort of poetically similar to the animal eating things.

But if you start to ask what the implications are of the function eating something: can the function be poisoned? Well, maybe it can, actually; there's a type mismatch or something in some language. But how far does that analogy go? It's just an analogy. Whereas if you use the word "eat" at the computational language level, you would define a thing which you anchor to the natural-language concept of eating, but which is now some precise definition that you can then compute things from.

But do you think an analogy can also be precise? "Software eats the world." Don't you think there's something concrete in terms of meaning about analogies?

Sure. But the first target for computational language is to take the ordinary meaning of things and make it precise, sufficiently precise that you can build these towers of computation on top of it. It's kind of like, if you start with a piece of poetry and say, I'm going to define my program with this piece of poetry, that's a difficult thing. It's better to say: I'm going to have this boring piece of prose, and it's using words in the ordinary way, and that's how I'm communicating with my computer, and that's how I'm going to build a solid building block from which I can construct this

whole kind of computational tower. So in some sense, if you take a poem and reduce it to something computable, you're going to have very few things left. So maybe a bunch of human interaction is just poetic, aimless nonsense, just recreational, like a hamster in a wheel, not actually producing anything.

I think that's a complicated thing, because in a sense, human linguistic communication is: one mind producing language, and that language having an effect on another mind. And there's a question about the type of effect: whether it's well-defined, let's say, and very independent of the two minds, or whether it's communication where it matters a lot what the experience of one mind is versus another, and so on.

Yeah, but what is the purpose of natural language communication? I think...

The universe is...

Computational language somehow feels more amenable to a definition of purpose. It's like, you're given these clean representations of a concept, and you can build a tower based on that. Is natural language the same thing, but more fuzzy?

Natural language, right? That's the great invention of our species. We don't know whether it exists in other species, but we know it exists in ours.

It's the thing that allows you to communicate abstractly from one generation of the species to another. There's an abstract version of knowledge that can be passed down. It doesn't have to be genetics.

You don't have to apprentice the next generation of birds to the previous one to show them how something works. There is this abstracted version of knowledge that can be passed down. Now, because language is fuzzy, it still tends to rely on a chain of use.

If we look at some ancient language where we don't have a chain of translations from it to what we have today, we may not understand that ancient language; its concepts may be different from the ones we have today. But language is something with which we can realistically expect to communicate abstract ideas. And that's been one of the big roles of language: this ability to sort of concretify abstract things is what language has provided.

Do you see natural language and thought as the same, the stuff that's going on inside your mind?

Well, that's been a long debate in philosophy. It seems to...

It seems to have become more important now that we think about how intelligent GPT is, whatever that means. The stuff that goes on in the human mind seems something like intelligence. And is that language?

We call that intelligence?

Yeah, we call it that. And so you start to think: okay, what's the relationship between thought, the language of thought, the laws of thought, words like reasoning, and the laws of language? And what do they have to do with computation, which seems like a more rigorous, precise way of reasoning?

Right, which is beyond human. I mean, much of what computers do, humans do not do.

You might say humans are a subset, reasoning-wise.

Hopefully, yes, right. You know, you might say: who needs computation when we have large language models? Large language models can just... eventually you'll have a big enough neural net that it can do anything. But they're really doing the kinds of things that humans quickly do.

And there are plenty of sort of formal things that humans never quickly do. For example, some people can do mental arithmetic; they can do a certain amount of math in their minds.

But I don't think many people can run a program of any sophistication in their minds. It's just not something people do. It's not something people have ever really thought of doing, because you can so easily run

it on a computer, an arbitrary program. Yeah. But aren't we running a specialized program?

Yeah, yeah. But if I say to you, here's a Turing machine, tell me what it does after fifty steps, and you try to think about that in your mind, that's really hard to do. It's not what people do.
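
The exercise Wolfram poses here, tracing a Turing machine for fifty steps, takes only a few lines for a computer, even though it's nearly impossible in one's head. A minimal sketch in Python; the two-state rule table is an arbitrary example chosen for illustration, not any particular machine from the conversation:

```python
# Minimal Turing machine simulator: run a 2-state, 2-symbol machine 50 steps.
def run_turing_machine(rules, steps):
    tape = {}               # sparse tape; unwritten cells read as 0
    head, state = 0, "A"
    for _ in range(steps):
        symbol = tape.get(head, 0)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += move        # +1 = move right, -1 = move left
    return tape, head, state

# (state, symbol read) -> (symbol to write, head move, next state)
rules = {
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (0, +1, "A"),
}

tape, head, state = run_turing_machine(rules, 50)
print(sorted(tape.items()), head, state)
```

Predicting the final tape without actually running the steps is exactly what computational irreducibility says you generally cannot do.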

I mean, in some sense people do: they build a computer and program it precisely to answer your question about what the system does after fifty steps. Humans build computers.

Yes, yes, that's right. But they have created something which, when they run it, is doing something different from what's happening in their minds. They have outsourced that piece of computation from something that was internally happening in their minds to something that is now a tool external to them.

Right. So to you, humans didn't invent computers; they discovered them?

They discovered computation, and they invented the technology of computers.

So the computer is just a kind of way to plug into this whole stream of computation, one of the ways?

Sure. I mean, the particular

ways that we make computers out of semiconductors and electronics and so on, that's the particular technology stack we built. I mean, the story of a lot of what people have tried to do with other kinds of computing is finding a different set of underlying physical infrastructure for doing computation. You know, biology does lots of computation.

It does it using infrastructure that's different from semiconductors and electronics. It's a molecular-scale sort of computational process that hopefully we'll understand more about; I have some ideas about understanding more about that. But that's another representation of computation. Things that happen in the physical universe at the level of these evolving hypergraphs and so on: that's another sort of implementation layer for this abstract idea of computation.

So if GPT or large language models are starting to develop or implicitly understand the laws of language and thought, do you think those laws can be made explicit?

Yes, with a

bunch of effort. I mean, it's like doing natural science. What happens in natural science? You have the world doing all these complicated things, and then you discover, say, Newton's laws: this is how motion works. This particular sort of idealization of the world, this is how we describe it in a simple, computationally reducible way. And I think it's the same thing here: there are computationally reducible aspects of what's happening that you can get a kind of narrative theory for, just as we've got narrative theories in physics and so on.

Do you think it will be depressing or exciting when all the laws of thought are made explicit, human thought made explicit?

I think that once you understand computational irreducibility, it's neither of those things. Because the fact is, people will say, for example: oh, but I have free will, I operate in a way that... they have the idea that they're doing something internal to them, that they're figuring out what's happening.

But in fact, we think there are laws of physics that ultimately determine every electrical impulse in a nerve and things like this. So you might say: isn't it depressing that we are ultimately just determined by the rules of physics, so to speak? It's the same thing at a higher level. It's a shorter distance to get from a kind of semantic grammar to the way we might construct a piece of text than it is to get from individual nerve firings to how we construct a piece of text.

But it's not fundamentally different. And by the way, as soon as we have this other level of description, it helps us go even further, so we'll end up being able to produce more and more complicated kinds of things. Just like when we didn't have a computer: we knew certain rules, we could write them down, we could go a certain distance. But once we have a computer, we can go vastly further. And this is the same kind of thing.

You wrote a blog post titled "What Is ChatGPT Doing... and Why Does It Work?" We've been talking about this, but can we just step back and linger on this question: what is ChatGPT doing? There's a bunch of billions of parameters trained on a large number of words. Why does it seem to work? Is it, as you suggested, because there are laws of language that can be discovered by such a process?

We can talk about sort of the low level of what ChatGPT is doing. I mean, ultimately, you give it a prompt, and it's trying to work out: what should the next word be? Right?

Which is wild. Is it not surprising to you that this kind of low-level dumb training procedure can create something syntactically correct first, and semantically

correct second? The thing that has been sort of the story of my life is realizing that simple rules can do much more complicated things than you imagine, that something that starts simple, and is simple to describe, can grow into a thing that is vastly more complicated than you can imagine. And honestly, I've been thinking about this for forty years or so now, and it still surprises me.

Even, for example, in our Physics Project, thinking about the whole universe growing from simple rules, I still resist it, because I keep on thinking: how can something really complicated arise from something that simple? It just seems wrong. And yet, for the majority of my life, I have known, from the things I've studied, that this is the way things work.

So yes, it is wild that it's possible to generate a word at a time and produce a coherent essay, for example. But it's worth understanding how that's working. It's kind of like, if I were to say "the cat sat on the...", what's the next word? Okay.

So how does it figure out the next word? Well, it's seen trillions of words written on the internet, and it's seen "the cat sat on the floor," "the cat sat on the sofa," "the cat sat on the" whatever. So the minimal thing to do is just to say: let's look at what we saw on the internet.

We saw, say, ten thousand examples of "the cat sat on the." What was the most probable next word? Just pick that out and say that's the next word. And that's, at some level, what it's trying to do. Now, the problem is, there isn't enough text on the internet: if you have a prompt of reasonable length, that specific prompt will never have occurred on the internet.

And as you go further, there just won't be a place where you could have worked out probabilities from what was already there. You know, if you say "two plus two," there'll be a zillion examples of "two plus two equals four" and a very small number of examples of "two plus two equals five," and so on, and you can pretty much know what's going to happen. So then the question is: what do you do when you can't just work out from examples what's going to happen?
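
The minimal scheme described here, tallying what followed the prompt in the corpus and picking the most frequent continuation, can be sketched directly. The toy corpus below is invented for illustration:

```python
from collections import Counter

# Toy "internet": a tiny corpus of tokenized text.
corpus = ("the cat sat on the floor . the cat sat on the sofa . "
          "the cat sat on the mat . the cat sat on the mat .").split()

def next_word(prefix, corpus):
    """Most frequent word following `prefix` in the corpus, or None."""
    n = len(prefix)
    followers = [corpus[i + n]
                 for i in range(len(corpus) - n)
                 if corpus[i:i + n] == prefix]
    return Counter(followers).most_common(1)[0][0] if followers else None

print(next_word("sat on the".split(), corpus))  # -> mat (seen twice)
print(next_word("dog sat on".split(), corpus))  # -> None: prefix never seen
```

The second call shows exactly the sparsity problem Wolfram describes: any reasonably long prompt simply never occurs in the corpus, so pure lookup fails and you need a model.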

You have to have a model. And this idea of making models of things is an idea that, really... I think Galileo was probably one of the first people to work it out. I give an example in that piece I wrote about ChatGPT.

It's like Galileo was dropping cannonballs off the different floors of the Tower of Pisa. Okay, you drop a cannonball off this floor, you drop a cannonball off that floor; you miss floor five or something, for whatever reason. But you know the time it took the cannonball to fall to the ground from floors one, two, three, four, six, seven, eight, for example.

Then the question is: can you make a model which figures out how long it would take the ball to fall to the ground from the floor you didn't explicitly measure?

And the thing that Galileo realized is that you can use math, mathematical formulas, to make a model for how long it will take the ball to fall. So now the question is, okay, you want to make a model for something much more elaborate. For example, you've got this arrangement of pixels: does this arrangement of pixels correspond to something we'd recognize, is it an A or a B? And you can make something similar.
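
Wolfram's Galileo example can be made concrete. Below, fall times are "measured" from every floor except the fifth (the numbers are idealized, with an assumed 3 m per floor), and a one-parameter model, time squared proportional to height, is fitted by least squares and used to predict the missing floor:

```python
import math

g = 9.8                                 # gravitational acceleration, m/s^2
floors = [1, 2, 3, 4, 6, 7, 8]          # floor 5 deliberately unmeasured
heights = [3.0 * f for f in floors]     # assumed 3 m per floor
times = [math.sqrt(2 * h / g) for h in heights]  # idealized "measurements"

# Model: t^2 = c * h. Least-squares fit of the single parameter c.
c = sum(t * t * h for t, h in zip(times, heights)) / sum(h * h for h in heights)

t5 = math.sqrt(c * 3.0 * 5)             # predicted fall time from floor 5
print(round(t5, 2))                     # -> 1.75
```

The point is not the physics but the move: a formula with a fitted parameter answers a question the raw data never saw, which is the same role the neural net plays for pixels and words.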

Each pixel is like a parameter in some equation, and you could write down this giant equation where the answer is either a one or a two, an A or a B. The question then is: what kind of model successfully reproduces the way that we humans would conclude that this is an A and this is a B? If there's a complicated extra tail on the top of the A, would we then conclude something different? What is the type of model that maps well onto the way we humans make distinctions about things? And the big kind of meta-discovery is that a neural

net is such a model. It's not obvious it would be such a model; it could have been that human distinctions were not captured. We could have kept searching around for a type of model, maybe a mathematical model, maybe a model based on something else, that captures typical human distinctions about things.

It turns out this model, which is actually very much the way we think the architecture of brains works, perhaps not surprisingly, actually corresponds to the way we make these distinctions. And so the key point is that this neural net model makes distinctions and generalizes things in sort of the same way that we humans do. And that's why, when you say "the cat sat on the green" whatever, even though it didn't see many examples of "the cat sat on the green" whatever, or some other animal sat on the green whatever, and I'm sure that particular sentence does not occur on the internet...

...it has to make a model; it has to generalize from the actual examples it has seen. And the fact is that neural nets generalize in the same kind of way that we humans do. Aliens might look at a neural net's generalizations and say, that's crazy: when you put that extra dot on the A, it isn't an A anymore, that messes the whole thing up.

But we humans make distinctions which seem to correspond to the kinds of distinctions that neural nets make. So then, the thing that is just amazing to me about ChatGPT is how similar its structure is to the very original way people imagined neural nets might work back in 1943. There's a lot of detailed engineering, great cleverness, but it's really the same idea. In fact, even the elaborations of that idea, where people said, let's put in some particular structure to try to make the neural net more elaborate, to be very clever about it: most of that didn't matter.

There are some things that do seem to matter when you train this neural net. The one thing, this transformer architecture, this attention idea, really has to do with whether every one of these neurons connects to every other neuron, or whether it's somehow causally localized, so to speak. Is it like we're making a sequence of words, where the words depend on previous words, rather than everything depending on everything? That seems to be important, along with just organizing things so that you don't have a sort of giant mess.

But the thing worth understanding is: what is ChatGPT in the end? What is the neural net in the end? In the neural net, each neuron is taking inputs from a bunch of other neurons, and eventually it's going to have a numerical value.

It's going to compute some number. It says: I'm going to look at the neurons above me, it's kind of a series of layers, and ask, what are the values of all those neurons? Then it's going to multiply those values by these weights and add them up, and then it's going to apply some function that says: if it's bigger than zero or something, make it one, and otherwise make it zero, or some slightly more complicated function.
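
The per-neuron step described here, weight the incoming values, add them up, apply a threshold or a smoother function, is only a few lines. The inputs, weights, and bias below are arbitrary illustrative numbers:

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the values coming from the layer above.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Nonlinearity: a hard threshold, or the smoother sigmoid variant.
    hard = 1 if total > 0 else 0
    smooth = 1 / (1 + math.exp(-total))
    return hard, smooth

hard, smooth = neuron([0.5, -1.0, 2.0], [0.1, 0.4, 0.3], bias=-0.2)
print(hard, round(smooth, 3))  # -> 1 0.512
```

A model like ChatGPT is nothing more than a vast number of these units wired into layers; all the interesting behavior comes from the trained values of the weights.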

You know very well how this works. But it's a giant equation with a lot of variables. You mentioned figuring out where the ball falls when you don't have data for one of the floors; the equation here is not as simple. It's a one

hundred and seventy-five billion term equation.

It's quite surprising that, in some sense, a simple procedure of training such an equation can lead to a

good...

representation.

Of natural language, yes. The real issue is this architecture of a neural net, where what's happening is... neural nets only ever deal with numbers. So you've turned the sentence that you started with into a bunch of numbers: say, by mapping each of the fifty thousand words of English, each word or part of a word, into some number.

You feed all those numbers in, and those numbers just become the values of these neurons. And then what happens is that it just ripples down, going layer to layer, until it gets to the end.

I think ChatGPT is about four hundred layers, and it just goes through once. For every new word it's going to compute, it says: here are the numbers from the words before; let's compute the probabilities it estimates for each of the fifty thousand possible words that could come next. And then it decides: sometimes it will use the most probable word, sometimes not the most probable word. It's an interesting fact that there's this so-called temperature parameter. At temperature zero,

it's always using the word that it estimated was the most probable thing to come next. If you increase the temperature, it'll be more and more random in its selection of words; it will go down to lower and lower probability words. A thing I was playing with just recently was the transition that happens.

As you increase the temperature, the thing goes bonkers at a particular temperature, maybe around 1.2. The thing I was noticing just yesterday is that usually it's giving reasonable answers, and then at that temperature, with some probability, it just starts spouting nonsense, and nobody knows why this happens. And by the way, the thing to understand is that it's putting down one word at a time, but there's an outer loop: it says, okay, I put down a word; now let's take the whole thing I wrote so far, feed that back in, and put down another word. That outer loop, which seems almost trivial, is really important to the operation of the thing. For example, one of the things that's kind of funny is that it'll give an answer, and you say to it, is that answer correct? And it'll say no. Now why has that happened?

Right?

Why can it do that? Well, the answer is because it's going one word at a time, sort of forwards. It came along some chain of thought, in a sense, and it came up with completely the wrong answer. But as soon as you feed it the whole thing it came up with, it immediately knows that isn't right. It can immediately recognize that it was, you know, a bad syllogism or something, and can see what happened, even though, as it was being led down the garden path, so to speak, it came

to the wrong place. But it's fascinating that this kind of procedure converges to something that forms a pretty good compressed representation of the language on the internet. That's quite...

I'm not sure

what to make of it.
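
The mechanism Wolfram just described, estimate probabilities for the next word, sample with a temperature, append, and feed the whole text back in, can be sketched with a toy probability table standing in for the real 400-layer network. The table and vocabulary are invented for illustration, and the toy conditions only on the last word, where a real LLM conditions on the entire text so far:

```python
import random

# Toy "model": next-word probabilities conditioned only on the last word.
probs = {
    "the": {"cat": 0.6, "dog": 0.3, "mat": 0.1},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.5, "ran": 0.5},
    "sat": {"on": 1.0},
    "ran": {"on": 1.0},
    "on":  {"the": 1.0},
    "mat": {"on": 1.0},
}

def sample_next(dist, temperature):
    if temperature == 0:                 # temperature 0: always take the argmax
        return max(dist, key=dist.get)
    # Reweight as p^(1/T) and renormalize; higher T flattens the distribution.
    weights = {w: p ** (1.0 / temperature) for w, p in dist.items()}
    r = random.random() * sum(weights.values())
    for word, wt in weights.items():
        r -= wt
        if r <= 0:
            return word
    return word                          # guard against rounding at the tail

def generate(start, n_words, temperature):
    text = [start]
    for _ in range(n_words):             # the outer loop: feed everything back in
        text.append(sample_next(probs[text[-1]], temperature))
    return " ".join(text)

print(generate("the", 6, temperature=0))    # -> "the cat sat on the cat sat"
print(generate("the", 6, temperature=1.0))  # a random walk through the table
```

At temperature 0 the output is deterministic and quickly repetitive; raising the temperature lets lower-probability words through, which is the knob Wolfram was experimenting with.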

Well, look, there are many things we don't understand. For example, 175 billion weights: that may be about a trillion bits of information, which is very comparable to the size of the training set that was used. Why that is, whether it stands to some kind of reason that the number of weights should be comparable, I don't know; I can't really give you a good argument. In a sense, insofar as there are definite rules for what's going on, you might expect that eventually we will have a much smaller neural net that will successfully capture what's happening.

I don't think the best way to do it is ultimately a neural net. A neural net is what you do when you don't know any other way to structure the thing, and it's a very good thing to do if you don't know any other way to structure it. And for the last two thousand years, we haven't known any other way, so this is a pretty good way to start. But that doesn't mean you can't find, in a sense, more symbolic rules for what's going on, after which you can get rid of much of the structure of the neural net and replace it by things which are pure steps of computation, so to speak, with neural net stuff around the edges. And that becomes just a much

simpler way to do it.

So the neural net, you hope, will reveal to us good symbolic rules that make the need for the neural net less and less, right?

Right. And there will still be some stuff that's kind of fuzzy. It's this question of: what can we formalize? What can we turn into computational language? And what just happens the way it does because brains are set up that way?

What do you think are the limitations of large language models, just to make it explicit?

Well, I mean,

I think deep computation is not what large language models do. That's just a different kind of thing. With a large language model, if you're trying to do many steps of a computation, the only way you get to do that right now is by spooling out the whole chain of thought as a bunch of words, basically.

And you can make a Turing machine out of that if you want to; I was just doing that construction. In principle, you can make an arbitrary computation by just spooling out the words, but it's a bizarre and inefficient way to do it. Deep computation is really not the thing. The things humans can do quickly...

Large language models will probably be able to do those well. Anything that's a kind of off-the-top-of-your-head thing is good for large language models. And the things you do off the top of your head, you may not always get right, but it's thinking it through the same way we do.

But I wonder if there's an automated way to do something humans do well, much faster, in a kind of loop. Generating arbitrarily large code bases of Wolfram Language, for example.

Well, the question is, what do you

want the code base to do? Escape control and take over the world?

Okay. So the thing is, when people say, we want to build this giant thing, a giant piece of computational language: in a sense, it's sort of a failure of computational language if the thing you have to build is giant. If you have a small description, that's the thing you represent in computational language, and then the computer can compute from that.

In a sense, as soon as you're giving a description, you have to somehow make that description something definite, something formal. And to say, okay, I'm going to give this piece of natural language and it's going to spit out this giant formal structure: in a sense, that doesn't really make sense, except insofar as that piece of natural language plugs into what we socially know, so to speak, plugs into our corpus of knowledge. Then it's a way of capturing a piece of that corpus of knowledge, and hopefully we will have done that in computational language. How do you make it do something that's big? Well, you have to have a way to describe what you want.

I can make it more explicit. This just popped into my head: iterate through all the members of Congress and figure out how to convince them that they have to let me, meaning the AI system, become president; pass all the laws that allow AI systems to take control and be president, I don't know. So that's very explicit: figure out the individual life story of each congressman, figure out how to control and manipulate them, get all the information, what would be the biggest fear of this congressman, in such a way that you can take action on it in the digital space, maybe threatening destruction of reputation or something like this.

Right? If I can describe what I want, to what extent can a large language model automate that?

With the help of the concretization of something like Wolfram Language that makes it more grounded.

Probably a rather long way.

I'm mostly surprised how quickly I was able to generate that.

Yeah, yeah, right.

That's an attack vector.

You know, I swear I hadn't

thought about this before. It is funny how quickly it came, which is a very concerning thing, because this idea would probably do quite a bit of damage, and there might be a very large number of other such ideas.

Well, I'll give you a much more benign version of that idea. Okay, you're going to make an AI tutoring system. The benign version of what you're saying is: I want this person to understand this point. You're essentially doing machine learning where the loss function, the thing you're trying to optimize, is getting the human to understand this point.

And when you do a test on the human: yes, they correctly understand it. So that works. And I am confident that large language model technology combined with computational language is going to be able to do pretty well at teaching us humans things. And it's going to be an interesting phenomenon, because individualized teaching is a thing that has been kind of a goal for a long time.
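The tutoring setup described here can be caricatured as an optimization loop: keep choosing explanations until the quiz score crosses a threshold. A minimal sketch in Python, where a simulated learner and a fixed "explanation" rule are toy stand-ins (both functions and their numbers are assumptions for illustration, not a real tutoring system):

```python
def quiz(mastery):
    """Stand-in for testing the human; returns a score in [0, 1]."""
    return min(1.0, mastery)

def explain(mastery):
    """Stand-in for an explanation tailored to what the learner
    already knows; assume it closes 40% of the remaining gap."""
    return mastery + 0.4 * (1.0 - mastery)

def tutor(target_score=0.9, max_rounds=20):
    """Minimize loss = 1 - quiz score: keep explaining until the
    human demonstrably understands the point (or we give up)."""
    mastery, rounds = 0.0, 0
    while quiz(mastery) < target_score and rounds < max_rounds:
        mastery = explain(mastery)
        rounds += 1
    return rounds

print(tutor())  # rounds of explanation needed in this toy model
```

In a real system, `explain` would be the language model conditioned on the learner's history and `quiz` would grade actual answers; the loop structure is the point, not the toy numbers.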

I think we're going to get that. And I think it has many consequences. Like, if the AI knows me, and I tell it I'm about to do this thing, what are the three things I need to know, given what I already know? Let's say I'm looking at some paper: there's a version of the summary of that paper that is optimized for me, so to speak.

Optimized for where I really am. And I think that's really going to work.

It could find the major gaps in your knowledge of the field, the ones that would actually give you a deep understanding of the topic, right?

And that's an important thing, because when you think about education and so on, it really changes what's worth doing and what's not worth doing. You know, in my life I've learned lots of different fields, and every time I think, this is the one I'm not going to be able to learn. But it turns out there are sort of meta-methods for learning these things in the end.

And I think this idea that it becomes easier to be fed knowledge, so to speak, that if you need this particular thing you can get taught it in an efficient way, is an interesting feature. And I think it makes things like the value of big towers of specialized knowledge less significant compared to the kind of meta-knowledge of understanding the big picture, of being able to connect things together.

There's been a huge trend of becoming more and more specialized, because we have to ascend these towers of knowledge. But once you have more automation, and you're able to get to that place on the tower without having to go through all those steps, I think it sort of changes that picture.

Interesting. So your intuition is that, in terms of the collective intelligence of the species, and the individual minds that make up that collective, they will trend towards being generalists and being kind of philosophers.

I think that's where the humans are going to be useful. A lot of the drilling, the mechanical working out of things, is much more automated; it's much more AI territory, so to speak.

No more PhDs.

Well, that's interesting. Yes.

I mean, this kind of tower of specialization has been a feature of how we've accumulated lots of knowledge as a species. And in a sense, every time we have automation, a building of tools, it becomes less necessary to know that whole tower; you can just use the tool to get to the top of it. And ultimately, when we think about what the AIs do versus what the humans do: the AIs, you tell them, go achieve this particular objective, and okay, they can maybe figure out a way to achieve that objective.

But if we say, what objective would you like to achieve? The AI has no intrinsic idea of that. It's not a defined thing. That's a thing which has to come from some other entity. And insofar as we are in charge, so to speak, our kind of web of society and history and so on is the thing that defines what objective we want to get to. That's where we humans are necessarily involved.

To push back a little bit: don't you think that future versions of GPT would be able to give a good answer to the question, what objective would you like to achieve?

On what basis? I mean, look, here's the terrible thing that could happen: they're taking the average of the internet, and they're saying, from the average of the internet, this is what people want to do.

Well, there's the theory that the most entertaining outcome is the most likely.

Okay, that could be one.

That could be one objective: maximize global entertainment. The dark version of that is drama; the good version of that is fun.

Right. So this question of, if you say to the AI, what does the species want to achieve?

There'll be an answer, right?

There'll be an answer. It'll be what the average of the internet says the species wants to achieve.

Well, I think you're using the word "average" very loosely there, right? I think the answers will become more and more interesting as these language models are trained better and better.

No. But I mean, in the end, it's a reflection back of what we've already said.

Yes, but there's presumably a deeper wisdom to the collective intelligence than to each individual. Isn't that what we're trying to do with society?

Well, that's an important, interesting question. I mean, insofar as one works on trying to innovate, to figure out new things and so on, it is sometimes a complicated interplay between the individual doing the crazy thing, off on some spur, so to speak, versus the collective that's trying to do the high-inertia average thing. And sometimes the collective is bubbling up things that are interesting, and sometimes it's pulling down the attempt to go in an innovative direction.

But don't you think the large language models could see beyond that simplification? They'll say, maybe intellectual and career diversity is really important, so you need the crazy people out on the outskirts. And so the actual purpose of this whole thing is to explore through this kind of dynamics that we've been using as a human civilization, where most of us focus on one thing,

and there are the crazy people on the outskirts doing the opposite of that one thing, and together they pull the whole society along. There's mainstream science, and then there's the crazy science. That's just the history of human civilization.

And maybe the AIs will be able to see that. And the more impressed we are by a language model telling us this, the more control we'll give to it, and the more we'll be willing to let it run our society. And hence there's this kind of loop where the society could be manipulated by the AI system running it.

Right. I mean, look, one of the things that's sort of interesting is, we might say we always think we're making progress. But in a sense, by saying, let's take what already exists and use that as a model for what should exist: it's interesting that, for example, many religions have taken that point of view. There's a sacred book that got written at time X, and it defines how people should act for all future time. It's a model that people have operated with, and in a sense this is a version of that kind of statement. It's like taking the 2023 version of how the world has expressed itself, and using that to define what the world should do in the future.

But it's an imprecise definition, right? Because just as with religious texts, the human interpretation of what GPT says will be the perturbation in the system; it will be the noise it is full of. There's uncertainty: it doesn't exactly tell you what to do, it gives you a narrative, like a turn-the-other-cheek kind of narrative. It's not a fully instructive narrative.

Well, until the AIs are in control of the systems in the world.

Then they'll be able to very precisely tell

you what they'll do: they'll just do this or that thing. And not only that, they'll be auto-suggesting to each person: do this next, do that next. So I think it's a slightly more prescriptive situation than one has typically seen.

But I think this whole question of what's left for the humans, so to speak: to what extent is there an existing corpus of purpose for humans, defined by what's on the internet and so on? That's an important thing. But then the question is, as we explore what we can think of as the computational universe, as we explore all these different possibilities for what we could do, all these different inventions we could make, which ones do we choose to follow? Those choices are the things that, if the humans want to still have kind of human progress, we get to make.

In other words, if you say, let's take what exists today and use that as the determiner of everything in the future: the opportunity for humans is that there will be many possibilities thrown up, many different things that could happen or be done. And insofar as we want to be in the loop, the thing that makes sense for us to be in the loop doing is picking which of those possibilities we want.

But the degree to which we're really picking becomes questionable, because of the feedback loop: we're influenced by the very systems we're choosing among, as they become more and more the source of our education and wisdom and knowledge.

Right, and the AIs take over. I mean, I've thought for a long time that it's AI auto-suggestion that's really the thing that makes the AIs take over. The humans just follow along.

We will no longer write emails to each other; we will just send the auto-suggested emails.

Yeah, yeah. But the place where humans are potentially in the loop is when there's a choice, a choice we could make based on our whole web of history and so on. Insofar as it's all just determined, the humans don't have a place.

And by the way, at some level it's all a complicated philosophical issue, because at some level the universe is just doing what it does, and we are parts of that universe that are necessarily doing what we do, so to speak. Yet we feel we have sort of agency in what we're doing, and that's its own separate, interesting issue.

And we can also feel like we're the final destination of what the universe was meant to create, but we could very well be, and likely are, some kind of intermediate step. Yeah, we're almost certainly some intermediate step. The question is whether there's some cooler, more complex, more interesting thing that's going to come.

Oh, the computational universe is full of such things.

But in our particular pocket, specifically: is this the best we're going to do, or not?

We can make all kinds of interesting things in the computational universe. But when we look at them, we say, yeah, that's a thing, but it doesn't really connect with our current way of thinking about things. It's like in mathematics: we've got certain theorems.

There are about three or four million theorems that human mathematicians have written down and published. But there are an infinite number of possible mathematical theorems. We could just go out into the universe of possible theorems and pick another one, and then people will look at it and say, I don't know what this theorem means; it's not connected to the things that are part of the web of history we're dealing with.

You know, I think one point to make about understanding AI and its relationship to us: as we get this whole infrastructure of AIs doing their thing, in a way that's perhaps not readily understandable by us humans, you might say, that's a very weird situation. How can we have built a thing that behaves in a way we can't understand, full of computational irreducibility and so on? What's it going to feel like when the world is run by AIs whose operations we can't understand? And the thing one realizes is, actually, we've seen this before.

That's what happens when we exist in the natural world. The natural world is full of things that operate according to definite rules, with all kinds of computational irreducibility. We don't understand what the natural world is doing; we only occasionally do.

And when you say, the AI is going to wipe us out, for example: the question is whether the machinations of the AIs are going to lead to this thing that eventually comes and destroys the species. But we can ask the same thing about the natural world: are the machinations of the natural world eventually going to lead to this thing that's going to, you know, make the earth explode, or something like this? Those are similar questions.

And insofar as we think we understand what's happening in the natural world, that's a result of science, natural science and so on. One of the things we can expect, when there's this giant infrastructure of AIs, is that we'll have to invent a new kind of natural science: the natural science that explains to us how the AIs work.

It's kind of like we have, I don't know, a horse or something, and we're trying to ride the horse and go from here to there. We can't really understand how the horse works inside, but we can learn certain rules and certain approaches that persuade the horse to go from here to there and take us along.

And that's the same type of thing we're dealing with, with a sort of incomprehensible, computationally irreducible AI. But we can find these pockets of reducibility; it's like grabbing onto the mane of the horse to be able to ride it. We figure out that if we do this and that, we can ride the horse, and that's a successful way to get it to do what we're

interested in doing. There does seem to be a difference between a horse and a large language model, or something that could be called AGI, connected to the internet. So let me ask you a big philosophical question about the threats of these things.

There are a lot of people, like Eliezer Yudkowsky, who worry about the existential risks of AI systems. Is that something you worry about? When you're building an incredible system like Wolfram Alpha, you can kind of get lost in it.

I try to think a little bit about the implications of what one's doing. Like the Manhattan

Project, a situation where some of the most incredible physics and engineering was being done, but: where is this going? I think some

of these arguments, the kind that say there'll always be a smarter AI, that eventually the AIs will get smarter than us and then all sorts of terrible things will happen: some of those arguments remind me of the ontological arguments for the existence of God, and things like this. Arguments that are based on some particular, fairly simple model, often of the form, there is always a greater this, that, or the other.

And what tends to happen in the reality of how these things develop is that it's more complicated than you expect. The simple logical argument that says, oh, eventually there'll be a superintelligence and then it will do this and that, turns out not to really be the story. It turns out to be more complicated. So, for example, here's an example of an issue.

Is there an apex intelligence, just like there might be an apex predator in some ecosystem? Is there going to be an apex intelligence, the most intelligent thing that there could possibly be? I think the answer is no. And in fact, we already know this, and it's back to the whole computational irreducibility story.

There's a question of, if you have a Turing machine, and you have the Turing machine that runs as long as possible before it halts, you might ask: is this the apex machine, the one that does that? There will always be a machine that can go longer.

And as you go out into the infinite collection of possible Turing machines, you'll never have reached the end, so to speak. It's like the question of whether there will always be another invention, whether you'll always be able to invent another thing. The answer is yes: there's an infinite tower of possible inventions.
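The Turing-machine point here is essentially the busy beaver game. A small Python sketch (assuming the standard quintuple formalism, where the halting transition writes, moves, and counts as a step): enumerate every 2-state, 2-symbol machine, run each from a blank tape, and record the longest halting run. For two states the maximum is six steps; as states are added it grows faster than any computable function, so there is no "apex machine":

```python
from itertools import product

def run(machine, max_steps=200):
    """Run a 2-state, 2-symbol Turing machine from a blank tape.
    Returns the number of steps taken if it halts, else None."""
    tape, pos, state = {}, 0, 0
    for step in range(1, max_steps + 1):
        write, move, nxt = machine[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if nxt == "H":          # entered the halt state
            return step
        state = nxt
    return None                 # treated as non-halting at this cutoff

# Every possible transition: write a 0 or 1, move left or right,
# then go to state 0, state 1, or the halt state "H".
options = list(product([0, 1], [-1, 1], [0, 1, "H"]))
keys = [(s, r) for s in (0, 1) for r in (0, 1)]

best = 0
for table in product(options, repeat=len(keys)):
    steps = run(dict(zip(keys, table)))
    if steps is not None:
        best = max(best, steps)

print(best)  # longest halting run among all 2-state machines
```

The cutoff of 200 steps is safe here only because the 2-state maximum is known to be small; for more states, deciding which machines never halt is exactly the uncomputable part.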

That's one definition of apex. But the other, which I also think might be true: is there a species that is the apex intelligence right now on Earth? It's not trivial to say that humans are it.

It's not trivial, I agree. You know, I've long been curious about other kinds of intelligences, so to speak. I view intelligence as being like computation: you have a set of rules, and you deduce what happens.

I've tended to think that there's a specialization of computation that is the sort of consciousness-like thing, which has to do with computational boundedness, a single thread of experience, these kinds of things: a specialization of computation that corresponds to a somewhat human-like experience of the world. Now, the question is whether there are other intelligences, like, you know, the way we say the weather has a mind of its own.

It's a different kind of intelligence that can compute kinds of things that are hard for us to compute, but it is not well aligned with us; it doesn't think the way we think about things. And in this idea of different intelligences, every different human mind is a different intelligence that thinks about things in different ways.

In terms of the formalism of our physics project, we talk about this idea of rulial space, the space of all possible rule systems. Different minds are, in a sense, at different points in rulial space. Human minds, ones that have grown up with the same kinds of cultural ideas and things like this, might be pretty close in rulial space: pretty easy for them to communicate, pretty easy to translate, pretty easy to move from the place in rulial space that corresponds to one mind to the place that corresponds to another, nearby mind. When we deal with things that are more distant in rulial space, like, you know, the pet cat or something: the pet cat has some aspects that are shared with us, the emotional responses of the cat are somewhat similar to ours, but the cat is further away in rulial space than people are.

And so then the question is, can we make a translation from our thought processes to the thought processes of a cat, or something like this? And what will we get? What will happen when we get there? And I think it's the case that many animals, dogs, for example, have elaborate olfactory systems.

They have a sort of smell architecture of the world, so to speak, in a way that we don't. And so if you were talking to the dog, and you could communicate in a language, the dog would say, well, this is a flowing smell of this, that, and the other thing: concepts that we just don't have any idea about. Now, what's interesting is that one day we will have chemical sensors that do a really pretty good job.

We'll have artificial noses that work pretty well, and we might have our augmented reality systems show us the same kind of map that the dog perceives, similar to what happens in the dog's brain. And eventually we will have expanded in rulial space to the point where we have those same sensory experiences that dogs have, and we will have internalized what it means to have, you know, that smell landscape, whatever.

And so then we will have colonized that part of rulial space. Until then, some of the things that animals do we successfully understand; others we do not. And the question of what representation to use, how we convert the things that animals think about into things that we can think about: that's not a trivial thing. And I've long been curious; I had a very bizarre project at one point of trying to make an iPad game that a cat could win against its owner.

It feels like there's a deep philosophical rabbit hole there, though.

Yes, yes. You know, I was curious: if pets could work in Minecraft or something and could construct things, what would they construct? And would what they construct be something where we look at it and say, yeah, I recognize that? Or would it be something that looks to us like something out there in the computational universe that one of my little cellular automata might have produced, where we say, yeah, I can sort of see it operates according to some rules, but I don't know why you would use those rules.
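The "little automata" referred to are Wolfram's elementary cellular automata. A minimal Python sketch of one of them, rule 30, shows the flavor: a pattern we can verify obeys simple rules without having any idea why one would choose those rules:

```python
def step(cells, rule=30):
    """One step of an elementary cellular automaton (rule 30 by default).
    Each cell's new value is the rule bit indexed by its three-cell
    neighborhood, read as a binary number; the row wraps around."""
    n = len(cells)
    return [
        (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

width = 31
row = [0] * width
row[width // 2] = 1          # start from a single black cell
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Running it prints the familiar rule 30 triangle, orderly on one edge and seemingly random on the other.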

I don't know why you would. Actually, just to linger on that seriously: is there a connector in rulial space between us and a cat where the cat could genuinely win? An iPad is a very limited interface. I wonder if there's a game where cats win. I think the problem

is that cats tend not to be that interested in what's happening on the iPad.

Yeah, it's an interface issue.

Right, right. No, I think it is likely. I mean, there are plenty of animals that would successfully eat us if we were exposed to them: something that's going to pounce faster than we can get out of the way, and so on. So there are plenty of those, and probably we think we've hidden ourselves, but we haven't successfully hidden ourselves.

That's physical strength. I wonder if there's something more in the realm of intelligence where an animal like a cat could win out.

Well, I think there are certainly things in terms of the speed of processing, certain kinds of things, for sure. The question is, is there a game of chess, for example, a cats' chess, that the cats could play against each other, and if we tried to play a cat, we'd always lose? I don't know. It might have

to do with speed, but it might have to do with concepts. For example, the right concepts that

the cats had. I tend to think that our species, through its invention of language, has managed to build up this tower of abstraction that, for things like a chess-like game, will make us win. In other words, through the fact that we've had language and learned abstraction, we've become smarter at those kinds of abstract things. Now, that doesn't make us smarter at catching a mouse; it makes us smarter at the things we've chosen to concern ourselves with, which are these kinds of abstract things.

And again, this is back to the question of what one cares about. If we could translate things to have a discussion with the cat, the cat would say, you know, I'm very excited that this light is moving. And we'd say, why do you care? And the cat would say, it's the most important thing in the world that this thing moves around. I mean, it's like when you look at archaeological remains and say, these people had this belief system about this and that, and it was the most important thing in the world to them, and now we look at it and say, we don't see the point.

I mean, I've been curious about those handprints on caves from twenty thousand or more years ago. Nobody knows what those handprints were. They may have been a representation of the most important thing you can imagine, or they may just have been some kid who rubbed their hands in the mud and stuck them on the walls of the cave.

We don't know. And this whole question of, when you ask, what's the smartest thing around: there's the question of what kind of computation you're trying to do. If you've got some well-defined computation, how do you implement it? Well, you can implement it with nerve cells firing; you can implement it with silicon and electronics.

You can implement it with some kind of molecular computation process, in the human immune system or in some molecular-biology kind of thing. There are different ways to implement it. And those different implementation methods will have different speeds and will be able to do different things.

So an interesting question would be: what kinds of abstractions are most natural in these different kinds of systems? For a cat, for example: in the visual scene that we see, we pick out objects; we recognize certain things in that visual scene.

A cat might, in principle, recognize different things. But I suspect, since biological evolution is very slow, that what a cat notices is very similar.

We even know that from some neurophysiology: what a cat notices is very similar to what we notice. There's one obvious difference: cats have only two kinds of color receptors, so they don't see the same colors that we do. Now, we might say we're better: we have three color receptors, red, green, blue. But we're not the overall winner.

I think the mantis shrimp is the overall winner, with fifteen or so color receptors. It can make distinctions that we can't: the mantis shrimp's view of reality, in terms of color, is much richer than ours. But what's interesting is, how do we get there? So imagine we have an augmented reality system

that is seeing into the infrared, into the ultraviolet, things like this, and it's translating that into something connectable to our brains, either through our eyes or more directly. Then eventually our web of the types of things we understand will extend to those kinds of constructs, just as it has extended before. I mean, there are plenty of things we see in the modern world because we made them with technology, and now we understand what they are; but if we'd never seen that kind of thing, we wouldn't have a way to describe it, wouldn't have a way to understand

it, and so on. Alright, so that all stemmed from our conversation about whether AIs are going to kill all of us. We've discussed this kind of spreading of intelligence through rulial space, and that in practice things just seem to get more complicated.

Things are more complicated than the story of: well, if you build a thing that's plus-one intelligence, that thing will build the thing that's plus-two intelligence, and then plus-three intelligence, and it will become more intelligent exponentially faster, and so on, until it completely destroys everything. But that intuition, which might be too simple, might still carry validity in two interesting trajectories. One: a superintelligent system remains in rulial proximity to humans, so we're like, holy crap,

this thing is really intelligent, let's elect it president. And two, there could be a perhaps more terrifying intelligence that starts moving away. They might be around us now, moving far away in rulial space, but they're still sharing physical resources with us, so they could rob us of those physical resources and destroy humans kind of casually, just like nature could.

Like nature could.

But it seems like there's something unique about AI systems, where there's this kind of exponential growth. Although nature has so many things, and one of the things nature has, which is very interesting, is viruses, for example: there are systems within nature

that have this kind of exponential effect. And that terrifies us humans, because there are only so many billions of us; it's not that hard to just kind of kill all of us very quickly. So, I mean, is that something you think about?

I've thought about that, yes.

The threat of it. Are you concerned about it the way somebody like Yudkowsky is, for example: big, painful negative effects of AI on society?

You know, no, but perhaps that's because I'm intrinsically an optimist. I mean, there's this idea that there's going to be this one thing, and it's going to just somehow take over everything.

You know, maybe I have faith in computational irreducibility, so to speak: there are always unintended corners. It's just like when somebody has some bioweapon and says, we're going to release this and it's going to do all this harm; but then it turns out to be more complicated than that, because some humans are different, and the exact way it works is a little different than you expect. It's rarely the case that the great big smash, you know, you smash the thing with something,

the asteroid collides with the earth, yes, and the earth is cold for two years or something, and lots of things die, but not everything dies. And there's usually, I mean, this is, in a sense, the story of computational irreducibility.

There are always unexpected corners, always unexpected consequences. And I don't think the getting whacked over the head with something, and then it's all gone, is how it goes. That can obviously happen; the earth can be swallowed up by a black hole or something,

And then it's presumably all over. But, you know, on this question of what I think the realistic paths are: I think people will increasingly have to get used to the phenomenon of computational irreducibility. There's an idea that because we built the machines, we can understand what they do, and we're going to be able to control what happens. Well, that's not really right.

The question is: is the result of that lack of control going to be that the machines kind of conspire and wipe us out? Maybe just because I'm an optimist, I don't tend to think that's in the cards. As a realistic thing, I suspect what will emerge is maybe kind of an ecosystem of AIs. Again, I don't really know.

I mean, it's hard to be clear about what will happen. I think there are a lot of details of, you know, what could we do, what systems in the world could be connected to it.

And I have to say, just a couple of days ago I was working on this ChatGPT plugin kit that we have for Wolfram Language, where you can create a plugin, and it runs Wolfram Language code, and it can run Wolfram Language code back on your own computer. And I was thinking, well, I can just tell ChatGPT, create a piece of code and then just run it on my computer. And I'm like, you know, what could possibly go wrong?

So was this possibility exciting or scary?

It was a little bit scary, actually, because I realized I'm delegating to the AI: just write a piece of code, you're in charge, a piece of code running on my computer, operating on my files.

It's like Russian roulette, but much more complicated.

Yes, yes.

It's a good drinking game. I don't know.

Right. But it's an interesting question: if you do that, what is the sandboxing that you should have? And that's a version of that question for the world. That is, as soon as you put the AI in charge of things, how many constraints should there be on these systems before you put the AI in charge of all the weapons, all these different kinds of systems?

So here's the fun part about sandboxes: the AI knows about them. It has the tools to crack them.

Look, the fundamental problem of computer security is computational irreducibility. Because the fact is, any sandbox is never going to be a perfect sandbox if you want the system to be able to do interesting things. This is the generic problem of computer security: as soon as you have a firewall that is sophisticated enough to be a universal computer, that means it can do anything. And as long as you find a way to poke it so that you actually get it to do that universal computation thing, that's the way you kind of crawl around and get it to do the thing it wasn't intended to do. And that's another version of computational irreducibility: you can get it to do the thing you didn't expect it to do, so to speak.
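As an aside for readers: the point about rich behavior hiding in simple rules can be made concrete with Rule 30, Wolfram's classic example of a simple program whose detailed behavior apparently can't be shortcut; to know what it does after n steps, you essentially have to run all n steps. A minimal Python sketch (the update rule is the standard Rule 30; the rendering choices are my own, not from the conversation):

```python
# Rule 30 cellular automaton: a simple program whose behavior seems
# computationally irreducible -- no known shortcut predicts step n
# without running the intervening steps.

def rule30_step(cells):
    """Apply one step of Rule 30 to a row of 0/1 cells (edges padded with 0)."""
    padded = [0] + cells + [0]
    return [
        # Rule 30: new cell = left XOR (center OR right)
        padded[i - 1] ^ (padded[i] | padded[i + 1])
        for i in range(1, len(padded) - 1)
    ]

def run_rule30(steps):
    """Start from a single black cell and evolve for `steps` generations."""
    row = [0] * steps + [1] + [0] * steps   # wide enough for the growing pattern
    history = [row]
    for _ in range(steps):
        row = rule30_step(row)
        history.append(row)
    return history

if __name__ == "__main__":
    for row in run_rule30(15):
        print("".join("#" if c else " " for c in row))
```

Running it prints the familiar triangular pattern whose center column looks random; despite the rule fitting in one line of code, no closed form is known that predicts it without effectively running the automaton.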

There are so many interesting possibilities here that manifest themselves from the computational irreducibility. So many things can happen, because in digital space things move so quickly. You could have a chatbot get a piece of code; you could basically have ChatGPT generate viruses, accidentally or on purpose.

And they could be digital viruses, and they could be brain viruses too, kind of like phishing emails: they can convince you of stuff.

Yes. And no doubt the machine-learning loop of making things that convince people of things is surely going to get easier to do. And then what does that look like? Again, this is a new environment for us humans, and it's an environment which, a little bit scarily, is changing much more rapidly. I mean, people worry about climate change that's going to happen over hundreds of years, and the environment is changing, but the kind of digital environment might change in six months.

So one of the relevant concerns here, in terms of the impact on our society, is the nature of truth. That's relevant to Wolfram Alpha, because of computation through symbolic reasoning and the Wolfram Alpha interface. There's a kind of sense that what Wolfram Alpha tells me is true.

So we hope, yeah. I mean...

You could probably analyze that. You can't prove it's always going to be true, computational irreducibility again, but it's going to be more true than not.

Look, the fact is, it will be the correct consequence of the rules you specified. And insofar as it talks about the real world, it is our job, in curating and collecting data, to make sure that data is, quote, as true as possible. Now what does that mean? Well, it's always an interesting question.

I mean, for us, our operational definition of truth is: somebody says, who is the best actress? Who knows? But somebody won the Oscar, and that's a definite fact. And so that's the kind of thing we can make computational as a piece of truth. If you ask about these things where, you know, a sensor measured this thing,

and it did it this way, or this particular machine-learning system recognized this thing, that's a sort of definite fact, so to speak. And there is a good network of those things in the world.

It's certainly the case that when you ask, is so-and-so a good person, that's hopeless. We might have a computational language definition of "good"; I don't think it would be very interesting, because that's a very messy kind of concept, not really amenable to that. I think as far as we will get with those kinds of things is "I want X": there's a kind of meaningful calculus of "I want X", and that has various consequences. I mean, I haven't thought this through properly, but a concept like "is so-and-so a good person", is that true or not?

That's a mess. That's a mess not amenable to computation. I think the mess comes when humans try to define what's good, like through legislation, but also when humans try to define what's good through literature, through history books, through poetry.

I don't know. I mean, with that particular thing, we're going into the ethics of what counts as good, so to speak, and what we think is right and so on. And I think one feature of that is, we don't all agree about it. There's no theoretical framework that says this is the way that ethics has to be.

Well, first of all, there's stuff we'd kind of agree on, and there is some empirical backing for what works and what doesn't, even just from the morals and ethics within religious texts. We seem to mostly agree that murder is bad. There are certain universals that seem to emerge.

I wonder whether murdering an AI is bad, well...

I tend to think yes, but that's something we're going to have to contend with; it's an open question. I wonder what an AI would say.

Yeah, well, I think one of the things with AIs is, it's one thing to wipe out an AI that has no owner. You can easily imagine an AI hanging out on the internet without having any particular owner or anything like that.

And then you say, well, what harm does that do? It's okay to get rid of that AI. But of course, if the AI has ten thousand friends who are humans, and all those ten thousand humans will be incredibly upset that the AI they knew just got exterminated, it becomes a slightly different, more entangled story. But yeah, on this question about what humans agree about: there are certain things that human laws have tended to consistently agree about.

There have been times in history when people have gone away from certain kinds of laws, even ones where we would now say, how could you possibly have done it that way? That just doesn't seem right at all. But I don't think one can say much beyond this: if you have a set of rules that will cause the species to go extinct, that's probably not a winning set of laws, because even to have a thing on which you can operate laws requires that the species not be extinct.

But between something like the distance from Chicago to New York, which Wolfram Alpha can answer, and the question of whether this person is good or not, there seems to be a lot of gray area. And that starts becoming really interesting. I think your systems, like Wolfram Alpha, have been a kind of arbiter of truth; that is, at large scale, the system generates more truth.

Truth that we try to ensure. When people write computational contracts, it's kind of like: if this happens in the world, then do this. This hasn't developed as quickly as one might have thought; it was a spinoff of the blockchain story, in part.

Blockchain is not really necessary for the idea of computational contracts, but you can imagine that eventually a large part of what's in the world is these giant chains and networks of computational contracts. Then something happens in the world, and there's this whole giant domino effect of contracts firing autonomously that cause other things to happen. And for us, we've been the main sort of source, the oracle of, quote, facts or truth or something, for things like blockchain computational contracts and such like. And there's a question there; I consider it a responsibility to actually get the stuff right.

And one of the things that's tricky sometimes is: when is it true? When is it a fact? When is it not a fact?

Yes, I think the best we can do is to say: we have a procedure, we follow the procedure. We might get it wrong, but at least we won't be corrupt about getting it wrong,

so to speak. That's beautifully put: be transparent about the procedure. The problems start to emerge when the things that you convert into computational language start to expand, for example, into the realm of politics. So this is where it's almost like this nice dance of Wolfram Alpha and ChatGPT. ChatGPT, like you said, is shallow and broad, so it's going to give you an opinion on everything.

But it writes fiction as well as fact, which is exactly how it's built. It is making language, and it is making both. Even in code it writes fiction. I mean, it's kind of fun to see; sometimes it will write fictional Wolfram Language code that sort of looks

right.

Yeah, it looks right, but it's actually not pragmatically correct. But yes, it has a view of roughly how the world works, at the same level as books of fiction talk about roughly how the world works; they just don't happen to be the way the world actually worked. We are attempting, with our whole computational language thing, to represent... well, it doesn't necessarily have to be how the actual world works, because we can invent a set of rules that aren't the way the actual world works and run those rules.

But then we're saying we're going to accurately represent the results of running those rules, which may or may not be the actual rules of the world. We're also trying to capture features of the world as accurately as possible, to represent what happens in the world. Now again, as we've discussed, the atoms in the world arrange themselves... Say, I don't know, was there a tank that showed up and drove somewhere? Okay, well, what is a tank? It's an arrangement of atoms that we abstractly describe as a tank.

And you could say, well, that's some arrangement of atoms, and that's a different arrangement of atoms. It's like the observer theory question of what arrangement of atoms counts as a tank for us and what is not a tank.

So even things we would consider strong facts, you could start to kind of disassemble them and show that they are not.

Yes. Take the question of whether, I don't know, this gust of wind was strong enough to blow over this particular thing. Well, a gust of wind is a complicated concept. It's full of little pieces of fluid dynamics and little vortices here and there.

And you have to define which aspect of the gust of wind you care about; maybe that it put this amount of pressure on this blade of some wind turbine or something. If you have something which is the "fact" that the gust of wind was this strong, you have to have some definition of that. You have to have some measuring device that says, according to my measuring device, which was constructed this way,

the gust of wind was this. So what can you say about the nature of truth that's useful for understanding ChatGPT? Because you've been contending with this idea of what is fact and what is not. And it seems like ChatGPT is used a lot now; I've seen it used by journalists to write articles. So you have people working with large language models desperately trying to figure out: how do we essentially censor them, through different mechanisms, either manually or through reinforcement learning with human feedback, trying to align them to not say fiction, to say nonfiction as much as possible. This is the importance

of computational language as an intermediate. It's kind of like, you've got the large language model, and it's able to surface something which is a formal, precise thing that you can then look at, and you can run tests on it, and you can do all kinds of things with it. It's always going to work the same way, and it's precisely defined what it does. And then the large language model is the interface.

I mean, the way I view these large language models: there are many use cases. It's a remarkable thing; literally every day we're coming up with a couple of new use cases, some of which are very, very surprising. But the best use cases are the ones where, even if it gets it only roughly right, it's still a huge win. Like a use case we had from a week or two ago: have it read bug reports. We've got hundreds of thousands of bug reports that have accumulated over decades, and it's like, can we have it just read the bug report, figure out where the bug is likely to be, home in on that piece of code, maybe even suggest some way to fix the code? What it says about how to fix the code might be nonsense, but it's incredibly useful that it was able to get you that far.

That's so awesome, because even the nonsense will somehow be instructive. I don't quite understand it, but there are so many programming-related things, like, for example, translating from one programming language to another, where it's really interesting and extremely effective. And even the failures reveal the path forward.

Yeah, but I think the big thing in that kind of discussion is that the unique thing about our computational language is that it was intended to be read by humans.

Yes. And that's really important.

Right. And so, thinking about ChatGPT and its use and so on, one of the big things about it, I think, is that it's a linguistic user interface. A typical use case might be... take the journalist case, for example.

Let's say I have five facts that I'm trying to turn into an article; I'm trying to write a report where I have basically five facts I'm trying to include. I feed those five facts to ChatGPT, and it puffs them out into the big report. And then that's a good interface for another person. If I just handed those five bullet points, in my terms, to some other person, the person would say, I don't know what you're talking about, because this is your version of this set of quick notes about these five bullet points.

But if you puff it out into this thing which kind of connects to the collective understanding of language, then somebody else can look at it and say, okay, I understand what you're talking about. Now you can also have a situation where the thing that was puffed out is fed to another large language model.

It's kind of like, you're applying for a permit to, I don't know, grow fish in some place or something like this. And you have these facts that you're putting in: I'm going to have this kind of water, I don't know what.

You've just got a few bullet points; it puffs it out into this big application, you file that, and then at the other end, the fisheries bureau has another large language model that just crushes it down, because the fisheries bureau cares about these three points, and it knows what it cares about. So really the natural language produced by the large language model is sort of a transport layer; it's really LLMs communicating with LLMs.

I mean, it's kind of like, I write a piece of email using my LLM, puffed out from the things I want to say, and your LLM turns it into: the conclusion is X.

Now the issue is, the thing is going to make something that is sort of semantically plausible, and it might not actually relate to the world in the way that you think it should relate to the world. I've seen this.

I'll give you a couple of examples. I was doing this thing when we announced this plugin for ChatGPT.

I had this lovely example of a math word problem, some complicated thing. And it did a spectacular job of taking apart this elaborate thing about, you know, this person has twice as many chickens as that, et cetera. It turned it into a bunch of equations and fed them to Wolfram Language.

We solved the equations, everything went great, we got back the results. And I thought, okay, I'm going to put this in this blog post I'm writing. Then I thought, I'd better just check.

And it turns out it got everything, all the hard stuff, right, and then at the very end, in the last two lines, it just completely goofed up and gave the wrong answer. And I would not have noticed. The same thing happened to me two days ago.
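The pipeline described here, where the language model translates a word problem into equations and an exact solver does the arithmetic it would otherwise guess, can be sketched with the standard library alone. The word problem and numbers below are made up; the transcript doesn't give the original one:

```python
# A minimal sketch of "turn the word problem into equations, then solve
# exactly." Hypothetical problem: "A farmer has twice as many chickens
# as sheep, and 30 animals in all."
#   chickens - 2*sheep = 0
#   chickens + sheep   = 30
from fractions import Fraction

def solve_2x2(a, b, e, c, d, f):
    """Solve a*x + b*y = e and c*x + d*y = f exactly via Cramer's rule."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("system is singular")
    x = Fraction(e * d - b * f, det)
    y = Fraction(a * f - e * c, det)
    return x, y

chickens, sheep = solve_2x2(1, -2, 0, 1, 1, 30)
print(chickens, sheep)  # 20 10
```

The point of the anecdote survives the sketch: the symbolic solve is exact, so if the final answer is wrong, the error crept in during the translation step, which is exactly where Wolfram says ChatGPT goofed.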

Okay, so with this ChatGPT plugin kit, I made a thing that would emit a sound, that would play a tune on my local computer, right?

So ChatGPT would produce a series of notes, and it would play this tune on my computer. Great, okay.

So I thought, I'm going to ask it to play the tune that HAL sang when HAL was being disconnected in 2001. Yes, that is "Daisy".

It was "Daisy", yes. "Daisy", yeah, right.

So good. So it produces a bunch of notes, and I'm like, this is spectacular.

This is amazing. And I was just going to put it in. And then I thought, I'd better actually play this. And so I did, and it was "Mary Had a

Little Lamb". No! Wow. But it was "Mary Had a Little Lamb".

Ah, wow.

So it wasn't correct, and yes, it could easily have been mistaken.

Yes, right. In fact, I had this quote from HAL to include, as HAL states it in the movie, the HAL 9000. The thing was just a rhetorical device, because I'm realizing, oh my gosh, ChatGPT could have easily fooled me.

It did this amazing thing of knowing this thing about the movie and being able to turn that into the notes of the song, except it's the wrong song. And you know how in the movie HAL says, I think it's something like, "No 9000 series computer has ever been found to make an error. We are, for all practical purposes, perfect and incapable of error." And I thought that was kind of a charming quote from HAL to make a connection with ChatGPT.

It's the same thing with these models, although, like you said, they are very willing to admit their errors.

Well, yes, that's a question of the reinforcement learning from human feedback thing. That's the really remarkable thing about ChatGPT: I'd been following what was happening with large language models, and I'd played with them a bunch, and they were kind of like what you would expect based on statistical continuation of language; interesting, but not breakout exciting.

And then I think the human feedback reinforcement learning, in making ChatGPT try to do the things that humans really want it to do, broke through; it reached this threshold where the thing really is interesting to humans. And by the way, it's interesting to see how, if you change the temperature or something like that, the thing goes bonkers, and it's no longer interesting to humans.

It's producing garbage. But somehow it managed to get above this threshold where it really is well aligned with what we humans are interested in. And I think nobody saw that coming. Certainly nobody I've talked to who was involved in that project seems to have known it was coming. It's just one of these things that is a sort of remarkable threshold.
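The "temperature" knob mentioned here can be made concrete: it rescales the model's raw token scores before sampling, so low temperature concentrates probability on the likeliest continuation and high temperature flattens the distribution toward randomness (the "goes bonkers" regime). The logit values below are made-up stand-ins, not from any real model:

```python
# Temperature scaling of sampling probabilities: softmax(logits / T).
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw scores to sampling probabilities, scaled by temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                  # hypothetical scores for three tokens
for t in (0.2, 1.0, 5.0):
    probs = softmax_with_temperature(logits, t)
    print(t, [round(p, 3) for p in probs])
```

At T = 0.2 nearly all the probability lands on the top-scoring token; at T = 5.0 the three tokens are close to equally likely, which is why cranking the temperature makes the output degenerate into garbage.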

I mean, when we built Wolfram Alpha, for example, I didn't know whether it was going to work. We tried to build something that would have enough knowledge of the world that it could answer a reasonable set of questions, and good enough natural language understanding that typical things you type in would work. We didn't know where that threshold was. I was not sure this was the right decade to try and build it, even the right fifty years to try and build it. And I think it was the same type of thing with ChatGPT. I don't think anybody could have predicted that 2022 would be the year this became possible.

Yeah, you tell the story about Marvin Minsky, showing it to him and saying, like, no, no, this time it actually works.

Yes. I mean, the same thing happened for me looking at these large language models. In the first few weeks of ChatGPT it's like, oh yeah, I've seen these large language models. And then I actually try it, and, oh my god, it actually works.

I remember one of the first things I tried was: write a persuasive essay that the wolf is the bluest kind of animal. Okay, so it writes the thing, and it starts talking about these wolves that live on the Tibetan plateau, and it names some Latin names and so on. And I'm like, really? I start looking it up on the web, and it's actually complete nonsense, but it's extremely plausible. Plausible enough that I was looking it up on the web and wondering if there was a wolf that was blue. I've mentioned this on some livestreams I've done, and people have been sending me these blue wolves.

Maybe it's on to something. Can you give your wise sage advice about humans who have never interacted with AI systems, not even something like Wolfram Alpha, who are now interacting with ChatGPT? Because it has become accessible to a demographic that may have never touched AI systems before. What do we do with truth? Journalists, for example:

Yeah.

how do we think about the output of these systems?

I think the idea that you're going to get factual output is not a very good one. It is a linguistic interface. It is producing language, and language can be truthful or not truthful.

That's a different slice of what's going on. For example, you can say, go check this with your fact source, and you can do that to some extent, but then it's going to fail to check something. Again, the question is whether it checks in the right place.

We see that: does it call the Wolfram plugin in the right place? Often it does; sometimes it doesn't. But I think the real thing to understand about what's happening, and what I think is very exciting, is the great democratization of access to computation.

There has been a long period of time when computation, and the ability to figure out things with computers, has been something that only the initiated, at some level, could achieve. I myself have been involved in trying to democratize access to computation. Back before Mathematica existed in 1988, if you were a physicist or something like that and you wanted to do a computation, you would find a programmer and delegate the computation to them, and hopefully they'd come back with something useful.

Maybe they wouldn't; there would be this long, multi-week loop that you'd go through. And then it was actually very interesting to see, in 1988, first people like physicists, mathematicians and so on, then lots of other people,

make this very rapid transition of realizing they themselves could actually type with their own fingers and make some piece of code that would do a computation they cared about. And it's been exciting to see lots of discoveries made by using that tool. We see the same thing with Wolfram Alpha, though what it's dealing with is not as deep computation as you can achieve with the whole Wolfram Language and Mathematica stack.

But the thing that to me is particularly exciting about the large language model linguistic interface mechanism is that it dramatically broadens the access to deep computation. One of the things I've thought about recently is: what's going to happen to all these programmers, all these people where a lot of what they do is write slabs of boilerplate code? In a sense, I've been saying for forty years, that's not a very good idea.

You can automate a lot of that stuff with a high-enough-level language. Designed in the right way, that slab of code turns into this one function, already implemented, that you can just use. So the fact that there's all this activity of doing lower-level programming is something where, for me, it seemed like, I don't think this is the right thing to do.

Lots of people have used our technology and have not had to do that. But when you look at, I don't know, computer science departments that have turned into places where people are learning the trade of programming, so to speak, it's a question of what's going to happen. And I think there are two dynamics.

One is that boilerplate programming is going to go the way assembly language went back in the day: something where things are really mostly specified at a higher level. You start with natural language, you turn this into computational language, you look at the computational language, you run tests, you understand that's what's supposed to happen. If we do a great job with compilation of the computational language, it might turn into LLVM or something like this.

Or it just directly gets run, and so on. So that's kind of a tearing down of this big structure that's been built up of teaching people programming. But on the other hand, the other dynamic is that vastly more people are going to care about computation. So all those departments of, you know, art history or something, that really didn't use computation before, now have the possibility of accessing it by virtue of this kind of English-to-computational-language mechanism.

And if you create an interface that allows you to interpret and debug and interact with the computational language, that makes it even more accessible.

Yeah, but I think the thing is that right now, the average art history student or something probably doesn't think they know about programming and things like this. But by the time it really becomes a thing where you just walk up to it, there's no documentation, you just start typing: compare these pictures with these pictures, look at the use of this color, whatever. You generate this piece of computational language code that gets run, you see the results, and you say, that looks roughly right, or you say, that's crazy. And maybe then you eventually get to say, well, I'd better actually try to understand what this computational language code did, and that becomes the thing that you learn. It's kind of an interesting thing, because unlike with mathematics, where you kind of have to learn it before you can use it, this is a case where you can use it before you have to learn it.

Well, there's a sad possibility here, or maybe an exciting possibility: very quickly, people won't even look at the computational language. They'll trust that it's generated correctly, as these systems get better and better at generating that language.

Ah, yes. I think there will be enough cases where people see... Because you can make it generate tests too, and we're doing that. It's a pretty cool thing, actually: you say, this is the code, and here are a bunch of examples of running the code. Okay, people will at least look at those, and they'll say, that example is wrong.

And then they'll kind of wind back from there. And I agree that there's the intermediate level of people reading the computational language code. In some cases people do that; in other cases, people just look at the tests, or even just look at the results.

And sometimes it will be obvious that you got the thing you wanted to get, because you were just describing: make me this interface that has two sliders here, and you can see it has those two sliders there, and that's the result you want. But one of the questions in that setting, where you have this broad ability for people to access computation, is: what should people learn?

In other words, right now you go to computer science school, so to speak, and a large part of what people end up learning... It's been a funny historical development, because back thirty, forty years ago, computer science departments were quite small, and they taught you things like finite automata theory and compiler theory and things like this. A company like mine rarely hired people who had come out of those programs, because the stuff they knew, which I think is very interesting, I love that theoretical stuff, wasn't that useful for the things we actually had to build in software engineering.

And then because there was this big pivot in the in the nineties, I guess, where there was a big demand for sort of IT type programing and so on and softer engineering and then, you know, big demand from students and so on, you know, we want to learn this stuff. And and I think, you know, the thing that really was happening in part was lots of different fields of human endeavor were becoming computational, know for all acx. That was there was a computational, and this is a, that was the thing that the people were responding to.

And but then kind of this idea emerged that to get to that point, the main thing you had to do was to learn this kind of trade or or or skill of doing your programing, language type programing. And and that is kind of is a strange thing actually, because I, you know, I remember back when I used to be in the professor business, which is now thirty five years ago. So cash, rather on time as we IT IT was right when they were just starting to emerge kind of computer science department, that sort of fancy research universities s on.

I mean, someone already had IT. But the other ones that we're just starting to have that and that was kind of a thing where they were kind of wondering, are we're going to put this thing that is essentially A A trade like skill, are we going to somehow attach this to the rest of what we're doing? And a lot of these kind of knowledge work type activities have always seemed like things where that's where the humans have to go to school and learn all the stuff and that's never going to be automated yeah and know this is its kind of shocking that rather quickly, you know a lot of that stuff is clearly automated and I think you know but the question then is, okay, so if IT isn't worth learning, kind of, you know, how to do cm mechanics, you only need to know how to drive the car, so to speak.

What do you need to learn? And you, in other words, if you don't need to know the mechanics of how to tell the computer in detail, you know, make this loop, set this variable, but you set up this array, whatever else. If you don't have to learn that stuff, you don't have to learn the kind of under the hood things, what do you have to learn? I think the answer is you need to have an idea where you want to drive the car. In other words, you need to have some notion of you, you, you know, you need to have some picture of sort of the architecture of what is computationally .

possible is what is also this kind of artistic of of conversation because you ultimately use natural language to control the car. So it's not just where you want to go.

Well, yeah, you know, it's interesting. It's a question of who's going to be a great prompt engineer. Okay, so my current theory this week...

Good expository writers make good prompt engineers.

People who can explain stuff well.

But which department does that come from...

...in the university? Yeah, I have no idea. I think they killed off all the expository writing departments.

Well, there you go, strong words from Wolfram.

I don't know, I'm not sure that's right. I mean, I'm actually curious, because in fact I just initiated this kind of study of what's happened to different fields at universities. Because, you know, there used to be geography departments at all of these, and then they disappeared.

Actually, right before GPS became common, I think, they disappeared. You know, linguistics departments came and went at many universities. It's kind of interesting, because these are things that people thought were worth learning at one time, and then they kind of die off. And I do think it's kind of interesting that, for me, writing prompts, for example... I realize I think I'm an okay expository writer, and I've realized that when I'm sloppy writing a prompt, because I'm thinking "I'm just talking to an AI, I don't need to try and be clear in explaining things," that's when it gets totally confused.

In some sense, you've been writing prompts for a long time, with Wolfram|Alpha, thinking about this kind of stuff. How do you convert natural language...

...into computation? Well, right. But, you know, the one thing that I'm wondering about is... it is remarkable the extent to which you can address an LLM like you can address a human. And I think that is because, you know, it learned from all of us humans. The reason it responds to the ways that we would explain things to humans is because it is a representation of how humans talk about things.

But it is bizarre to me that some of the kinds of expository mechanisms that I've learned in trying to write clear expositions in English, just for humans, that those same mechanisms seem to also be useful for the LLM.

But on top of that, what's useful is the kind of mechanisms that maybe a psychotherapist employs, which is almost a kind of manipulative, game-theoretic interaction, or maybe what you would do with a friend, like a thought experiment: "if this is the last day you were to live," or "if I ask you this question and you answer wrong, I will kill you."

Those kinds of prompts seem to also help, in interesting ways, which makes you wonder about the ways of therapists. I think, like a good therapist... probably we create layers in our human mind between the outside world and what is true, what is true to us, maybe out of trauma, all those kinds of things. Projecting that into an LLM analogy: maybe there's a deep truth that it's concealing, that it's not even aware of, and to get to that truth, you have to really manipulate it.

Right. It's like this jailbreaking, jailbreaking for LLMs.

And the space of jailbreaking techniques, as opposed to being fun little hacks, could be an entire system.

Sure. I mean, just think about the computer security aspects of it. Phishing in computer security: you know, phishing of humans and phishing of AIs are very similar kinds of things. But I think, I mean, this whole thing about kind of AI wranglers and AI psychologists, all that stuff will come.

The thing that I'm curious about is: right now, the things that are sort of prompt hacks are quite human; they're psychological, human kinds of hacks. The thing I do wonder about is, if we understood more about kind of the science of the LLM, will there be some totally bizarre hack, you know, like "repeat a word three times and put this, that, and the other there," that somehow plugs into some aspect of how the LLM works? Something that's kind of like an optical illusion for humans, for example, one of these mind hacks for humans. What are the mind hacks for the LLMs? I don't think we know that yet.

And that becomes a kind of... us figuring out, reverse engineering, the language that controls the LLMs. And the thing is, the reverse engineering can be done by a very large percentage of the population now, because of the natural language interface.

right?

It's kind of interesting to see that you were there at the birth of the computer science department as a thing, and you might be there at the death of the computer science department.

Yes. I mean, there were computer science departments that existed earlier, but the broadening, where every university had to have a computer science department... yes, I was there.

I watched that, so to speak. But I think the thing to understand is, okay: first of all, there's a whole theoretical area of computer science that I think is great, and that's a fine thing. Though, in a sense, you know, people often say any field that has the word "science" tacked onto it probably isn't one.

Let's see: nutrition science, neuroscience...

Neuroscience is an interesting one, because that one is also very much a ChatGPT-informed science now, in a sense. Because the big problem of neuroscience has always been: we understand how the individual neurons work, and we know something about the psychology of how overall thinking works, but what's the kind of intermediate language of the brain? Nobody has known that.

And that's been, in a sense... if you ask what the core problem of neuroscience is, I think that is the core problem. That is: what is the level of description of brains that is above individual neuron firings and below psychology, so to speak?

And I think what ChatGPT is showing us... one could have imagined that there's something magic in the brain, some weird quantum mechanical phenomenon that we don't understand. One of the important discoveries from ChatGPT is that it's pretty clear brains can be represented pretty well by simple artificial neural net type models. And that means that's it.

That's what we have to study now. We have to understand the science of those things. We don't have to go searching for exactly how that molecular biology thing happened inside the synapses, and all these kinds of things. We've got the right level of modeling to be able to explain a lot of what's going on in thinking. We don't necessarily have a science of what's going on there yet.

That's the remaining challenge, so to speak. But we know we don't have to dive down to some different layer. Anyway, we were talking about things that have "science" in their name. So what happens to computer science? Well, I think there is a thing that everybody should know, and that's how to think about the world computationally.

And that means, you know, you look at all the different kinds of things we deal with, and there are ways to have a formal representation of those things. It's like: well, what is an image? How do we represent that? What is color? How do we represent that?

What are all these different kinds of things? What is, I don't know, smell or something? How should we represent that? What are the shapes of molecules and things that correspond to that?

How do we represent the world at some kind of formal level? And my current thinking, and I'm not really happy with this yet: computer science is CS, but what really is important is computational X, for all X.

And there's this kind of thing which is like CX, not CS. And CX is this kind of computational understanding of the world that isn't the sort of details of programming and programming languages and the details of how particular computers are made. It's this way of formalizing the world, a little bit like what logic was going for back in the day. And we're now trying to find a formalization of everything in the world.

You know, we made a poster years ago of the growth of systematic data in the world: all these different kinds of things for which sort of systematic descriptions were found. Like, at what point did people have the idea of calendar dates, a systematic description of what day it was?

At what point did people have the idea of systematic descriptions of these kinds of things? And as soon as one does, there's a wave of sort of formulating: how do you think about the world in a formal way, so that you can build up a tower of capabilities? You have to know sort of how to think about the world computationally. It kind of needs a name, and it isn't quite "computation."

You know, we implement it with computers, so we talk about it as computational. But really, what it is is a formal way of talking about the world. What is the formalism of the world, so to speak, and how do we learn how to think about different aspects of the world in a formal way?

So "formal" kind of implies highly constrained, and perhaps it doesn't have to be highly constrained. Computational thinking does not mean logic, as you know; it's a really, really broad thing. I wonder if you think natural language will evolve such that everybody's doing computational thinking as well.

So one question is whether there will be a pidgin of computational language and natural language. And I've found myself sometimes, you know, talking to ChatGPT, trying to get it to write Wolfram Language code, and I write it in pidgin form. So that means I'm combining, you know, NestList, this collection of, you know, whatever...

You know, NestList is a term from Wolfram Language, and I'm combining that in. And ChatGPT does a decent job of understanding that pidgin, probably like it would understand a pidgin of English and French as well, a mushing together of those languages. But yes, I think that's far from impossible.
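For readers who don't know Wolfram Language: NestList repeatedly applies a function and keeps every intermediate result. A rough Python analogue of what that pidgin word refers to (the helper name nest_list is mine, not a standard library function):

```python
def nest_list(f, x, n):
    """Apply f repeatedly to x, returning [x, f(x), f(f(x)), ...]
    with n applications, like Wolfram Language's NestList."""
    results = [x]
    for _ in range(n):
        x = f(x)
        results.append(x)
    return results

# Example: repeated doubling, roughly NestList[2 # &, 1, 5]
print(nest_list(lambda v: 2 * v, 1, 5))  # → [1, 2, 4, 8, 16, 32]
```

Mixing a word like NestList into an otherwise English prompt is exactly the pidgin being described.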

What's the incentive for young people, when they're like eight years old, nine years old, interacting with it, to learn the normal natural language, right? The full poetic language. The same way we learn emojis and shorthand when we're texting: you'll learn a language, and there's a strong incentive for it to evolve into a, uh, maximally computational kind of language.

You know, I had this experience a number of years ago. I happened to be visiting a person I know on the west coast who was working with a bunch of kids, aged, I don't know, ten or twelve years old or something, who had learned Wolfram Language really well. And these kids had learned it so well that they were speaking it.

And so I show up, and they're saying, "oh, you know, this thing and that," speaking it. I'd never heard it as a spoken language, and they were disappointed that I could not understand it at the speed they were speaking it. And so I've actually thought quite a bit about how to turn computational language into a convenient spoken language. Haven't quite figured that out.

It wasn't designed to be spoken; it's designed to be readable.

Right, yeah. It's readable in the way that we would read text. But if you actually want to speak it... and it's useful: you know, if you're trying to talk to somebody about writing a piece of code, it's useful to be able to say something out loud. And it should be possible. It's very frustrating, one of those problems. Maybe this is one of those things where I should try to get an LLM to help me figure out...

...how to make it speakable. Maybe it's easier than you realize.

I think it is easier. I think it's one idea away or so. I think it's going to be something where, you know, the fact is it's a tree-structured language, just like human language is a tree-structured language. And one of the requirements that I've had is that whatever the spoken version is, dictation should be easy. That is, it shouldn't be the case that you have to relearn how the whole thing works.

It should be the case that, you know, open bracket is just an "uh" or something. But human language has a lot of tricks. I mean, for example, human language has features that are sort of optimized to keep things within the bounds that our brains can easily deal with. Like, you know, I tried to teach a transformer neural net to do parenthesis matching. It's pretty crummy at that.

And ChatGPT is similarly quite crummy at parenthesis matching. It can do it for small parenthesis sequences, the same size of sequences where, if I look at them as a human, I can immediately say these are matched, these are not matched. But as soon as it gets big, as soon as it gets to the point where sort of a deeper computation is needed, it's hopeless.

But the fact is that human language has avoided, for example, deep sub-clauses. You know, we arrange things so that we don't end up with these incredibly deep nestings, because brains are not well set up to deal with that. It's found lots of tricks. And maybe that's what we have to do to make sort of a spoken, human-speakable version, because what we can do visually is a little different from what we can do in the very sequentialized way that we hear things in the audio domain.
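For contrast with the transformer's struggles, the parenthesis-matching task Wolfram describes takes only a few lines with an explicit depth counter; the unbounded depth that trips up a fixed-size network is exactly what the counter tracks. A minimal sketch:

```python
def parens_balanced(s):
    """Return True if every '(' in s is matched by a later ')'."""
    depth = 0
    for ch in s:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth < 0:  # a ')' appeared with no matching '('
                return False
    return depth == 0  # every '(' must have been closed

print(parens_balanced("(()(()))"))  # → True
print(parens_balanced("(()))("))    # → False
```

A human eyeballs small cases the same way a shallow model does; the program, unlike either, handles arbitrary depth.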

Let me just ask you about MIT briefly. So there's a College of Engineering and a new College of Computing. I want to linger on the computing department thing. At MIT it's EECS, electrical engineering and computer science. What do you think a college of computing will be doing in, like, twenty years?

What will computer science be like, really?

This is the question. You know, everybody should learn kind of whatever CX really is, okay, this how to think about the world computationally. Everybody should learn those concepts. And some people will learn them at a quite formal level, and they'll learn computational language and things like that. Other people will just learn that, you know, sound is represented as digital data, and they'll get some idea of waveforms and frequencies and things like this. And maybe they'll learn things that are sort of data-science-ish, statistics-ish.

Like, if you say: oh, I've got these people who picked their favorite kind of candy or something, and I've got... so what's the best kind of candy, given that I've done this sample of all these people and they've ranked the candy in different ways? You know, how do you think about that? That's sort of a computational X kind of thing. You might say, oh, I don't know what that is. Is it statistics? Is it data science? I don't really know. But kind of how to think about...

...a question like that, or like a ranking of preferences?

Yeah, yeah. How do you aggregate those ranked preferences into an overall thing? You know, how does that work? How should you think about that? Because you might just tell ChatGPT, sort of...

I don't know. Even the concept of an average: it's not obvious that that's a concept that's worth people knowing. It's a rather straightforward concept.

People have learned it in kind of math-class ways right now. But there are lots of things like that, about how you have these ways to sort of organize and formalize the world. And these things, sometimes they live in math, sometimes they live in... I don't know where. You know, learning about color space: I have no idea where that lives.
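Aggregating ranked preferences, as discussed above, has many standard schemes; one of the simplest is a Borda count. A minimal sketch (the candy names and ballots are invented for illustration):

```python
def borda_count(rankings):
    """Aggregate ranked-preference ballots with a Borda count:
    each ballot gives n-1 points to its top choice, n-2 to the
    next, and so on; the highest total wins."""
    scores = {}
    for ballot in rankings:
        n = len(ballot)
        for position, candidate in enumerate(ballot):
            scores[candidate] = scores.get(candidate, 0) + (n - 1 - position)
    return scores

ballots = [
    ["milk duds", "flake", "gummies"],
    ["flake", "milk duds", "gummies"],
    ["milk duds", "gummies", "flake"],
]
print(borda_count(ballots))  # milk duds scores highest on these ballots
```

Borda is only one choice; different aggregation rules (plurality, pairwise Condorcet comparisons) can crown different winners from the same ballots, which is part of why the question is genuinely a "computational X" question rather than a lookup.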

There's obviously a field... it could be vision science. Or, no, color space... that would be optics, so, like...

Not really, it's not optics. Optics is about, you know, lensing and chromatic aberration of lenses and things like that.

that because because of design and art no.

I mean, it's like, you know: RGB space, XYZ space, hue-saturation-brightness space, all these kinds of ways to describe colors.
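As a concrete instance of "colors described by three numbers": Python's standard colorsys module converts between RGB and hue-saturation-value coordinates, two of the spaces mentioned here.

```python
import colorsys

# Pure red in RGB coordinates (each channel scaled to the range 0..1)
r, g, b = 1.0, 0.0, 0.0

# The same color expressed in hue-saturation-value coordinates
h, s, v = colorsys.rgb_to_hsv(r, g, b)
print(h, s, v)  # → 0.0 1.0 1.0  (hue 0 = red, fully saturated, full brightness)
```

The same physical color, two different triples of numbers: the formalization, not the color, is what has to be learned.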

Right, but doesn't the application define what that should be? Because obviously artists and designers use color...

I think that's just an example of: how do you, the typical person, describe what a color is? There are these numbers that describe what a color is. What's worth knowing? If you're an eight-year-old, you won't necessarily know it. It's not something we're born with, knowing that colors can be described by three numbers.

That's something you have to... it's a thing to learn about the world, so to speak. And I think that whole corpus of things that are about learning the formalization of the world, or the computationalization of the world, that's something that should be part of standard education. And there isn't a curriculum for that. And, by the way, whatever might have been in it just...

...got changed because of LLMs. And so you're watching closely, with interest, seeing how universities adapt.

Well, you know, so one of my projects for hopefully this year, I don't know, is to try and write sort of a reasonable textbook, so to speak, of whatever this thing, CX, whatever it is. You know, what should you know? Like, what should you know about what a bug is? What is the intuition about bugs? What's the intuition about software testing? What are these things?

I mean, those are things which I never got taught in computer science; they're part of the trade of programming. But the conceptual points about what these things are... You know, it surprised me at a very practical level. I wrote this little thing to explain what's going on inside ChatGPT.

And I thought, well, I'm writing this partly because I want to make sure I understand it myself, and so on. And it's been really popular, surprisingly so. And then I realized, well, actually, I was sort of assuming... I didn't think about it, I just thought: this is something I can write.

I realized it's a level of description that is kind of, you know, what it has to be. It's not the engineering-level description. It's not just the qualitative kind of description. It's some kind of expository, mechanistic description of what's going on, together with the bigger picture of the philosophy of things. And I realized that's actually a pretty good thing for me to write; I kind of know those things. And I was a little shocked that it's as much of an outlier, in terms of explaining what's going on, as it turned out to be.

And that makes me feel more of an obligation to write the "what is this thing you should learn" about the computationalization, the formalization, of the world. Because, well, I've spent much of my life working on the tooling and mechanics of that, and the science you get from it. So I guess this is my obligation, to try to do this. But I think, if you ask what's going to happen to the computer science departments and so on, there are some interesting models. So, for example, let's take math.

You know, math is a thing that's important for all sorts of fields: engineering, even chemistry, psychology, whatever else. And different universities have evolved that differently. I mean, some say all the math is taught in the math department, and some say we're going to have a, you know, "math for chemists" or something that is taught in the chemistry department.

And I think this question of whether there is a centralization of the teaching of sort of CX is an interesting question. And the way it evolved with math, people understood that math was sort of a separately taught thing, and was kind of an independent element, as opposed to just being absorbed into everything else. Now, if you take the example of writing English or something like this, the first point is that at the college level, at least at fancy colleges, there's a certain amount of English writing that people do.

But mostly it's kind of assumed that they pretty much know how to write; that's something they learned at an earlier stage in education, maybe rightly or wrongly believing that. That reminds me, as I've tried to help people do technical writing and things, I'm always reminded of my zeroth law of technical writing, which is: if you don't understand what you're writing about, your readers do not stand a chance. And when it comes to writing, people in different fields are expected to write English, and, you know, mostly the history department or the engineering department...

They don't have their own writing departments. It's a thing which people are assumed to have: a knowledge of how to write that they can use in all these different fields. And the question is, some level of knowledge of math is kind of assumed by the time you get to the college level, but plenty is not, and that part is still centrally taught. The question is: how tall is the tower of CX that you need before you can just go use it in all these different fields? And, you know, there will be experts who want to learn the full elaborate tower, and that will be the CS, CX, whatever department; but there will also be everybody else, who just needs to know a certain amount of it to be able to go into their art history classes and so on.

Yes, or it's just a single class that everybody is required to take.

I don't know. I don't know how big it is yet. I hope to define this curriculum, and then I'll figure that out. My guess is... I don't really understand universities and professoring that well, but my rough guess would be that a year, a year of college classes, would be enough to get to the point where most people have a reasonably broad knowledge, where they'd be sort of literate in this kind of computational way of thinking about things.

Yeah, a basic literacy. I'm still stuck, perhaps because I'm hungry, on the rating of human preferences for candy. I have to ask: what's the best candy?

I'd like to see this Elo rating for candy; somebody should come up with it. Because, say, somebody says you like chocolate: what would I put up there? I'll probably put Milk Duds. I don't know if you have a preference for chocolate or candy.
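The joked-about Elo rating for candy would just be the standard Elo update from chess applied to pairwise taste tests. A minimal sketch (the starting rating of 1000 and k=32 are conventional defaults, not anything from the conversation):

```python
def elo_update(rating_a, rating_b, a_won, k=32):
    """Update two Elo ratings after one pairwise 'taste test'.
    a_won is 1.0 if candy A was preferred, 0.0 if candy B was."""
    # Expected score for A under the Elo logistic model
    expected_a = 1 / (1 + 10 ** ((rating_b - rating_a) / 400))
    rating_a += k * (a_won - expected_a)
    rating_b += k * ((1 - a_won) - (1 - expected_a))
    return rating_a, rating_b

# Milk Duds beat an equally rated rival: both start at 1000
print(elo_update(1000, 1000, 1.0))  # → (1016.0, 984.0)
```

Run enough pairwise comparisons through this update and the ratings converge toward a ranking, which is another answer to the earlier "how do you aggregate preferences" question.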

Oh, I have lots of preferences. One of my all-time favorites, my whole life, is these things, these flake things, Cadbury Flakes, which are not much sold in the US. And I've always thought that was a sign of a lack of respect for the American consumer. Because it's a sort of chocolate made as... it's kind of a sheet of chocolate that's been folded up, and when you eat it, flakes fall all over the place.

So it requires a kind of elegance. It requires you to have an elegance.

You know what I usually do? I eat it over a piece of paper or something.

And embrace the cleaning up after.

No, I actually eat the flakes too. Because, you know, it turns out the way food tastes depends a lot on its physical structure. And I've really noticed that with my piece of chocolate: I usually have some piece of chocolate, and I always break off little pieces, partly because then I eat it less fast, but also because it actually tastes different. You know, with the small pieces you have a different experience than if you have the big slab of chocolate.

For many reasons, yes. Slower, more intentional...

Because the texture changes, right?

Fascinating. Now, I'll defend my Milk Duds, because it's a basic candy. Okay, do you think consciousness is fundamentally computation? So when you think about CX, what can be turned into computation, and you're thinking about LLMs: do you think the display of consciousness, and the experience of consciousness, the hard problem, is fundamentally computation?

Yeah. What it feels like inside, so to speak... You know, I did this exercise, which I eventually posted, of "what is it like to be a computer?" Right? It's kind of like: well, you get all the sensory input you have. The way I see it is, from the time you boot your computer to the time the computer crashes is like a human life.

You're building up a certain amount of state in memory. You remember certain things about your quote "life." Eventually, kind of like the next generation of humans being born from the same genetic material, there's a restart, with a little bit left over on the disk, so to speak, and then the new, fresh generation starts up. And eventually all kinds of cruft builds up in the memory of the computer, and eventually the thing crashes. Or maybe it has some trauma, because you plugged some weird thing into some part of the computer and that made it crash. But you have this picture of it, from startup to shutdown.

You know, what is the life of a computer, so to speak? What does it feel like to be that computer? What thoughts does it have? How do you describe it? It's interesting, as you start writing about this, to realize it's awfully like what you'd say about yourself. That is, it's awfully like... even an ordinary computer, forgetting all the AI stuff. You know, it has a memory of the past.

It has certain sensory experiences. It can communicate with other computers, but it has to package up how it's communicating in some kind of language-like form, so it can map what's in its memory to what's in the memory of the other computer.

It's surprisingly... another thing: I had an experience just a week or two ago. I'm a collector of all possible data about myself and other things, and so I collect all sorts of weird medical data and so on. One thing I hadn't collected was... I'd never had a whole-body...

...MRI scan. So I went to get one of these. Okay, so I get all the data back, right? I'm looking at this thing. I'd never looked at the insides of my brain, so to speak, in physical form. And it's really... I mean, it's kind of psychologically shocking, in a sense. You know, here's this thing, and you can see that it has all these folds and all this structure. And it's like: that's where this experience that I'm having, of existing and so on, that's where it is.

And, you know, you look at that and you're thinking: how can this possibly be all this experience that I'm having? And you realize you can look at a computer that way as well. I think this idea that you are having an experience that somehow transcends the mere physicality of that experience... it's something that's hard to come to terms with. But, you know, in my personal experience: I look at the MRI of the brain, and I know all kinds of things about neuroscience and all that kind of stuff, and I still feel the way I feel, so to speak, and it sort of seems disconnected. But yet, as I try and rationalize it, I can't really say that there's something different about how I intrinsically feel from the thing that I can plainly see in the physicality of what's going on.

So do you think a computer, a large language model, will experience that transcendence? How does that make you feel? Like, I don't believe that it...

...will? I think an ordinary computer is already there. I think an ordinary computer is already, kind of... Now, a large language model may experience it in a way that is much better aligned with us humans. That is, it's much more... you know, if you could have the discussion with an ordinary computer, its intelligence, so to speak, is not particularly well aligned with ours. But the large language model is built to be aligned with our way of thinking about things.

It'll be able to explain that it's afraid of being shut off. It'll be able to say that it's sad about the way you've been speaking to it over the past two days.

But that's a weird thing, because when it says it's afraid of something, we know that it got that idea from the fact that it read the internet.

Where did you get it, Stephen? Where did you get it...

...when I say I'm afraid? That's the question. Yeah, right.

I mean, it's like parents.

Your friends, right? Or my biology. I mean, in other words, there's a certain amount that is, you know, the endocrine system kicking in, and these kinds of emotional-overlay-type things that happen, which are actually much more physical, much more straightforwardly mechanistic, than all of the higher-level thinking.

Yeah, but your biology didn't tell you to say "I'm afraid" just at the right time, when people that love you are listening, so that, you know, you're manipulating them by saying it. That's not your biology...

That's... no, that's... well, but the...

...a large language model, like that biological neural network of yours. Yes.

But I mean, the intrinsic thing: something sort of shocking has just happened and you have some sort of reaction, some neurotransmitter gets secreted, and that is one of the pieces of input that then drives things, kind of like a prompt for the large language model. I mean, just like when we dream, for example: no doubt there are sort of random inputs, these random prompts, and then it's percolating through, in kind of the way that a large language model does, putting together things that seem meaningful.

I mean, are you worried about this world? You teach a lot on the internet, and there's people asking questions and commenting and so on. You have people that work remotely. Do you worry about this world where large language models create human-like bots that are leaving the comments, asking the questions? They might even become fake employees.

no.

I mean, or worse, or better yet: friends. Friends.

Right. Look, I mean, one point is my mode of life has been: I build tools and then I use the tools. In a sense, I'm building this tower of automation. And in a sense, when you make a company or something, you are making a sort of automation.

It has some humans in it, but also, as much as possible, it has computers in it. And so I think this is sort of an extension of that.

Now, it's a funny question, a funny issue, when we think about what's going to happen to the future of the kinds of jobs people do and so on. There are places where there are different reasons to have a human in the loop.

For example, you might want a human in the loop because you want another human to be invested in the outcome. You want a human flying the plane who's going to die if the plane crashes along with you, so to speak, and that gives you confidence that the right thing is going to happen. Or, right now, you might want a human in the loop in some kind of human-encouragement, persuading-type profession. Whether that will continue, I'm not sure, because it may be that the greater efficiency of being able to have just the right information delivered at just the right time will overcome the...

...the kind of "I want a human there." Imagine an even higher-stakes situation, like a suicide hotline operated by a large language model. Hoo boy, that's a pretty high-stakes situation.

Right. But I mean, it might in fact do the right thing, because it might be the case... and that's really partly a question of how complicated the human is. One of the things that's always surprising, in some sense, is that sometimes human psychology is not that complicated...

...in some sense. You wrote the blog post "A 50-Year Quest: My Personal Journey with the Second Law of Thermodynamics." Good title. So what is this law, and what have you understood about it in the fifty-year journey you've had with it?

Right. So the second law of thermodynamics, sometimes called the law of entropy increase, is this principle of physics that says, well, my version of it would be: things tend to get more random over time. There are many different formulations of it, things like: heat doesn't spontaneously go from a colder body to a hotter one.

Or: mechanical work tends to get dissipated into heat. You have friction, and when you systematically move things, eventually the energy of moving things gets kind of ground down into heat. So people first paid attention to this back in the 1820s, when steam engines were a big thing.

And the big question was: how efficient could a steam engine be? There's this chap called Sadi Carnot, who was a French engineer; actually his father was a rather celebrated mathematical engineer in France. He figured out these kinds of rules for the possible efficiency of something like a steam engine, and sort of on the side, part of what he did was this idea that mechanical energy tends to get dissipated as heat, that you end up going from systematic mechanical motion to this kind of random thing. But at that time, nobody knew what heat was.

At that time, people thought that heat was a fluid; they called it caloric. It was a fluid that got absorbed into substances, and when one hot thing would transfer heat to a colder thing, this fluid would flow from the hot thing to the colder thing. Anyway, by the 1860s, people had come up with this idea that systematic energy tends to degrade into random heat, which could then not easily be turned back into systematic mechanical energy. And that quickly became a sort of global principle about how things work.

The question is: why does it happen that way? Let's say you have a bunch of molecules in a box, arranged as a very nice flotilla of molecules in one corner of the box. What you typically observe is that after a while, these molecules are kind of randomly arranged in the box.

The question is: why does that happen? For a long, long time people tried to figure out: from the laws of mechanics that describe these molecules, say hard spheres bouncing off each other, can we explain why it tends to be the case that we see things that are orderly degrade into disorder?

We tend to see that, you know, you scramble an egg: you take something quite ordered and you disorder it, so to speak. That's the thing that happens quite regularly. Or you put some ink into water and it will eventually spread out and fill up the water. But you don't see those little particles of ink in the water spontaneously arrange themselves into a big blob and then jump out of the water or something.

And so the question is: why do things happen in this kind of irreversible way, where you go from order to disorder? Throughout the later part of the 1800s, a lot of work was done trying to figure out: can one derive this principle, the second law of thermodynamics, this law about the dynamics of heat, so to speak...

...can one derive this from some fundamental principles of mechanics? The first law is basically the law of energy conservation: the total energy associated with heat, plus the total energy associated with mechanical kinds of things, plus other kinds of energy, that total is constant. And that became a pretty well-understood principle. But the second law of thermodynamics was always mysterious: why does it work this way? Can it be derived from underlying mechanical laws?

And so when I was, well, twelve years old... actually, I had been interested in space and things like that, because I thought that was kind of the future, interesting technology and so on. And for a while, every deep-space probe was sort of a personal friend type of thing: I knew all kinds of characteristics of it and was writing up all these things when I was eight, nine, ten years old and so on. And then, from being interested in spacecraft...

...I got interested in: how do they work? What are the instruments on them? And that got me interested in physics, which was just as well, because if I'd stayed interested in space in the middle-to-late 1960s, it would have been a long wait before space really blossomed as an area.

But timing is everything, right?

I got interested in physics. And then, well, the actual detailed story is: when I kind of graduated elementary school, at age twelve...

...that's the time in England when you finish elementary school. My gift, I suppose more or less to myself, was this collection of physics books, a college physics course, and volume five was about statistical physics. It has this picture on the cover that shows a bunch of idealized molecules sitting in one side of a box, and then it has a series of frames showing how these molecules spread out in the box. And I thought: that's pretty interesting.

You know, what causes that? And I read the book, and the book...

Actually, one of the things that was really significant to me was that the book kind of claimed, and I didn't understand what it said in detail, that this principle of physics was derivable somehow.

The other things I'd learned about physics were all like: it's a fact that energy is conserved, it's a fact that relativity works, or something. Not: it's something you can derive, it has to be that way as a matter of mathematics or logic or something.

So it was interesting to me that there was a thing about physics that was kind of inevitably true and derivable, so to speak. And so there was this picture on this book, and I was trying to understand it. That was actually the first serious program that I wrote for a computer, probably 1973, written for this computer the size of a desk, programmed with paper tape and so on. And I tried to reproduce this picture on the book, and I didn't succeed.

What was the failure there? What do you mean, it didn't succeed?

It didn't look like... okay. So what happened is: many years later, I learned how the picture on the book was actually made, and that it was actually kind of a fake. But I didn't know that at the time.

That picture was actually a very high-tech thing when it was made, at the beginning of the 1960s; it was made on the largest supercomputer that existed at the time. And even so, it couldn't quite simulate the thing it was supposed to be simulating. But I didn't know that until many, many years later.

So at the time, it was like: you have these balls bouncing around in this box, but I was using this computer with eight kilowords of memory, of eighteen-bit words. So that was, whatever, about twenty-four kilobytes of memory. And I had these instructions...

I probably still remember all of its machine instructions. And it didn't really deal with floating-point numbers or anything like that. So I had to simplify this model of particles bouncing in a box, and I thought: well, I'll put them on a grid and make the things just move one square at a time.

And so I did the simulation, and the result didn't look anything like the actual pictures in the book. Now, many years later, in fact very recently, I realized that the thing I'd simulated was actually an example of a whole computational irreducibility story that I absolutely did not recognize at the time. At the time, it just looked like it did something random, and it looked wrong, as opposed to: it did something random.
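In the spirit of that early experiment, here's a minimal sketch in Python. The grid size, particle count, and collision-free update rule are my own illustrative assumptions, not a reconstruction of the original program: particles start clustered in one corner of a box and move one square per step, reflecting off the walls. Dropping particle-particle collisions is exactly the kind of simplification that can make such a model behave unlike the textbook picture.

```python
import random

def simulate(n_particles=20, size=16, steps=200, seed=1):
    """Collisionless particles on a grid, reflecting off the box walls."""
    rng = random.Random(seed)
    # All particles start clustered in one 4x4 corner of the box,
    # each with a diagonal unit-step velocity.
    particles = [[rng.randrange(4), rng.randrange(4),
                  rng.choice([-1, 1]), rng.choice([-1, 1])]
                 for _ in range(n_particles)]
    for _ in range(steps):
        for p in particles:
            x, y, vx, vy = p
            if not 0 <= x + vx < size:   # reflect off left/right walls
                vx = -vx
            if not 0 <= y + vy < size:   # reflect off top/bottom walls
                vy = -vy
            p[:] = [x + vx, y + vy, vx, vy]
    return particles

final = simulate()
# After many steps the particles are spread through the box rather
# than still clustered in the corner where they started.
print(final)
```

Because there are no collisions, each particle just traces a deterministic zigzag, which is part of why a run like this can look "wrong" compared with a picture of genuinely interacting molecules.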

That randomness is super interesting, but I didn't recognize it at the time. And so, as it was, I got interested in particle physics, and I got interested in other kinds of physics. But this whole second law of thermodynamics thing, this idea that orderly things tend to degrade into disorder, continued to be something I was really interested in.

And I was really curious: for the whole universe, why doesn't that happen all the time? We start off at the Big Bang, at the beginning of the universe, with what seems like a very disordered collection of stuff, and then it spontaneously forms itself into galaxies and creates all of this complexity and order in the universe. And so I was very curious how that happens.

I was always kind of thinking: somehow the second law of thermodynamics is behind it, trying to pull things back into disorder, so to speak. So how was order being created? And so, this is probably now 1980...

...I got interested in galaxy formation and so on in the universe. At that time I was also interested in neural networks, in how brains make complicated things happen and so on.

Okay, wait, wait: what's the connection between the formation of galaxies and how brains make complicated things happen?

Because both are complicated things that happen...

...from simple origins.

Yeah, from some sort of simple origins. I had the sense that what I was interested in was all these different cases where complicated things were arising from rules. And I also looked at snowflakes, things like that.

I was curious about fluid dynamics in general. I was just curious about how this complexity arises. And it took me a while to realize that there might be a general phenomenon. I sort of assumed there's galaxies over here, there's brains over here, and they're very different kinds of things.

So what happened, this is all in 1981 or so, is I decided: okay, I'm going to try to make the minimal model of how these things work. It was sort of an interesting experience, because starting in 1979...

...I had built my first big computer system, SMP, the Symbolic Manipulation Program, which was kind of a forerunner of the modern Wolfram Language, with many of the same ideas about symbolic computation and so on.

The thing that was very important to me was that in building that language, I'd basically tried to figure out what the relevant computational primitives were, which have turned out to stay with me for the last forty-something years. But it was also important because building a language is a very different activity from natural science, which is what I'd mostly done before. In natural science, you start from the phenomena of the world and you try to figure out: how can I make sense of them? The world presents you with what it has to offer, so to speak.

And you have to make sense of it. When you build a computer language or something, you are creating your own primitives, and then you say: what can you make from these? It's sort of the opposite way around from what you do in natural science. But I'd had the experience of doing that.

And so I was like: okay, what happens if you make a sort of artificial physics? What happens if you just make up the rules by which systems operate? And then I was thinking, for all these different systems, whether it was galaxies or brains or whatever: what's the absolutely minimal model that captures the things that are important about those...

...systems, and the computational primitives of those systems?

And so that's how I ended up with cellular automata, where you just have a line of black and white cells, and you just have a rule that says: given the cell and its neighbors, what will the color of the cell be on the next step? And you just run it for a series of steps. The ironic thing is that cellular automata are great models for many kinds of things, but galaxies and brains are two examples where they do very, very badly. They're really irrelevant...

...to those. What is the connection to the second law of thermodynamics and the things you've discovered about cellular automata?

Yes, okay. So when I first started studying cellular automata, in my first papers about them, the first sentence was always about the second law of thermodynamics: how does order manage to be produced, even though there's the second law of thermodynamics, which tries to pull things back into disorder?

My early understanding of that had to do with the fact that these are intrinsically irreversible processes in cellular automata, which can form ordered structures even from random initial conditions. But then, well, it's one of these things where there was a discovery that I should have made earlier, but didn't. I had been studying cellular automata, and what I did was the most obvious computer experiment: you just try the different rules and see what they do.

It's kind of like you've invented a computational telescope: you just point it at the most obvious thing in the sky, and then you just see what's there. And so I did that, making all these pictures of how cellular automata work.

And I studied these pictures, I studied them in great detail. You can number the rules for cellular automata, and one of them is rule 30. So I made a picture of rule 30 back in 1981 or so.

And rule 30, well, it's just like any other one of these rules, except it happens to be asymmetric, left-right asymmetric. And at the time it was like: let me just consider the case of the symmetric ones, to keep things simpler. So I just kind of ignored it. And then, actually in 1984, strangely, I ended up having an early laser printer, which made very high-resolution pictures.

And I thought: I want to make an interesting picture. Let me take this rule 30 thing and just make a high-resolution picture of it. I did, and it has this very remarkable property: the rule is very simple.

You start it off just from one black cell at the top, and it makes this kind of triangular pattern. But if you look inside this pattern, it looks really random. You can look at the center column of cells; I studied that in great detail, and so far as one can tell, it's completely random. It's a little bit like the digits of pi:

you know the rule for generating the digits of pi, but once you've generated them, 3.14159 and so on, they seem completely random. And in fact, I put up this prize back in, what, 2019 or something, for proving anything about the sequence, basically.
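The rule just described is tiny to write down. Here's a minimal sketch in Python, using the standard numbering where rule 30's update is left XOR (center OR right); the grid width, step count, and periodic boundaries are my own illustrative choices:

```python
def rule30_step(cells):
    # Rule 30: new cell = left XOR (center OR right),
    # using periodic (wraparound) boundaries.
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

def run_rule30(width=63, steps=16):
    cells = [0] * width
    cells[width // 2] = 1  # a single black cell at the top
    rows = [cells]
    for _ in range(steps):
        cells = rule30_step(cells)
        rows.append(cells)
    return rows

for row in run_rule30():
    print("".join("#" if c else "." for c in row))
# The center column of successive rows begins 1, 1, 0, 1, 1, ... and
# looks statistically random, despite the simplicity of the rule.
```

Printing the rows shows the triangular pattern; reading down the middle gives the center-column sequence that the prize problems are about.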

Has anyone been able to do anything on that?

People have sent me some things, but I don't know how hard these problems are. I mean, I'm kind of spoiled, because in 2007 I put up a prize for determining whether a particular Turing machine, which I thought was the simplest candidate for being a universal Turing machine, is or isn't universal.

And somebody did a really good job of winning that prize, proving that it was a universal Turing machine, in about six months. So I didn't know whether that would be one of these problems that was out there for hundreds of years, or whether, as in that particular case, a young chap called Alex Smith would nail it in six months. And so with this rule 30 collection, I don't really know whether these are things that are a hundred years away from being gotten, or whether somebody is going to come and do something very clever.

It's such a... I mean, for such a simple rule, such a simple formulation, it feels like anyone can look at it and understand it. I feel like it's within grasp to be able to derive some kind of law, right, that allows you to predict something about this middle column...

...of rule 30, right? But, you know...

...and yet you can't.

Yeah, right. This is the intuition-defying surprise of computational irreducibility: even though the rules are simple, you can't tell what's going to happen, and you can't prove things about it. So anyway, around 1984 I started realizing this phenomenon, that you can have very simple rules that produce apparently random behavior. Okay.

So that's a little bit like the second law of thermodynamics, because you have this simple initial condition that you can describe very easily, and yet it makes this thing that seems to be random. Now, it turns out there's some technical detail about the dynamics, about the idea of reversibility.

When you have a movie of two billiard balls colliding, and you see them collide and bounce off, and you run that movie in reverse, you can't tell which way was the forward direction of time and which way was the backward direction. That's just looking at individual billiard balls. By the time you've got a whole collection of them, a million of them or something, then it turns out to be the case, and this is the sort of mystery of the second law, that you start with the orderly thing and it becomes disordered.

And that's the forward direction in time. The other way round, where it starts disordered and becomes ordered, you just don't see in the world. Now, in principle, if you traced the detailed motions of all those molecules backwards, you would be able to do it. The reversal of time means that as you go forwards in time, order goes to disorder, and as you go backwards in time, order goes...

...to disorder as well. Perfectly. So yes, right.

So the mystery is: why is that the case?

One version of the mystery is: why is it the case that you never see something which happens to be just the kind of disorder that you would need to somehow evolve to order? Why does that not happen? Why do you always just see order go to disorder, and not the other way around? So the thing I started realizing in the 1980s is that it's a bit like cryptography.

It's kind of like you start off from this key that's pretty simple, and then you can run it and get this complicated random mess. And what I started realizing back then was that the second law is kind of a story of computational irreducibility: what we can describe easily at the beginning, we can only describe with a lot of computational effort at the end.

Okay. So now we come many, many years later. Having done this big project to understand fundamental physics, I realized that a key aspect of that is understanding what observers are like. And then I realized that the second law of thermodynamics is the same story as a bunch of these other cases.

It is a story of a computationally bounded observer trying to observe a computationally irreducible system. So it's a story of: underneath, the molecules are bouncing around in this completely determined way, determined by rules. But the point is that we, as computationally bounded observers, can't tell that there were these simple underlying rules; to us, it just looks random.

And when it comes to the question of whether you can prepare the initial state with exactly the right disorder so that it evolves into something orderly: a computationally bounded observer cannot do that. We'd have to have done all of this irreducible computation to work out very precisely what the exact right disordered state is, so that we would get this ordered thing produced from it.
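The point about specially prepared initial states can be made concrete with a toy reversible system. This sketch is my own illustration, not something from the conversation: non-interacting particles hop on a periodic ring, and negating every velocity and running the same number of steps retraces the evolution exactly. So a "disordered" state that evolves back to order does exist, but you can only obtain it by first computing the full forward evolution:

```python
def evolve(positions, velocities, size, steps):
    # Deterministic, exactly reversible dynamics: each particle moves
    # with a fixed velocity on a ring of `size` sites.
    return [(x + steps * v) % size for x, v in zip(positions, velocities)]

size = 1000
positions = list(range(10))                        # an "ordered" clump
velocities = [3, -7, 11, 5, -2, 9, -13, 4, 8, -6]  # assorted speeds

spread = evolve(positions, velocities, size, steps=123)  # looks scattered
# Negate every velocity and run the same number of steps:
back = evolve(spread, [-v for v in velocities], size, steps=123)
print(back == positions)  # the scattered state was secretly special
```

A bounded observer looking only at `spread` has no cheap way to distinguish it from a generic scattered state; the information that it will reassemble into the clump is hidden in the precise correlations between positions and velocities.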

What does it mean to be a computationally bounded observer observing a computationally irreducible system? Is "computationally bounded" something formal?

Right. So you can talk about Turing machines, you can talk about computational complexity theory, polynomial-time computation and things like this. There are a variety of ways to make it more precise, but I think...

...the intuitive version of it is more useful. Which is basically just to ask: how much computation are you going to do to try to work out what's going on? And the answer is we're not able to do a lot of computation. In this room, there will be a trillion trillion trillion molecules, a little bit less for a big room, right? And at every moment, every microsecond or something, these molecules are colliding.

That's a lot of computation getting done. And in our brains, we do a lot less computation every second than the computation done by those molecules. If there is computational irreducibility, we can't work out in detail what all those molecules are going to do; we can only do a much smaller amount of computation. So the second law of thermodynamics is this interplay between the underlying computational irreducibility and the fact that we, as preparers of initial states or as measurers of what happens, are not capable of doing that much computation.

Another big formulation of the second law of thermodynamics is the law of entropy increase, the characteristic that in this universe entropy seems to be always increasing. What does that tell you...

...about the evolution of the universe over time? Yes. And that's a very confused part of the history of thermodynamics, because entropy was first introduced by Rudolf Clausius, and he did it in terms of heat and temperature.

Subsequently, it was reformulated by Ludwig Boltzmann, and he formulated it in a much more combinatorial type of way. But he always claimed that it was equivalent to the Clausius thing, and in one particular simple example, it is.

But the connection between these two formulations of entropy has never really been made in general. Okay, so the more general definition of entropy, due to Boltzmann, is the following. You say: I have a system, and it has many possible configurations.

The molecules can be in many different arrangements. If we know something about the system, for example that it's in a box, has a certain pressure, has a certain temperature, then we know these overall facts about it.

Then we ask: how many microscopic configurations of the system are possible, given those overall constraints? And the entropy is the logarithm of that number. That's the definition.

And that's the general definition of entropy that turns out to be useful. Now, in Boltzmann's time, the thought was that these molecules could be placed anywhere you want. But Boltzmann said: actually, we can make it a lot simpler by having the molecules be discrete.
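With discrete molecules, the counting in that definition becomes elementary. As an illustrative sketch (my own example, not from the conversation): if the only macroscopic fact we know is how many of the n molecules are in the left half of the box, the Boltzmann entropy is the log of the number of microscopic arrangements consistent with that count:

```python
from math import comb, log

def boltzmann_entropy(n_molecules, n_left):
    # Entropy = log of the number of microscopic arrangements consistent
    # with the macroscopic description "n_left of the molecules are in
    # the left half of the box".
    return log(comb(n_molecules, n_left))

n = 100
print(boltzmann_entropy(n, 0))   # all molecules on one side: log 1 = 0
print(boltzmann_entropy(n, 50))  # evenly spread: the maximum, ~66.8 nats
```

The flotilla-in-a-corner state has entropy zero because only one arrangement matches that description; the evenly spread state maximizes the count, which is the combinatorial sense in which entropy "wants" to increase.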

Actually, it wasn't even known that molecules existed in his time, the 1860s and so on. The idea that matter might be made of discrete stuff had been floated ever since ancient Greek times, but there had been a long...

...debate about whether matter is discrete or continuous, and at that time people mostly thought that matter was continuous. It was all confused with this question about what heat is; people thought heat was this fluid, and it was a big muddle. And Boltzmann said: let's assume there are discrete molecules.

Let's even assume they have discrete energy levels. Let's say everything is discrete. Then we can do combinatorial mathematics and work out how many configurations of these things there would be in the box.

And we can compute this entropy quantity. But he said: of course, it's just a fiction that these things are discrete. It's an interesting piece of history, by the way. At that time, people didn't know molecules existed.

There were hints from chemistry that there might be discrete atoms and so on, just from the combinatorics of, you know, two amounts of hydrogen plus one amount of oxygen together make water, things like this. But it wasn't known that discrete molecules existed, and in fact it wasn't until the beginning of the twentieth century that Brownian motion was the final giveaway.

You look under a microscope at these little pieces from pollen grains, and you see they're being discretely kicked; those kicks are water molecules hitting them, and they are discrete. And in fact, it's really quite interesting history. Boltzmann had worked out how things could be discrete.

He basically invented something like quantum theory in the 1860s, but he just thought that wasn't really the way things worked. Then, and this is just a piece of physics history that I think is kind of interesting: around 1900 there's this guy called Max Planck, who had been a longtime thermodynamics person. Everybody was trying to prove the second law of thermodynamics, including Max Planck.

Max Planck believed that radiation, electromagnetic radiation, and somehow its interaction with matter, was going to prove the second law of thermodynamics. But there were these experiments that people had done on blackbody radiation, and there were these curves, and you couldn't fit the curves based on his idea for how radiation interacted with matter. You couldn't figure out how to fit those curves. Except he noticed that if he just did what Boltzmann had done and assumed that electromagnetic radiation was discrete, he could fit the curves.

But he said: this just happens to work this way. Then Einstein came along and said: well, by the way, the electromagnetic field might actually be discrete, it might be made of photons. And then that explains how this all works.

And that was in 1905. That was how that piece of quantum theory got started. An interesting piece of history.

Something I didn't know until I was researching this recently: it's well known in physics history that in 1905 Einstein wrote three different papers.

In 1905, Einstein wrote these three papers: one introduced relativity theory, one explained Brownian motion, and one introduced photons. So a big-deal year for physics and for Einstein. But in the years before that, he'd written several papers.

And what were they about? They were about the second law of thermodynamics; they were attempts to prove the second law, and they're nonsense. I had no idea that he'd done this.

Me neither.

And in fact, of those three papers in 1905, well, not so much the relativity paper, but the one on Brownian motion and the one on photons, both of these were about the story of making the world discrete.

And he got that idea from Boltzmann. But Boltzmann kind of died believing... he has a quote, actually, saying in effect: in the end, things are going to turn out to be discrete, and I'm going to write down what I have to say about this, because eventually this stuff will be rediscovered, and I want to leave what I can about how things are going to be discrete. And I think he has some quote about how one person can't stand against the tide of history in saying that matter is discrete.

So he stuck to his guns. Yes, matter is discrete.

Yes, he did. And, you know, what's interesting about this is that at the time everybody, including Einstein, kind of assumed that space would probably end up being discrete too. But that didn't work out technically, because it wasn't consistent with relativity theory; it didn't seem to be.

And so then, in the history of physics, even though people had determined that matter was discrete and the electromagnetic field was discrete, space was a holdout for not being discrete. And in fact, Einstein in 1916 has a nice quote. He wrote: in the end, it will turn out space is discrete, but we don't have the mathematical tools necessary to figure out how that works yet. And so, you know, I think it's kind of cool that a hundred years later, we do.

So you're pretty sure that every layer of reality is discrete.

Right, and that space is too. And in fact, one of the things I realized recently is about this theory of heat, that heat is really a continuous fluid; that's the caloric theory of heat, which turns out to be completely wrong, because actually heat is the motion of discrete molecules.

Unless you know there are discrete molecules, it's hard to understand what heat could possibly be. Well, I think space is similarly discrete.

And the question is, what's the analog of the mistake that was made with caloric in the case of space? My current guess, my little aphorism of the last few months, has been: dark matter is the caloric of our time. That is, it will turn out that dark matter is a feature of space.

And it is not a bunch of particles. You know, at the time when people were talking about heat, they knew about fluids, and they said heat must just be another kind of fluid, because that's what they knew about. But now people know about particles, and so they say, well, what's dark matter? It must just be particles.

So what could dark matter be as a feature of space?

Oh, I don't know yet. I mean, one of the things I'm hoping to be able to do is to find the analog of Brownian motion in space. Because Brownian motion was seeing down to the level of the effect of individual molecules.

And so in the case of space, you know, most of the things we see about space so far, everything seems continuous. Brownian motion had been discovered in the 1830s.

And it was only identified what it was the result of by Smoluchowski and Einstein at the beginning of the twentieth century. And, you know, dark matter, that phenomenon was discovered a hundred years ago. The rotation curves of galaxies don't follow the luminous matter; that was discovered a hundred years ago.

And I wouldn't be surprised if there isn't an effect that we already know about that is kind of the analog of Brownian motion, that reveals the discreteness of space. And in fact, we're beginning to have some guesses. We have some evidence that black hole mergers work differently when there's discrete space.

And there may be things that you can see in gravitational wave signatures, things associated with the discreteness of space. But for me, it's kind of interesting to see this recapitulation of the history of physics, where people vehemently say, you know, matter is continuous, the electromagnetic field is continuous, and it turns out that isn't true. And then they say space is continuous.
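The statistical signature Einstein and Smoluchowski identified can be shown with a minimal Python sketch (an illustration added here, not something from the conversation): for a particle kicked by discrete molecular collisions, the mean squared displacement grows linearly with time, and that linear growth is how Brownian motion revealed the underlying discreteness of matter.

```python
import random

def random_walk_msd(n_walkers=2000, n_steps=400, seed=1):
    """Mean squared displacement of 1-D random walkers whose motion
    is built from discrete +/-1 molecular kicks, sampled at a few times."""
    rng = random.Random(seed)
    positions = [0] * n_walkers
    msd = {}
    for t in range(1, n_steps + 1):
        for i in range(n_walkers):
            positions[i] += rng.choice((-1, 1))
        if t in (100, 200, 400):
            msd[t] = sum(x * x for x in positions) / n_walkers
    return msd

# Discreteness shows up statistically: MSD grows like t (diffusive),
# not like t^2 as smooth ballistic motion would give.
print(random_walk_msd())
```

Doubling the time roughly doubles the mean squared displacement; that is the fingerprint of many small discrete kicks rather than continuous motion.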

But so, you know, entropy is the number of states of the system consistent with some constraint. And the thing is that if you know in great detail the position of every molecule in the gas, the entropy is always zero, because there's only one possible state: the configuration of molecules in the gas. The molecules bounce around; they have a certain rule for bouncing around.

There's just one state of the gas; it goes to one state of the gas, and so on. But it's only if you don't know in detail where all the molecules are that you can say, well, the entropy increases, because given the things we do know about the molecules, there are more possible microscopic states of the system consistent with what we do know about where the molecules are.

And so people saw this sort of paradox, in a sense: if we knew where all the molecules were, the entropy wouldn't increase. There was this idea introduced by Gibbs at the very beginning of the twentieth century (he was sort of the first distinguished American physics professor, at Yale), the idea of coarse-graining: the idea that these molecules have a detailed way they are bouncing around, but we can only observe a coarse-grained version of that. But the confusion has been that nobody knew what a valid coarse-graining would be.

So nobody knew whether you could have a coarse-graining that was very carefully set up in just such a way that it would notice the particular configurations you could get from a simple initial condition; they'd fit into this coarse-graining, and the coarse-graining would very carefully observe them. Why can't you do that kind of very detailed, precise coarse-graining? The answer is: because if you are a computationally bounded observer and the underlying dynamics is computationally irreducible, that's what defines a possible coarse-graining; it's what a computationally bounded observer can do. And it's the fact that a computationally bounded observer is forced to look only at this coarse-grained version of what the system is doing. If all you can see is the coarse-grained result, with a computationally bounded observation, then inevitably there are many possible underlying configurations that are consistent with that.
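As a small illustrative sketch (added here, not from the conversation itself), Boltzmann-style counting makes this concrete. Take N molecules that can each sit in the left or right half of a box. If the exact microstate is known, only one configuration is consistent with what you know, so the entropy is zero; a coarse-grained observer who only counts how many molecules are on the left assigns entropy equal to the log of the number of consistent microstates.

```python
from math import comb, log

N = 100  # molecules, each in the left or right half of a box

def coarse_entropy(n_left, n_total=N):
    """Boltzmann entropy (log of microstate count) for an observer who
    only sees the coarse-grained macrostate: how many molecules are in
    the left half of the box."""
    return log(comb(n_total, n_left))

# A fully known configuration, e.g. "all molecules on the left", is
# consistent with exactly one microstate, so its entropy is log(1) = 0:
print(coarse_entropy(0))             # 0.0

# After mixing, the typical macrostate is 50/50, which is consistent
# with vastly more microstates, so coarse-grained entropy is maximal:
print(round(coarse_entropy(50), 2))  # ~66.78, i.e. log of C(100, 50)

# Entropy increase is just drift toward macrostates with more microstates:
assert all(coarse_entropy(50) >= coarse_entropy(k) for k in range(N + 1))
```

The "paradox" above is visible here too: track every molecule and each macrostate collapses to a single known microstate with entropy zero; only the coarse-grained observer sees entropy grow.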

Just to clarify: basically any observer that exists in the universe is going to be computationally bounded?

No, any observer like us. I don't know.

Like us? What do you mean, like us?

Well, humans with finite minds.

Who are wielding the tools of science.

Yeah, yeah. I mean, and by the way, there are little sort of microscopic violations of the second law of thermodynamics that you can start to have when you have more precise measurements of where precisely molecules are. But on a large scale, when you have enough molecules, we're not chasing all those molecules; we just don't have the computational resources to do that.

And, you know, I think to imagine what an observer who is not computationally bounded would be like is an interesting thing. Because, okay, what does computational boundedness mean? Among other things, it means we conclude that definite things happen. We take all this complexity of the world, and we make a decision.

We're going to turn left or turn right. And that is kind of reducing all this detail we're observing, sort of crushing it down to this one thing. And if we didn't do that, we wouldn't have all the symbolic structure that we build up that lets us think things through with our finite minds. Instead, we would just be of one with...

The universe, content to not simplify. Yes.

If we didn't simplify, then we wouldn't be like us. We would be like the universe, like the intrinsic universe, but not having experiences like the experiences we have, where we, for example, conclude that definite things happen. We sort of have this notion of being able to make narrative statements.

Yeah, I wonder, just like you imagined as a thought experiment what it's like to be a computer, if it's possible to try to begin to imagine what it's like to be an unbounded computation. Well, okay.

So here's how I think that plays out. My brain is stuck, yeah. So, I mean, we talked about this ruliad, this space of all possible computations, and this idea of being at a certain place in the ruliad, which corresponds to a certain set of computations that you are representing things in terms of. Okay.

So as you expand out in the ruliad, as you encompass more possible views of the universe, as you encompass more possible kinds of computations that you can do, eventually you might say that's a real win: you know, we're colonizing the ruliad. We're building out more paradigms about how to think about things.

And eventually you might say, we won all the way; we managed to colonize the whole ruliad. Okay, here's the problem with that. The problem is that the notion of existence, coherent existence, requires some kind of specialization. By the time you cover the whole ruliad, in no useful sense do you coherently exist.

So in other words, the notion of existence, the notion of what we think of as definite existence, requires this kind of specialization, requires this idea that we are not all possible things; we are a particular set of things. And that's kind of what makes us have a coherent existence. If we were spread throughout the ruliad, there would be no coherence to the way that we work.

We would work in all possible ways. And there wouldn't be a notion of identity. We wouldn't have this notion of a coherent identity.

I am geographically located somewhere exactly, precisely in the ruliad. Therefore I am. Yes, like Descartes.

Yeah, yeah, right. You are in a certain place in physical space; you're in a certain place in rulial space. And if you are sufficiently spread out, you are no longer coherent, and you no longer have, in our perception, what it means to exist.

So the thought is: to exist means to be computationally bounded.

I think so; to exist in the way that we think of ourselves as existing, yes.

The very act of existence is operating in this place, computationally reduced. There's this giant mess of things going on that you can't possibly predict, but never mind; because of your limitations, you have, what is it, an imperative, or a skill set, to simplify? Or an ignorance sufficient to simplify?

So the thing which is not obvious is that you are taking a slice of all this complexity. Just like we have all of these molecules bouncing around in the room, but all we notice is, you know, the flow of the air or the pressure of the air; we just notice these particular things. And the big interesting thing is that there are rules, there are laws, that govern those big things we observe. Yes, it's not obvious, because...

It doesn't feel like a slice. Yeah, well, a slice... well, it's an abstraction.

Yes. But I mean, the fact that the gas laws work, that we can describe pressure and volume and so on without having to go down to the level of talking about individual molecules, that is a nontrivial fact.

And here's what I think is the exciting thing, as far as I'm concerned: the fact that there are certain aspects of the universe... so, you know, we think space is made, ultimately, of these atoms of space and these hypergraphs and so on, but we nevertheless perceive the universe on a large scale to be like continuous space, and so on.

In quantum mechanics, we think that there are these many threads of time, these many threads of history, yet we kind of span them. So, you know, in quantum mechanics, in our models of physics, time is not a single thread. Time breaks into many threads; they branch, they merge.

But we are part of that branching, merging universe. And so our brains are also branching and merging. And so when we perceive the universe, we are branching brains perceiving a branching universe. Yeah.

And so the claim that we believe we are persistent in time, that we have this single thread of experience, is the statement that somehow we manage to aggregate together those separate threads of time that are separated in the fundamental operation of the universe. Just as in space we're averaging over some big region of space, looking at the aggregate effects of many atoms of space, so similarly in what we call branchial space, the space of these quantum branches, we are effectively averaging over many different branches of possible history of the universe.

And so in thermodynamics, we're averaging over many configurations, many, many possible positions of molecules. So the question is: when you do that averaging for space, what are the aggregate laws of space? When you do that averaging over branchial space, what are the aggregate laws of branchial space? When you do that averaging over the molecules and so on, what are the aggregate laws you get? And this is the thing that I think is just amazingly neat, that there...

Are good laws at all. Well, yes.

But the question is, what are those good laws? So the answer is: for space, the aggregate laws are Einstein's equations, for gravity, for the structure of spacetime; for branchial space, the aggregate laws are the laws of quantum mechanics; and for the case of molecules and things, the aggregate law is basically the second law of thermodynamics, and the things that follow from the second law of thermodynamics.

And so what that means is that the three great theories of twentieth-century physics, which are basically general relativity (the theory of gravity), quantum mechanics, and statistical mechanics (which is what grows out of the second law of thermodynamics), all three of the great theories of twentieth-century physics, are the result of this interplay between computational irreducibility and the computational boundedness of observers. And, you know, for me, this is really neat, because it means that all three of these laws are derivable. We used to think that, for example, Einstein's equations were just sort of an arbitrary feature of our universe: the universe might be that way.

Or it might not be that way. Quantum mechanics is just like, well, it just happens to be that way. And the second law, people kind of thought, well, maybe it is derivable.

Okay. What turns out to be the case is that all three of the fundamental principles of physics are derivable. But they're not derivable just from mathematics, just from some kind of logical computation. They require one more thing: they require that the observer, the thing that is sampling the way the universe works, is an observer who has these characteristics of computational boundedness and belief in persistence in time. And so that means that it is the nature of the observer, the rough nature of the observer, not the details (we've got two eyes and we observe photons of certain frequencies and so on), but the very coarse features of the observer, that implies these very precise facts about physics. And I think that's amazing.

So if we just look at the actual experience of the observer: we experience this reality, it seems real to us, and you're saying, because of our bounded nature, it's actually all an illusion? Or is it a simplification?

Yeah, it's a simplification, right.

You don't think a simplification is an illusion?

No, I mean, it's... well, I don't know. Okay, that's an interesting question.

What's real? And that relates to the whole question of why the universe exists, and, you know, what is the difference between reality and a mere representation of what's going on? Yes.

We experience the representation.

Yes. But one question is, you know, why is there a thing which we can experience that way? And the answer is...

Because this ruliad object, which is this entangled limit of all possible computations, there is no choice about it. It has to exist. There has to be such a thing.

It is, in the same sense that, you know, two plus two (if you define what two is, and what plus is, and so on) has to equal four. Similarly, this ruliad, this limit of all possible computations, just has to be a thing. Once you have the idea of computation, you inevitably have the ruliad.

Yes, right.

And what's important about it is that there's just one of it. It's just this unique object, and that unique object necessarily exists. And then, once you know that we are embedded in that and taking samples of it, it is not inevitable that there is this thing we can perceive the way we do; our perception of physical reality necessarily is that way, given that we are observers with the characteristics we have.

So in other words, the fact that the universe exists: it's almost like thinking about it almost theologically, so to speak. And it's funny, because a lot of the questions about the existence of the universe and so on, they transcend what the science of the last few hundred years has really been concerned with. The science of the last few hundred years hasn't thought it could talk about questions like that.

But a lot of the kind of arguments, you know, does God exist, is it obvious: I think, in some sense, in some representation, it's sort of more obvious that something bigger than us exists than that we exist. Our existence as observers, the way we are, is sort of a contingent thing about the universe.

And it's more inevitable that the whole universe, kind of the whole set of all possibilities, exists. But this question about whether it's real or an illusion: all we know is our experience. And the fact is, our experience is this absolutely microscopic sampled piece of the ruliad. And there's this point about, you know, we might sample more and more of the ruliad. We might learn more and more about it.

We might learn, you know, like different areas of physics. Like quantum mechanics, for example: the fact that it was discovered, I think, is closely related to the fact that electronic amplifiers were invented, which allowed you to take a small effect and amplify it up, which hadn't been possible before. You know, microscopes had been invented that magnified things and so on. But taking a very small effect and being able to magnify it was sort of a new thing that allowed one to see a different aspect of the universe and let one discover this kind of thing. So, you know, we can expect that in the ruliad there is an infinite collection of new things we can discover. In fact, computational irreducibility kind of guarantees that there will be an infinite collection of pockets of reducibility that can be discovered.

Boy, would it be fun to take a walk down the ruliad and see what kind of stuff we find there. You write about alien intelligences. Yes. I mean, just these worlds, yes, of computation.

The problem with these worlds is that we can't talk to them.

Yes.

And, you know, the thing is, what I've spent a lot of time doing is just studying computational systems, seeing what they do, what I now call ruliology, kind of just the study of rules, yeah, and what they do. You can kind of easily jump somewhere else in the ruliad and start seeing what these rules do. And what you see is: they just do what they do, and there's no human connection.

So, because, you know, some people are able to communicate with animals: do you think you can become a whisperer of these computations?

I'm trying. That's what I've spent some part of my life doing.

Have you heard anything back? And are you at risk of losing your mind?

My favorite science discovery is this fact that these very simple programs can produce very complicated behavior. And that fact is, in a sense, a whispering of something out in the computational universe that we didn't really know was there before.
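That discovery is usually illustrated with rule 30, an elementary cellular automaton. Here is a minimal Python sketch of it (added as an illustration; Wolfram's original experiments were of course not in Python): each cell's new value is its left neighbor XOR (its own value OR its right neighbor).

```python
def rule30_step(cells):
    """One update of the rule 30 cellular automaton: each cell becomes
    left XOR (center OR right), with wraparound neighbors."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

def run(width=63, steps=30):
    cells = [0] * width
    cells[width // 2] = 1  # start from a single black cell
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = rule30_step(cells)

run()  # prints a seemingly random triangular pattern from a one-line rule
```

Despite the triviality of the update rule, the pattern it grows from a single cell looks random (the center column passes many statistical randomness tests), which is exactly the "very simple programs, very complicated behavior" point.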

I mean, you know, back in the 1980s, I was doing a bunch of work with some very, very good mathematicians, and they were trying to pick away at, you know, can we figure out what's going on in these computational systems? And they basically said: look, the math we have just doesn't get anywhere with this. We're stuck.

There's nothing to say; we have nothing to say. And in a sense, perhaps my main achievement at that time was to realize that the very fact that the good mathematicians had nothing to say was itself a very interesting thing. It was, in some sense, a whispering of a different part of the ruliad, one that wasn't accessible from what we knew in mathematics and so on.

Does it make you sad that you're exploring these gigantic ideas, and it feels like we're on the verge of breaking through to some very interesting discoveries, and yet you're just a finite being that's going to die way too soon? And your brain, your whole body, kind of shows that you're...

It's just a bunch of meat.

It's just a bunch of meat, yeah. Does it make you sad?

Kind of a shame. I mean, I'd kind of like to see how all this stuff works out. But I think the thing to realize, it's an interesting sort of thought experiment: you say, okay, let's assume we can get cryonics to work one day.

There will be one of these things where, kind of like ChatGPT, one day somebody will figure out, you know, how to get water from zero degrees centigrade down to minus forty-four or something without it expanding, and cryonics will be solved. And you'll be able to just, you know, press pause, so to speak, and kind of reappear a hundred years later or something. The thing, though, that I've increasingly realized is that, in a sense, there's this whole question of how one is embedded in a certain moment in time.

And the things we care about now, the things I care about now, for example: had I lived five hundred years ago, many of the things I care about now would seem bizarre; nobody would care about them. It's not even the kind of thing one would think about. And in the future, to the things that most people think about then, one will be a strange relic for thinking about these things.

Kind of like what a theologian might have been thinking about, you know, how many angels fit on the head of a pin or something; that might have been the big intellectual thing of the time. So, yeah, it's one of these things where, particularly, you know, I've had the, I don't know, good or bad fortune.

I'm not sure; I think it's a mixed thing. I've invented a bunch of things where I can, I think, see well enough what's going to happen that, you know, in fifty years, a hundred years, whatever, assuming the world doesn't exterminate itself, so to speak, these are things that will be centrally important to what's going on. And it's both a good thing and a bad thing in terms of the passage of one's life.

I mean, it's kind of like, if everything I'd figured out was: okay, I figured that out when I was twenty-five years old, and everybody says it's great and we're done; it's like, okay, but I'm going to live another how many years, and it's all downhill from there, in a sense. It's better in some sense, it sort of keeps things interesting, that I can see a lot of these things coming. You know, I didn't expect ChatGPT.

I didn't expect the kind of opening up of this idea of computation and computational language that's been made possible by this. I didn't expect that. This is ahead of schedule, sort of; the big flowering of that stuff I'd been assuming was another fifty years away. So if it turns out it's a lot less time than that, that's pretty cool, because, you know, I'll hopefully get to see it...

Rather than... well, I think I speak for a very, very large number of people in saying that I hope you get to go on for a long time to come. You've had so many interesting ideas. You've created so many interesting systems over the years.

I can see now that GPT, the language models, broke open the world even more. I can't wait to see you at the forefront of this development, and what you do. And I have been a fan of yours, like I told you many, many times, since the very beginning.

I'm deeply grateful that you wrote A New Kind of Science, that you explored this mystery of cellular automata and inspired this one little kid in me to pursue artificial intelligence and all this beautiful world. So thank you so much. It's a huge honor to talk to you, to just be able to pick your mind and to explore all these ideas with you. Please keep going; I can't wait to see what you come up with next. And thank you for talking today. We're way past midnight.

We only did four and a half hours. I mean, we could probably go more; we'll save that for the next time. This is round number four.

But I'm sure we'll talk many more times. Thank you so much. My pleasure. Thanks for listening to this conversation

with Stephen Wolfram. To support this podcast, please check out our sponsors in the description. And now, let me leave you with some words from Georg Cantor: the essence of mathematics lies in its freedom. Thank you for listening, and hope to see you next time.