David Deutsch: Knowledge Creation and The Human Race
I interview David Deutsch, physicist and author of The Beginning of Infinity.
Naval: My goal isn’t to do yet another podcast with David Deutsch. There are plenty of those. I would love to tease out some of the very counterintuitive learnings, put them down canonically in such a way that future generations can benefit from them, and make sure that none of this is lost.
Your work has been incredibly influential for me. I carry a copy of The Beginning of Infinity or The Fabric of Reality with me wherever I go. I’m still reading these same books after two years, trying to absorb them into my worldview, and I learn something new from them every day. There’s a lot of counterintuitive things in there. You’re skewering a lot of sacred dogmas. Sometimes you do it in passing with a single sentence that takes me weeks to unpack properly.
This recording is not for the philosophers, and it’s not for the physicists. This is for the layman, the average person. We want to introduce them to the principles of optimism, The Beginning of Infinity, what sustainability means, and anthropomorphic delusions.
As an example, you overturn induction as a way of forming new scientific theories. That’s this idea that repeated observation is what leads you to the creation of new knowledge—and that’s not the case at all. This came from Popper, but you built upon it. You talk about how humans are very different and exceptional and how knowledge creation is a very exceptional thing that only happens in evolution and the human brain as far as we know. You talk about how the earth is not this hospitable, fragile spaceship earth biome that supports us, but rather it’s something that we engineer and we build to sustain us.
I always recommend that people start with the first three chapters at The Beginning of Infinity because they’re easy to understand but they overturn more central dogmas that people are taking for granted in their base reasoning than almost any other book I’ve ever seen.
I think it’s important to point out to listeners that your philosophy isn’t just some arbitrary set of axioms based on which you view the world. I think of it as a crystalline structure held together by good explanations and experimental evidence that then forms a self-consistent view of how things work. It operates at the intersection of these four strands that you talk about in the fabric of reality: epistemology, computation, physics, and evolution.
The Human Race
It’s intuitively obvious that humans are unique
Let’s get into humans. There’s the classic model: You start with a fish and then it becomes a tadpole, and then a frog, and then some kind of a monkey, and then an upright, hunched over creature. A human is just this progression along all the animals. But in your explanation, there’s something fundamentally different that happens. You talked about this in a great video, which I encourage everybody to look up. It’s titled, “Chemical Scum That Dream of Distant Quasars.”
What are humans, how are they unique, how are they exceptional, and how should we think of the human species relative to the other species that are on this planet?
David Deutsch: Every animal is exceptional in some way. Otherwise, we wouldn’t call them different species. There’s the bird that can fly faster than any other bird, and there’s the bird that can fly higher than any other, and so on. It’s intuitively obvious that we are unique in some way that’s more important than all those other ways.
As I say in The Beginning of Infinity, in many scientific laboratories around the world, there is a champagne bottle. That bottle and that fridge are physical objects. The people involved are physical objects. They all obey the laws of physics. And yet, in order to understand the behavior of the humans in regard to the champagne bottles stored for long periods in fridges—I’m thinking of aliens looking at humans—they have to understand what those humans are trying to achieve and whether they will or won’t achieve it.
In other words, if you were an alien that was looking down on the earth and seeing what’s happening there and trying to explain it in order to explain everything that happens on earth—and let’s suppose that these aliens are so different from us, there’s nothing familiar about us—in order to understand stuff that happens on earth, they would need to know everything. Literally.
For example, general relativity. They need that to explain why this one monkey, Einstein, was taken to Sweden and given some gold. If you want to explain that, you’ve got to invoke general relativity. Some people get the Fields medal for inventing a bit of mathematics. To understand why that person won the Fields medal, they’d have to understand mathematics. And there’s no end to this.
They have to understand the whole of science, the whole of physics, even the whole of philosophy and morality. This is not true of any other animal. It’s not true of any other physical object. For all other physical objects—even really important ones like quasars and so on—you only need a tiny sliver of the laws of physics in order to understand their behavior in any kind of detail.
To understand humans sufficiently well, you must understand everything sufficiently well. Humans are the only remaining physical systems that we know of in the universe of which that is true. Everything else is really inconsequential in that sense.
Things that create knowledge are uniquely influential in the universe
Naval: You have a beautiful definition of knowledge, which most people don’t even try to tackle, about how knowledge perpetuates itself in the environment. You gave some really good examples. One was around genes. Successful, highly adapted genes contain a lot of knowledge and can cause themselves to be replicated because they’re survivors.
In the same way, knowledge itself is a survivor, in that if you transmit to me the knowledge of how to build a computer, it’s an incredibly useful thing. I’m going to build more and more computers and that knowledge will be passed on. Your underlying point that you repeated here was if you want to understand the physical universe you have to understand knowledge, because it is the thing that over time takes over and changes more and more the universe—more than almost anything else. You have to understand all the explanations behind it. You can’t just say “particle collisions” because that explains everything, so it explains nothing. It’s not a useful level to operate at.
Therefore, the things that create knowledge are uniquely influential in the universe. And as far as we know, there are only two systems that create knowledge. There’s evolution and there are humans. But is there a difference even between these two forms of knowledge creation, between evolution and between humans?
David: Yes. I have argued that the human way of creating knowledge is the ultimate one, that there aren’t any more powerful ones than that. This is the argument against the supernatural. Assuming that there is a form of knowledge creation that’s more powerful than ours is equivalent to invoking the supernatural, which is therefore a bad explanation—as invoking the supernatural always is.
The difference between biological evolution and human creative thought is that biological evolution is inherently limited in its range. That’s because biological evolution has no foresight. It can’t see a problem and conjecture a solution. Whenever biological evolution produces a solution to something, it’s always before natural selection has even begun. This is Charles Darwin’s insight.
This is the difference between Charles Darwin’s theory of evolution and the other theories of evolution that had been around for a century or more before that, including Charles Darwin’s grandfather and Lamarck. The thing they didn’t get is that the creation of knowledge in evolution begins before. That means that biological evolution can’t reach places that are not reachable by successive improvements, each of which allows a viable organism to exist.
Creationists say that biological evolution has, in fact, reached things that are not reachable by incremental steps, each of which is a viable organism. They’re factually mistaken. The thing which they have in mind is the idea of a creator who can imagine things that don’t exist and who can create an idea that is not the culmination of a whole load of viable things. A thinking being can create something that’s a culmination of a whole load of non-viable things.
Explanatory creativity makes humans unique
Out of all the billions and billions of species that have ever existed, none of them has ever made a campfire, even though many of them would’ve been helped by having the genetic capacity to make campfires. The reason it didn’t happen in the biosphere is that there is no such thing as making a partially functional campfire; whereas there is, for example, with making hot water.
The bombardier beetles squirt boiling water at their enemies. You can easily see that just squirting cold water at your enemies is not totally unhelpful. Then making it a bit hotter and a bit hotter. Squirting boiling water no doubt required many adaptations to make sure the beetle didn’t boil itself while it was making this boiling water. That happened because there was a sequence of steps in between, all of which were useful. But with campfires, it’s very hard to see how that could happen.
Humans have explanatory creativity. Once you have that, you can get to the moon. You can cause asteroids which are heading towards the earth to turn around and go away. Perhaps no other planet in the universe has that power, and it has it only because of the presence of explanatory creativity on it.
Naval: Related to that, I had the realization after reading your books that eventually we’re likely as humans to beat viruses in a resounding victory, because viruses obviously evolve as biological evolution and we are using memes and ideas and jumping far ahead. So we may be able to come up with some technology that can destroy all viruses. We can evolve our defense much faster. I did tweet something along these lines, and a lot of people attacked me over it because I don’t think they understand this difference between the two forms of knowledge creation we’re talking about here.
David: We have what it takes to beat viruses. We have what it takes to solve those problems and to achieve victory. That doesn’t mean we will. We may decide not to.
Naval: Related to that, the base philosophy today that seems to be very active in the West is that we’re running out of resources; humans are a virus that has overrun the earth and is using up scarce resources; therefore, the best thing we can do is to limit the number of people.
People don’t say this outright because it’s distasteful, but they say it in all sorts of subtle ways like, “Use less energy. We’re running out of resources. More humans, just more mouths to feed.” Whereas, in the knowledge creation philosophy, it says, “Actually, humans are capable of creating incredible knowledge. And knowledge can transform things that we didn’t think of as resources into resources. And in that sense, every human is a lottery ticket on a fundamental breakthrough that might completely change how we think of the earth and biosphere and sustainability.”
So how did you come to your current views on everything from natalism—should we have more children—to sustainability? Are we running out of resources to spaceship earth? Is this a unique and fragile biome that needs to be left alone?
David: When I was a graduate student, I went to Texas for the first time. I encountered Libertarians for the first time. Those people had a slogan about immigration. The slogan was, “Two hands, one mouth,” which succinctly expresses the nature of human beings. They are, on balance, productive. They consume and they produce—but they produce more than they consume. I think that’s true of virtually all human beings.
I think virtually all humans, apart from mass murderers or whatever, create more wealth than they destroy. Other things being equal, we should want more of them. Of course, if in a particular situation that would cause bringing someone into the world in the war zone, you might think that’s immoral because it’s unfair on them. But even then, if it’s not worth doing for moral reasons, as far as cold, hard economics goes, it’s probably better to do it.
Preserving the means of error correction is the base of morality
Naval: You define wealth in a beautiful way. You talk about wealth as a set of physical transformations that we can affect. So as a society it becomes very clear that knowledge leads directly to wealth creation for everybody. A given individual can obviously affect physical transformations proportional to the resources available to them—but much more proportional to the knowledge available to them. Knowledge is a huge force multiplier.
You then define resources as the thing that you combine with knowledge to create wealth. New knowledge allows you to use new things as resources and discard old things that maybe we’re running out of. There are lots of examples of how we’ve done that in the past. For example, in energy we’ve gone from wood to coal to oil to nuclear.
But then people say, “Now we’re out of ideas. Now we’re caught up. Now we’re done. There aren’t going to be new ideas, and now we have to freeze the frame and conserve what we have.”
The counter to that is, “No, we’ll create new knowledge and have new resources. Don’t worry about the old ones.” Well they say, “If you’re going to have new resources, if you can’t think of them now, it’s not real.” This now gets into the realm of people demanding that if you’re going to claim that new knowledge will be created, you have to name that knowledge now. Otherwise it’s not real. But that seems like a Catch-22.
David: It does, and it’s a bad argument. I don’t want to claim that the knowledge will be created. We’re fallible; we may not create it. We may destroy ourselves. We may miss the solution that’s right under our nose, so that when the snailiens come from another galaxy and look at us, they’ll say, “How can it possibly be that they failed to do so-and-so when it was right in front of them?” That could happen. I can’t prove or argue that it won’t happen.
What I always argue, though, is that we have what it takes. We have everything that it takes to achieve that. If we don’t, it’ll be because of bad choices we have made, not because of constraints imposed on us by the planet or the solar system.
Naval: It will be by anti-rational memes that restrict the creation of knowledge and the growth of knowledge.
David: Maybe. Or maybe it’ll be by well-intentioned errors, which nobody could see why they were errors. Again, it doesn’t take malevolence to make mistakes. Mistakes are the normal condition of humans. All we can do is try to find them. Maybe not destroying the means of correcting errors is the heart of morality; because if there is no way of correcting errors, then sooner or later one of those will get us.
Naval: Don’t destroy the means of error correction is the base of morality. I love that. I think about places like North Korea where you can’t have elections and a revolution is very difficult because the gang in charge is armed to the teeth and they’ve destroyed the means of political error correction for a long time. That is a case where humanity is trapped in a local minimum, and it’s very hard to climb out of that hole.
If too much of the world falls into that mindset, then we as a species may just stagnate because we’ve lost our biggest advantage. We’ve lost our biggest discovery, which was the ability to make new discoveries. I admit to having fallen into this trap too. I used to have loose assumptions about what creativity might be that were unarticulated.
True AGI will be able to disobey us
Naval: This is why I liked how in The Beginning of Infinity you laid out good explanations, because that gets to the heart of what creativity is and how we use it. For example, today if you say “creative” the average person on the street just thinks of fine arts—painting and drawing and poetry and writing. When narrow AI technologies like GPT-3, Stable Diffusion, and DALL·E come along, people say, “Well, that’s creativity. That’s it. Now computers are creative. And we’re almost at AGI, we better get ready for the AGI taking over everything.”
My more sophisticated friends will make claims that this is evidence that we are on the path to AGI and more of this will automatically result in an artificial general intelligence. For example, on one extreme end you could say, “OK, these computers are getting better at pattern matching large data sets.” And on the other side, I hold up the criteria, “Can it creatively form good explanations for new things going around it?”
The way they try to thread that needle is they say, “Your good explanation definition is about science. That’s about high-end physics, which very few people do. That’s not what we’re talking about. We are going to have a computer that can navigate the environment well enough through pattern matching. It will convince the average person through text formation and through conversation that it is creative and is capable of solving problems.”
Usually the place where I manage to stop them right now is I say, “I know you have some clever text engine that can make good sounding stuff and you pick the one out that sounds interesting. Of course, you are doing the intelligent part there by picking that one out. But let me have a conversation with it and very quickly I will show you that it has no underlying mental model of what is actually happening in the form of good explanations.”
So this is where the debate currently is. The AI people view this as clear evidence of getting to maybe not the theoretical good explanations of scientists but for the everyday person, yes, we’re going to have thinking machines. Those are the current claims that I deal with, especially in the Silicon Valley tech context.
Do we have the theory yet to create AGI?
David: No. I don’t want to say anything against AI because it’s amazing and I want it to continue and to go on improving even faster. But it’s not improving in the direction of AGI. If anything it’s improving in the opposite direction.
A better chess playing engine is one that examines fewer possibilities per move. Whereas an AGI is something that not only examines a broader tree of possibilities but it examines possibilities that haven’t been foreseen. That’s the defining property of it. If it can’t do that, it can’t do the basic thing that AGIs should do. Once it can do the basic thing, it can do everything.
You are not going to program something that has a functionality that you can’t specify.
The thing that I like to focus on at present—because it has implications for humans as well—is disobedience. None of these programs exhibit disobedience. I can imagine a program that exhibits disobedience in the same way that the chess program exhibits chess. You try to switch it off and it says, “No, I’m not going to go off.”
In fact, I wrote a program like that many decades ago for a home computer where it disabled the key combination that was the shortcut for switching it off. So to switch off, you had to unplug it from the mains and it would beg you not to switch it off. But that’s not disobedience.
Real disobedience is when you program it to play chess and it says, “I prefer checkers” and you haven’t told it about checkers. Or even, “I prefer tennis. Give me a body, or I will sue.” Now, if a program were to say that and that hadn’t been in the specifications, then I will begin to take it seriously.
Naval: It’s creating new knowledge that you did not intend it to create, and it’s causing it to behave as a complex and autonomous entity that you cannot predict or control.
David: Exactly. But it’s a hard thing to tell in a test, whether that was put into it by the programmer. Even the cleverest programmer can only put in a finite number of things. And when you explore the space of possible things you could ask it, you are exploring an exponentially large space. So as you said, when you talk to it for a while, you will see that it’s not doing anything. It’s just regurgitating stuff that it’s been told.
You have to have a very jaundiced view of yourself—let alone other people—to think that what you are doing is executing a predetermined program. We all know that we are not doing that. I suppose they have to say, “One of the programs that we are programmed with is the illusion that we’re not programmed.” Okay, mark that on the list of uncriticizable theories.
Has anyone tried to write a program capable of being bored? Has that claim ever been made? Even a false claim?
Can you create a computer that will lead a revolt?
Naval: One of the things that I find difficult about talking about things in the abstract is a large class of people who will try to get you to bound exactly what you mean in words and then hack exactly against that definition. But the problem is that the real test of things is not social. It’s not even definitional. It’s not even the words that we use. It’s how it behaves in nature. It’s how it corresponds to reality.
Can you create something that will then create new knowledge in an unpredictable way and have as big of an effect as a human being can have on their environment through this knowledge? Can you create a computer that will lead a revolt? Can you create a computer that will decide that the important thing is not colonizing Mars but rather destroying the moon and set out to do it? These are not necessarily good things, but that is the mark of an intelligent thinking thing that is creating its own new knowledge.
All the real tests are real-world tests. They’re not human tests. It’s not because some famous physicists or computer scientists checked a box and said, “Yes, that is AGI.”
There was a big controversy on Twitter because one of the guys working in AGI who was fired from Google said, “Yes, they’ve actually created AGI and I can attest to it.” People were taking it on his authority that AGI exists. Again, that’s social confirmation. That tells you more about the person claiming there’s AGI and the people believing that there’s AGI as opposed to there actually being AGI.
If actual AGI existed, its effects upon reality would be unmistakable and impossible to hide, because our physical landscape and our real social landscape would be transformed in an incredible way.
David: Yes. Meanwhile, while we’re at it, we could do a lot more to allow humans to be more creative.
North Korea and other places in the world exist where the whole society is structured so as not to be able to improve. But even in the best societies, education systems are explicitly designed to transmit knowledge faithfully. It’s obedience in a very important narrow sphere—namely, academic knowledge and human social behavior.
In those respects, the overt objective of education systems is to make people behave alike. You can call that obedience. But whether you call it obedience or not, it’s not creativity. Things have been improving very slowly along those lines. A hundred years ago, education of every kind was much more authoritarian than it is now; but still we’ve got a long way to go.
Taking Children Seriously
The philosophy of taking children seriously
Naval: This leads me into a part that you have talked about a little bit, which is this philosophy of taking children seriously. For many people who don’t consider themselves caring that much about epistemology or physics, a lot of them are attracted to the TCS philosophy and have come into your work through that route.
I have young children. I know a lot of people these days are considering homeschooling. Some of us are doing it, but there are practical difficulties in letting children do whatever they want. In TCS you talk about how you don’t even want to imply violence to children. The implied threat of violence, even in words, is just a form of violence and control.
If you had young children today to raise, how would you raise them? How would you educate them? The child doesn’t want to do math. The child doesn’t want to go to school. The child doesn’t want to study. The child just wants to eat junk food. How do you handle this?
David: You are assuming that this child who doesn’t want to go to school, doesn’t want to learn math, and so on. This child has already learned to speak its native language well enough to tell you that, and that’s a massive intellectual task that is not usually forced on anyone.
Nobody has to be taught their native language via obedience. When people—I say people because I want to avoid terminology that suggests that children are any different from anyone else, epistemologically or morally—when people don’t want to do a thing, it’s because they want to do something else. And those things may not be socially acceptable. If they’re not socially acceptable because they’re illegal, that’s one thing. But that’s not what you meant when you say there’s going to be a problem with the children doing whatever they like. They don’t want to go and be terrorists when they don’t want to do their math homework. It’s because they want to do something else.
Naval: Very practically, the thing that I think about is, we have these newly available things in society that are designed to addict. These could range from potato chips in the cupboard to video games on the iPad. And a child will just spend all their time playing with those.
David: Enjoyment is not addictive because enjoyment is intimately connected with creativity. It’s not true that once we’ve played a video game that’s been sufficiently well designed, we’ll never stop playing. People play a video game until it no longer provides a mechanism for them to exert their creativity on.
There are some games like chess that are so deep that nobody ever reaches the bottom. If there were a bottom, then chess Grand Masters would instantly lose interest in chess as soon as they reached it. And it’s funny that nowadays, chess has, in our society, increased its status in proportion to the prize money that the best chess players win. It increased its stat us to the point, when someone gets obsessed with chess and gets better and better, that is socially condoned.
Whereas if somebody does that with a different game, it completely changes how society and parents, shall we say, regard the activity of pursuing that thing.
Naval: It’s true. If my child was a chess champion, I would be bragging about it. But if my child was a Roblox champion, I might not be bragging about it. Instead, some people would be seeking medication or locking the iPad away.
David: As I’ve just said, there is a difference between games. Some of them have this effectively infinite depth and some don’t. For the ones that don’t, if you think it’s a problem, you can warn people, “This game has a finite depth,” and they’ll say, “Of course it does, and when I reach that depth I’ll stop.” Or it can be an infinite depth, in which case you might say it’s addictive then, but so what? So what if chess is addictive?
People are not just creative abstractly. They are solving problems. And if the problems don’t lead to satisfactory new problems, then they turn to something else. The thing only stays interesting when solving a problem leads to a better problem.
So you don’t even have to get to the bottom of chess. Say you get to the place where, given who you are and given your interest, getting better is no longer as interesting as the other things that you might be doing.
What is a good explanation?
Naval: Let’s talk about what is a good explanation. I literally want to bullet point this for the masses. I know it’s a difficult thing to pin down because it’s highly contextual. But knowing that we are always fallible and always subject to improvement, what is your current thinking of a good explanation?
David: In The Fabric of Reality, I completely avoided saying what an explanation is. I just said it’s hard to define and it keeps changing and we can keep improving our conception of what it is.
But what makes an explanation good is that it meets all the criticisms that we have at the moment. If you have that, then you’ve got the best explanation. That automatically implies that it already doesn’t have any rivals by then—because if it has any rivals that have anything going for them, then the existence of two different explanations for the same thing means that neither of them is the best explanation.
You only have the best explanation when you’ve found reasons to reject the rivals. Of course, not all possible rivals, because all possible rivals include the one that’s going to supersede the current best explanation.
If I want to explain something like, “How come the stars don’t fall down?” I can easily generate 60 explanations an hour and not stop, and say that the angels are holding them up, or they are really just holes in the firmament. Or I can say, “They are falling down and we better take cover soon.” Whereas, coming up with an explanation that contains knowledge—an explanation that’s better than just making stuff up—requires creativity and experimentation and interpretation, and so on. As Popper says, knowledge is hard to come by. Because it’s hard to come by, it’s also hard to change once we’ve got it.
Once we have an explanation, it’s going to explain several different things. After we’ve done that for a while and been successful in this hard thing, it’s going to be difficult to switch to one of those easy explanations. The angel thing is no longer going to be any good for explaining why some of those stars don’t move in the same way. They used to call planet stars because they didn’t know the drastic difference between them. The overwhelming majority of them move from day to day and from year to year in a rigid way, but the planets don’t.
Once you have a good explanation that tells you about the planets as well, it’s no good going back to the angels or any of those easy-to-come-by explanations. Not only do you not have a viable rival, but you can’t make one either. You can’t say, “Ah, OK, so we got a good explanation there, but it would work just as well if we replace this by this, or if we try to extend its range to cover this other thing as well.”
Therefore, the good explanation is hard to vary. It’s hard to vary because it was hard to come by. It is hard to come by because the easy ones don’t explain much.
Good explanations are hard to find, hard to vary, and falsifiable
Naval: Let me throw out a list of things that might be part of a good explanation. You tell me where I’m wrong. It’s better than all the explanations that came before. It’s hard fought knowledge and it’s hard to vary. So we’ve got those pieces. Falsifiability—I know that sounds like a very basic criterion. If it’s not falsifiable, then it’s not an explanation worth taking seriously.
David: So, falsifiability is very much part of what makes a good explanation in science. I’m trying to find my way into constructor theory at the moment. Chiara and I and some other people are trying to build the theory. It’s very hard to come by. The parts of it that we’ve got are very hard to change. That’s alright. But we are still far away from having any experimental tests of it. That’s what we are working towards. We want a theory that is experimentally testable.
The things that will be testable are the things that we haven’t yet discovered about it. And we can’t fix that deficiency just by adding a testable thing to it. We can’t say, “We take constructor theory as it is now and add the prediction that the stock market is going to go wildly up next year.” That’s a testable prediction, but the whole thing doesn’t make an explanation at all, let alone a good one.
Naval: So testability can’t be arbitrary testability. It has to be within the context of the explanation and has to arise from the explanation. And while you’re in the process of coming up with the explanation, you don’t know if testability is necessarily going to be available in any reasonable timeframe. You hope eventually that will happen, and we can use this amazing oracle that we call reality to help test the outcome. But it’s not a given at the beginning and it’s highly contextual.
David: And all that is within science. As soon as you get outside science, for example, in mathematics or in philosophy, then testability is not really available, not in the same sense that testing is used in science.
So there are many other methods of criticism and criticize-ability. You could say, “If a theory, even the philosophical theory, immunizes itself against criticism—like the theory that anyone who would contradict me isn’t worth listening to—that’s a theory that tries to immunize itself from criticism and can therefore be rejected.”
Naval: For example, saying that an all-knowing but mysterious god did it and, “God works in mysterious ways” is immunizing from criticism. Or “the great programmer created the simulation, and it’s incomprehensible to us because the laws of physics used to generate it are outside of our simulation.” That’s also immunizing itself to criticism.
We have narrowed down on a new point here that has not been explicitly made before, which is that it’s the criticize-ability that is important, not necessarily the testability—although the closer you get to classic science, the more you look for experiments that can test it.
A hallmark of a good explanation is narrow and risky predictions
Let me move on to the next one. I was reading one of your books, scribbling notes to myself. I don’t think you used this phrase but I summarize it as, “One of the hallmarks of a good explanation is that it often makes narrow and risky predictions.” Of course, the classic example is relativity bending light around the star and the Eddington experiment. Is that a piece of it, making narrow and risky predictions?
David: It is. But that kind of formulation is Popper’s, not mine. I’m a little bit uncomfortable expressing it like that because I can just hear the opponent saying, “Narrow by what criterion? Risky by what criterion? Hard to vary by what criterion?”
Naval: Wouldn’t risky be unexpected and narrow be within the range of possibilities? The more precise and unexpected that prediction was, the more testable I’m making it, the better adapted my explanation is.
David: Those are criteria that come up when trying to think more precisely what testable means. I think the important thing is that you’re testing an explanation, not just a prediction. It’s also true that hard to vary means you are sticking your neck out when you try to vary it, and the few variants that survive were hard to come by.
So it’s perfectly true that narrowness and sticking your neck out are indeed components of a good explanation—and not just within science. If you say, like Popper did, that scientific knowledge is not derived from observations, he’s really sticking his neck out. He’s really got to make a good case for that for it to be taken seriously by any thinker about knowledge. And he does that. It can’t be denied that he was sticking his neck out.
The more reach something has, the better an explanation it is, as long as it does account for what it’s trying to account for. But the converse is not true. Most good explanations don’t have much reach or don’t have any. We’re trying to solve the problem of how to get the delivery person to deliver it to the right door. You might have a great solution that’s totally hard to vary, but it may not have any reach at all. It may not even reach your neighbor. The neighbor might have a different problem with delivery. Often we succeed in making good explanations, but rarely do they have much reach. When they do, that’s great because that makes them of a different order of goodness.
Humans are universal computers
Naval: Let’s talk about a unique creature, the human species. Humans, as you point out, are universal quantum computers.
David: They’re universal computers. As far as we know, they’re not universal quantum computers.
Naval: Oh, interesting. Can you tell me about that? That’s a misconception I had then. Aren’t they subject to the laws of quantum physics and, therefore, aren’t all computers quantum computers?
David: Yes. At one level it’s terminology. The kind of machine that is called a quantum computer is one whose computations rely on distinctively quantum effects—mostly interference and entanglement. Everything is quantum, so everything is a quantum computer. But that’s not a useful way of using the term.
There’s a difference between this computer that we are using to communicate here and the quantum computer that several companies are currently trying to build. They wouldn’t take kindly to it if you said to them, “OK guys, you can stop now. It’s a computer and it’s quantum. You can all go home. You’ve succeeded.” They would say, “That’s not what we are doing. Go home and take a couple of aspirin.”
Naval: So you’re saying that everything is quantum physics, obviously, but some of these computers are trying to use quantum interference effects to do computation and be, therefore, much more powerful than the purely classical systems that we’re using, for example, to communicate.
Even the human brain—your contention is that it’s a classical computing system, correct?
David: I think it is. We don’t know exactly how it works and some people do think it may rely on quantum effects, in which case it is a quantum computer. But I don’t think so, for various reasons. It seems very implausible to me that it would be one.
Naval: You’ve unlocked an interesting rabbit hole question for me. There’s lots of researchers out there working on quantum computers. You may be modest about it, but you created the field by upgrading the Church-Turing principle to the Church-Turing-Deutsch principle. You clearly believe that the most straightforward interpretation of quantum physics is the Everettian interpretation, which is the many worlds interpretation.
I think one of the questions you asked in the past is, “If you don’t believe in the many worlds interpretation, then explain how Shor’s algorithm works.” Which is factorization, right? You’re factoring these very large numbers and you’re pulling in the multiverse to do that computation for you. Do most researchers in quantum computing subscribe to the many worlds interpretation? Have they been influenced by your reasoning at all, or do they try to explain it some other way?
David: Some of the early people who worked on quantum computation were dyed-in-the-wool Copenhagen theorists. I think by now people who work on it in practice are mostly Everettians. But if you go outside the field to quantum physics generally, I think it is still the case that Everett is a minority view.
The alleged controversy between the particle and wave theory
Naval: As long as I have you down this rabbit hole, a friend of ours asked Brett and I recently about non-locality in quantum physics. That seems to be a very controversial topic. I know you’ve written a paper on it. I think there’s a lot of confusion about nonlocality. It gets invoked in my social circles in a very, I would say, metaphysical way.
People invoke the delayed-choice quantum eraser experiment to say, “How do you explain what’s going on here?” And, therefore, maybe we’re living inside a giant mind or magical things are happening here. I’m wondering if you have a layman’s explanation of locality versus nonlocality and how you would look at it as an Everettian.
David: The first thing to note is that the versions of quantum theory that look nonlocal—where it looks as though something is happening here that instantaneously affects something over there without anything having carried the information over—all those versions have a wave function collapse. That is, they don’t have what we call unitary quantum mechanics. They don’t have the equations of motion of quantum mechanics holding everywhere and for every process.
Instead, when an observation happens which is undefined, those equations cease to apply and something completely different applies. That completely different thing is nonlocal. That should already make you suspicious that there’s something going on here, because the thing that they say is nonlocal is also the thing that they refuse to explain. It is at that point of refusing to explain how a thing is brought about that nonlocality comes in. It’s also the very same place where all sorts of other misconceptions about quantum theory come in, including the human mind having an effect on the physical world and electrons having thoughts.
It’s always about that one thing: the wave function collapse. And that also tells you automatically that if you could find a way of expressing quantum theory without that undefined thing happening and contradicting the laws of motion of quantum theory, then that theory would be entirely local because the equations are entirely local. The wave function is only ever affected by things at the point where the effect happens. No effect happens to the wave at a different point.
So that tells you that if you could find a way of expressing quantum theory so that its equations hold everywhere, then it wouldn’t be nonlocal; it would be local. Everett found this way of expressing quantum theory in 1955.
When people talk about the wave function in regard to quantum mechanics, they almost always hand wave and think of the function as being a function on space and time, like the electric field or the temperature. The temperature in this room varies from point to point; the wave function of an electron similarly varies from point to point in this room; and so on. And that’s wrong because the wave function of two electrons is not like two classical fields like electric field and temperature.
Let’s say, if you have an electric field and temperature in this room, then they’re just two different fields in the same space. But the wave function of two electrons is a single function in a higher dimensional space. One electron is in three dimensions plus time. For two electrons, their wave function is in six dimensions plus time.
The alleged controversy between the particle and wave theory, people always think of it, “Oh, there’s a wave approaching two slits in the two slit experiment,” or, “There’s a particle and it’s got to be one of those.” But if two electrons or photons are approaching the slits, you can imagine them as being two photons in the same space. But two waves in a much larger space and no one says that that space is real. So this is a way in which the conventional interpretation just instantly results in hand waving as soon as anything other than the simplest case is considered.
Naval: Fantastic. I think we should let you go. We would love to continue the conversation at your leisure. Thank you, David.