More Compute Power Doesn’t Produce AGI
Naval: The artificial general intelligence crew gets it completely wrong, too: “Just add more compute power and you’ll get intelligence,” when we don’t even know what it is underneath that makes us creative and lets us come up with good explanations.
People talk a lot about GPT-3, the large language model that OpenAI put out, which is a very impressive piece of software. They say, “Hey, I can use GPT-3 to generate great tweets.” That’s because, first, as a human you’re selecting the good tweets out of all the garbage it generates. Second, it’s using some combination of plagiarism, synonym matching, and so on to come up with plausible-sounding stuff.
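A toy simulation makes that selection effect concrete. This is a minimal sketch in Python with purely illustrative names; the random “quality” score just stands in for how good any one sample happens to be, and nothing here calls a real GPT-3 API:

```python
import random

# Toy model of the selection effect: a mediocre generator plus a human
# curator picking the best of many samples. The random "quality" score
# is purely illustrative; nothing here calls a real GPT-3 API.

def generate_candidate():
    """Stand-in for one generated tweet; quality is random and mostly mediocre."""
    return random.gauss(mu=0.3, sigma=0.2)

def best_of(n):
    """The human-in-the-loop step: sample n candidates, keep only the best."""
    return max(generate_candidate() for _ in range(n))

random.seed(0)
trials = 10_000
avg_single = sum(generate_candidate() for _ in range(trials)) / trials
avg_curated = sum(best_of(20) for _ in range(trials)) / trials
print(f"average raw sample: {avg_single:.2f}")
print(f"average best-of-20: {avg_curated:.2f}")
# The curated stream scores far higher than the generator's true average.
```

The point of the sketch is that the curation step, not the generator, supplies most of the apparent quality.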
The easiest way to see that what it’s generating doesn’t actually make any sense is to ask it a follow-up question. Take a GPT-3-generated output and ask, “Why is that the case?” Or ask it to make a prediction based on that output, and watch it completely fall apart, because there’s no underlying explanation.
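Here is a minimal sketch of that follow-up test, written against a hypothetical complete() stub standing in for a GPT-3 call; the prompts and names are illustrative, not any real API:

```python
# Sketch of the follow-up probe described above, using a hypothetical
# complete(prompt) stub standing in for a GPT-3 call; swap in a real
# client to run it against an actual model.

def complete(prompt: str) -> str:
    """Hypothetical stand-in for a GPT-3 completion call."""
    return "(model output would appear here)"

claim = complete("Write an insightful tweet about startups.")

# Probe 1: ask for the reasoning behind the claim.
probe = complete(f"{claim}\n\nWhy is that the case? Explain step by step.")

# Probe 2: ask for a prediction that follows from the claim.
prediction = complete(f"{claim}\n\nBased on this, predict what happens next and why.")

# A system with a genuine explanatory model gives answers consistent with
# its original claim; a pattern-matcher tends to produce fluent text that
# contradicts itself or merely restates the claim.
print(probe)
print(prediction)
```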
It’s parroting. It’s brilliant Bayesian reasoning: it’s extrapolating from what humans have already generated out there on the web. But it doesn’t have an underlying model of reality that can explain the seen in terms of the unseen. And I think that’s critical.
That is something humans do uniquely. No other creature, no other computer, no other intelligence, biological or artificial, that we have ever encountered does it.
And not only do we do it uniquely, but if we were to meet an alien species that also had the power to generate good explanations, there would be no explanation they could generate that we could not understand.
We are maximally capable of understanding. There is no concept possible in this physical reality that a human being, given sufficient time, resources, and education, could not understand.