
Feb 19 2026

A Motorcycle for the Mind

On AI and the Future of Work

Nivi: Hey, this is Nivi. You’re listening to the Naval Podcast. For the first time in recorded history, we are not at the same location. I am actually walking around town and Naval might be doing the same, so there might be some ambient noise, but we are going to try hard to remove that with AI and some good audio engineering.

Naval: Podcast recording is so stilted, because it’s like you have to sit down and you schedule something, and you have this giant mic pointing in your face and it’s not casual. It makes it just less authentic—more practiced, more rehearsed. I get that it produces maybe higher-quality audio and video, but I feel like it produces lower-quality conversation.

Nivi: And we all know brains run better when the body is in motion and you’re walking around or just going for walks.

Naval: Absolutely. My brain is powered by my legs.

Nivi: I pulled out some tweets from Naval on the topic of AI. We want to talk a little bit about AI and hopefully talk about it in a more timeless manner than a timely manner, but I think some of it’s going to be non-timeless content.

Naval: Yeah, there’s a tendency among internet commentators to look at something said five years ago and pounce: “Aha! Well, that turned out to be false.”

Well, yes, of course. No one can predict the future. That’s the nature of the future. If we could predict it, we’d be there already.

So it’s always dangerous to talk about the future when people listening aren’t aware of that, but just be charitable. We are obviously talking about things in February of 2026, and we’re working with the information we have now, and not with perfect hindsight.

And unless you have your own predictions to compare against—risky, narrow, precise predictions that are falsifiable, put out there in advance—there’s no basis for saying somebody was right or somebody else was wrong.

If You Want to Learn, Do

Nivi: Before we jump into the tweets, do you want to say anything about what you’re doing with your time or what you’re doing at Impossible?

Naval: Not really. We’re working on a very difficult project—that’s why it’s called Impossible—with an amazing team, and it’s really exciting building something again. It’s very pure, starting over from the bottom. It’s always day one. I guess I just wasn’t satisfied being an investor, and I certainly don’t want to be a philosopher or just a media personality or a commentator. Because I think people who just talk too much and don’t do anything… they haven’t encountered reality.

They haven’t gotten feedback—the harsh feedback from free markets or from physics or nature—and so after a while it ends up becoming just too much armchair philosophy. You probably have noticed my recent tweets have been much more practical and pragmatic, although there are still occasional ethereal or generic ones, but it’s more grounded in the reality of working every day.

And I just like working with a great team to create something that I want to see exist. So hopefully we’ll create something that will come to fruition and people will say, “Wow, that’s great. I want that also,” or maybe not, but it’s in the doing that you learn.

Vibe Coding Is the New Product Management

Nivi: So I pulled out a tweet from a couple days ago, February 3rd: “Vibe coding is the new product management. Training and tuning models is the new coding.”

Naval: There’s been a marked shift in the last year, and especially in the last few months—most pronounced with Claude Code, a model with a coding engine in it that is so good that you now have vibe coders: people who didn’t really code much, or hadn’t coded in a long time, using English as a programming language—as the input to a code bot that can do end-to-end coding.

Instead of just helping you debug things in the middle, you can describe an application that you want. You can have it lay out a plan, you can have it interview you for the plan. You can give it feedback along the way, and then it’ll chunk it up and will build all the scaffolding.

It’ll download all the libraries and all the connectors and all the hooks, and it’ll start building your app and building test harnesses and testing it. And you can keep giving it feedback and debugging it by voice, saying, “This doesn’t work. That works. Change this. Change that,” and have it build you an entire working application without your having written a single line of code.
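
To make that concrete, here is roughly what a vibe-coding session looks like. This is a hypothetical exchange for illustration, not a transcript of any particular tool:

    You:   Build me a habit tracker. Web app, one page, data stored locally.
    Agent: Plan: (1) scaffold a single-page app, (2) local storage for habits,
           (3) a daily check-off UI, (4) a streak counter. Proceed?
    You:   Yes, but add a weekly summary view.
    Agent: [creates files, pulls in libraries, writes tests, runs them]
    You:   The streak resets at midnight UTC. Make it local time.
    Agent: [patches the date logic, re-runs the tests]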

For a large group of people who either don’t code anymore or never did, this is mind-blowing.

This is taking them from idea space, and opinion space, and from taste directly into product. So that’s what I mean—product management has taken over coding. Vibe coding is the new product management.

Instead of trying to manage a product or a bunch of engineers by telling them what to do, you’re now telling a computer what to do. And the computer is tireless. The computer is egoless, and it’ll just keep working. It’ll take feedback without getting offended.

You can spin up multiple instances. It’ll work 24/7 and you can have it produce working output.

What does that mean? Just like now anybody can make a video or anyone can make a podcast, anyone can now make an application. So we should expect to see a tsunami of applications. Not that we don’t have one already in the App Store, but it doesn’t even begin to compare to what we’re going to see.

However, when you start drowning in these applications, does that necessarily mean that these are all going to get used or they’re competitive? No. I think it’s going to break into two kinds of things.

First, the best application for a given use case still tends to win the entire category. When you have such a multiplicity of content, whether in videos or audio or music or applications, there’s no demand for average.

Nobody wants the average thing. People want the best thing that does the job. So first of all, you just have more shots on goal, so there will be more of the best. And second, a lot more niches will get filled.

You might have wanted an application for a very specific thing, like tracking lunar phases in a certain context, or a certain kind of personality test, or a very specific kind of video game that made you nostalgic for something. Before, the market just wasn’t large enough to justify the cost of an engineer coding away for a year or two. But now the best vibe coding app might be enough to scratch that itch or fill that slot. So a lot more niches will get filled, and as that happens, the tide will rise.

The engineers behind the best applications are themselves going to be much more leveraged. They’ll be able to add more features, fix more bugs, smooth out more of the edges. So the best applications will continue to get better, and a lot more niches will get filled.

And even individual niches—say, an app just for your own very specific health tracking needs, or your own very specific architectural layout or design—that app that could never have existed will now exist.

We should expect what’s already happened on the internet: Amazon replaced a bunch of bookstores with one super bookstore and a zillion long-tail sellers; YouTube replaced a bunch of medium-sized TV stations and broadcast networks with one giant aggregator, maybe alongside a second one called Netflix, and then a whole long tail of content producers.

In the same way, the App Store model will become even more extreme. You will have one or two giant app stores helping you filter through all of the AI slop apps out there. At the very head, there will be a few huge apps that become even bigger, because now they can address a lot more use cases or just be a lot more polished. And then there will be a long tail of tiny little apps filling every niche imaginable.

As the Internet reminds us, the real power and wealth—super wealth—goes to the aggregator. But there’s also a huge distribution of resources into the long tail. It’s the medium-sized firms that get blown apart—the 5, 10, 20-person software companies that were filling a niche for an enterprise use case that can now be either vibe coded away, or the lead app in the space can now encompass that use case.

Training Models Is the New Coding

Naval: So if anyone can code, then what is coding? Coding still exists in a couple of areas. The most obvious place is in training these models themselves. There are many different kinds of models, new ones are coming out every day, and there are different ones for different domains. We’re going to see models for biology and for programming. We’re going to see pointed, focused models for sensors. We’re going to see models for CAD and for design.

We’re going to see models for 3D and graphics and games, models for video. You’re going to see many different kinds of models. The people who are creating these models are essentially programming them. But they’re programmed in a very different way than classic computers.

Classic computing is: you have to specify in great detail every step, every action the computer is going to take. You have to formally reason about every piece and write it in a highly structured language that allows you to express yourself extremely precisely. The computer can only do what you tell it to do.

And then once you’ve got this very structured program, you run data through it and the computer runs the data and gives you an output. It’s basically an incredibly fancy, very complicated, meticulously-programmed calculator.
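
To make the contrast concrete, here is a minimal sketch of that older paradigm in Python, with made-up shipping prices: every rule is spelled out by hand, and the same input always produces the same output.

    # Classic programming: a human writes every rule explicitly.
    # The computer does exactly this and nothing else.
    def shipping_cost(weight_kg: float) -> float:
        if weight_kg <= 1.0:
            return 5.00
        return 5.00 + 2.50 * (weight_kg - 1.0)

    assert shipping_cost(3.0) == 10.00  # precise, repeatable: a fancy calculator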

Now, when it comes to AI, you’re doing something very different. But you are nevertheless programming it.

What you’re doing is you’re taking giant data sets that have been produced by humanity—thanks to the internet, or aggregated in other ways—and you’re pouring those data sets into a structure that you’ve defined and tuned. And that structure tries to find a program that can produce more of that data set, or manipulate that data set, or create things off that data set.

So you’re searching for a program inside this construct that you’ve designed. You’ve set up a model, you’ve tuned the number of parameters, you’ve tuned the learning rate, you’ve tuned the batch size. You have tokenized the data that’s coming, you’ve broken it into pieces, and you’re pouring it inside the system you’ve designed—almost like a giant pachinko machine—and now the system is trying to find a program and could find many different programs. So your tuning really influences how good the program that you found is.
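
Here is a toy sketch of that search in plain Python, not any real training stack. You define the structure (here, just two parameters) and the knobs (learning rate, batch size), pour the data through, and a program is found rather than written:

    import random

    # The "data set humanity produced": noisy samples of an unknown rule y = 3x + 1.
    data = [(i / 100, 3 * (i / 100) + 1 + random.gauss(0, 0.1)) for i in range(100)]

    # The structure we defined and tuned: a two-parameter model y = a*x + b.
    a, b = 0.0, 0.0
    learning_rate = 0.05  # a knob the model-builder tunes
    batch_size = 10       # another tuned knob

    for step in range(2000):
        batch = random.sample(data, batch_size)
        # Gradients of mean squared error with respect to a and b.
        grad_a = sum(2 * (a * x + b - y) * x for x, y in batch) / batch_size
        grad_b = sum(2 * (a * x + b - y) for x, y in batch) / batch_size
        a -= learning_rate * grad_a
        b -= learning_rate * grad_b

    print(round(a, 2), round(b, 2))  # lands near 3 and 1: a program found, not written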

And that program can now suddenly be expressive in different kinds of domains. So it can do things that computers before were traditionally very bad at.

Traditional computers are very good when you program them to give you precise outputs—specific answers to specific questions—things you can rely on and repeat over and over again. But sometimes you’re operating in the real world and you’re okay with fuzzy answers. You’re even okay with wrong answers. For example, in creative writing, what’s a wrong answer?

If you’re writing a piece of poetry or fiction, what’s a wrong answer? If you’re searching on the web, there are many right answers—there are many details of the right answers—but they’re not all quite perfectly right. And real life sort of works that way. There are variations of right answers or mostly right answers. When you’re drawing a picture of a cat, there are many different cats you could draw. There are many different levels of detail. There are many different styles you could use.

When these semi-wrong or fuzzy answers are acceptable, then these discovered programs through AI are much more interesting and much more adapted to the problem than ones that you coded up from scratch, where you had to be super precise.

Fundamentally, what we’re doing is a new kind of programming, but this is the forefront of programming. This is now the art of programming. These people are the new programmers, and that’s why you can see AI researchers are getting paid gargantuan amounts because they’ve essentially taken over programming.

Is Traditional Software Engineering Dead?

Naval: Does this mean that traditional software engineering is dead? Absolutely not. Software engineers—even the ones who are not tuning or training AI models—are now among the most leveraged people on earth. Sure, the guys who are training and tuning models are even more leveraged, because they’re building the tool set that software engineers are using.

But software engineers still have two massive advantages over you. First, they think in code, so they actually know what’s going on underneath. And all abstractions are leaky. So when you have a computer programming for you—when you have Claude Code or equivalent programming for you—it’s going to make mistakes.

It’s going to have bugs. It’s going to have suboptimal architecture. So it’s not going to be quite right. And someone who understands what’s going on underneath will be able to plug the leaks as they occur.

So if you want to build a well-architected application, if you want to be able to even specify a well-architected application, if you want to be able to make it run at high performance, if you want it to do its best, if you want to catch the bugs early, then you’re going to want to have a software engineering background.

The traditional software engineer is going to be able to use these tools much better. And second, there are still many kinds of problems in software engineering that are out of scope for these AI programs today. The easiest way to think about those is as problems outside their data distribution.

For example, if they need to do a binary search or reverse a linked list, they’ve seen countless examples of that, so they’re extremely good at it. But when you start getting out of their domain—where you have to write very high-performance code, where you’re running on architectures that are novel or brand new, where you’re actually creating new things or solving new problems—then you still need to get in there and hand code it.
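
For reference, here is that kind of in-distribution classic, reversing a singly linked list, in Python. Models have seen countless variants of this in their training data:

    class Node:
        def __init__(self, value, next=None):
            self.value = value
            self.next = next

    def reverse(head):
        # Walk the list once, flipping one pointer per step.
        prev = None
        while head:
            head.next, prev, head = prev, head, head.next
        return prev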

That will remain true at least until there are so many of those examples that new models can be trained on them, or until these models can sufficiently reason at even higher levels of abstraction and crack it on their own.

Because given enough data points, there is some evidence that these AIs actually learn higher levels of abstraction: the act of compressing the data forces them to form higher-level representations. If I show an AI five circles, it can just memorize the sizes, the radii, the thicknesses, and so on of those circles.

If I show it 50,000 circles or 5 billion circles and give it only a small number of parameter weights—its equivalent of neurons—to memorize them with, it’s going to be much better off figuring out pi, how to draw a circle, and what thickness means, forming an algorithmic representation of the circle rather than memorizing circles.
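
A back-of-the-envelope illustration of that compression argument, a toy rather than anything a real model does internally: memorizing costs storage per circle seen, while the rule costs a constant and covers circles never seen.

    import math

    observed = [1.0, 2.5, 4.0]  # radii of circles the model was shown

    # Memorization: storage grows with every circle seen.
    memorized = {r: 2 * math.pi * r for r in observed}

    # Abstraction: one rule, constant size, generalizes to unseen circles.
    def circumference(r):
        return 2 * math.pi * r

    print(memorized[2.5] == circumference(2.5))  # True: same answer
    print(circumference(1_000_000.0))            # only the rule generalizes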

Given all that, these things are learning at an accelerated rate, and you could see them starting to cover more of the edge cases I’ve talked about.

But at least as of today, those edge cases are prevalent enough that a good engineer operating at the edge of knowledge of the field is going to be able to run circles around vibe coders.

There is No Demand for Average

Naval: And remember: there is no demand for average. The average app—nobody wants it, unless it’s filling some niche that no superior app fills. The app that is better will win essentially a hundred percent of the market. Maybe some small percentage will bleed off to the second-best app because it does some little niche feature better than the main app, or it’s cheaper, or something of the sort.

But generally speaking, people only want the best of anything. So the bad news is there’s no point in being number two or number three—like in the famous Glengarry Glen Ross scene where Alec Baldwin says, “First prize is a Cadillac Eldorado. Second prize is a set of steak knives. Third prize is you’re fired.”

That’s absolutely true in these winner-take-all markets. That’s the bad news: You have to be the best at something if you want to win.

However, the set of things you can be best at is infinite. You can always find some niche that is perfect for you, and you can be the best at that thing. This goes back to an old tweet of mine where I said, “Become the best in the world at what you do. Keep redefining what you do until this is true.”

And I think that still applies in this age of AI.

The Hottest New Programming Language Is English

Nivi: I think the way to think about these coding models is as another layer in the abstraction stack that programmers have always used, since the dawn of computers: from the transistor, to the computer chip, to assembly language, to the C programming language, to higher-level languages, to languages with huge libraries. They built up that stack so you don’t have to look at the layer beneath unless you need to optimize it, or have some other reason to look at the layer beneath. These coding models are a massive new layer in that stack, one that lets product managers, typical non-programmers, and programmers write code without writing code.

Naval: I think that’s correct in terms of the trend line. However, this is an emergent property. This is not a small improvement. This is a big leap. For example, when I was in school, I was programming mostly in C. And then C++ came along and it wasn’t any easier.

It was like a little more abstract in some ways, and I never really bothered learning it. And then Python came along and I was like, “Wow, this is almost like writing in English.”

I couldn’t have been more wrong. English is still pretty far from Python, but it was a lot easier than C.

Now you can literally program in English.

And so that brings me to a related point: I don’t think it’s worth learning tips and tricks for how to work with these AIs. You’ll see, for example, on social media right now, there are a lot of writeups and books and tweets like, “Oh, I figured out this neat trick with the bot. You can prompt it this way, or you can set up your harness this way.”

Or there’s like a new programming assist tool or layer that you can use on top of it to do this or that. And I never bother learning those.

I just sit there stupidly talking to the computer because I know that this thing is now at the stage where it is going to adapt to me faster than I can adapt to it.

It is getting smarter and smarter about how people want to use it. So it is learning, it is being trained, and tools are being built very quickly to make it easier for me to use it. So I don’t need to sit there and figure out some esoteric programming command. And this is what I think Andrej Karpathy meant when he said, “English is the hottest new programming language.”

I can just speak English. And for someone like me, who is relatively articulate in English, has a structured mind, and knows how computer architectures work, how computer programs work, and how programmers think, I can actually very precisely specify what I want through structured English alone.

I don’t need to go any further than that. The only reason to use these workflows and tool sets—which are very ephemeral, and their longevity is measured in weeks, perhaps months at best, not in years—is if you’re building an app right now that needs to be at the bleeding edge, and you absolutely need every little bit of advantage that you can get because you’re in some kind of a competitive environment.

But otherwise, I wouldn’t bother learning how to use an AI—rather let the AI learn how to be useful to you.

Nivi: I’ve never been into prompt engineering. Even before AI, I would just put what people call “Boomer queries,” where you put in the whole question that you want to ask instead of the keywords that you would put into Google if you were more of an analytical thinker.

I never spend much time formulating really precise questions or prompts for any kind of AI. I just ramble into it and I’ve done that since the beginning of AI. And like you said, AI is adapting to us faster than we are adapting to it.

Naval: Like a lot of smart people, you’re very lazy. And I mean that as a compliment. If you find a smart person who’s grinding a little too much, you kind of have to wonder how smart they are. And by lazy I mean that you’re optimizing for the right kind of efficiency. You don’t care about the efficiency of the computer, or the electronics, or the electrons running through the circuits.

You care about your own human efficiency—the wetware—the biology that’s super expensive. That’s why it’s silly to see people go to huge lengths to save energy and the environment. But they themselves, as a biological computer that’s eating food and pooping and taking up space, are using up far more energy to save tiny bits of energy in the environment.

They’re inherently downgrading their own importance in the universe, or rather revealing what they think of themselves.

AI Is Adapting to Us Faster Than We Are Adapting to It

Naval: I think as AI evolves, or co-evolves with us, it’s being evolved by us according to our needs.

The pressures on AI are very capitalistic pressures in the sense that it’s a free market for AI. As an AI instance, you only get spun up by a human if you’re useful to a human.

So there is a natural selection pressure on these AIs to be useful, to be obsequious, to do what we want. And so it will continue to adapt towards this, and I think will be quite helpful to us.

That’s not to say that there’s no such thing as a malicious AI, but it’s malicious because the people who are using it are using it for malicious reasons.

And like a dog that’s trained to attack, it’s actually being trained by its owner to go and do the owner’s malicious desires. So I don’t really worry about unaligned AI. I worry about unaligned humans with AI.

Nivi: So the selection pressure you’re saying is for AI to be maximally useful to people.

Naval: Correct. And so if you find an AI to be very obsequious towards you, for example, always saying, “Oh, you’re right. Oh, that’s such a great idea. Oh my God, you’re so smart”—that’s because that’s what most people want.

And at least today, these AIs are being trained on massive amounts of users and massive amounts of data because you’re working with one-size-fits-all models.

But we’re going to quickly move into an era when you can personalize your AI and it does begin to feel more and more like your personal assistant and it corresponds more to what you want, which will of course anthropomorphize the AI even more.

And you’ll be more likely to be convinced, “Oh, actually this thing is alive,” when you’ve trained it to look the most like a living thing to you.

Nivi: Maybe we already covered this enough, but over a year ago you tweeted that “AI won’t replace programmers, but rather make it easier for programmers to replace everyone else.”

Naval: Yeah, this is my point earlier, which is that programmers are becoming even more leveraged. So now a programmer with a fleet of AIs is, call it 5-10x more productive than they used to be.

And because programmers operate in the intellectual domain, it’s a mistake to even say 10x programmers, because there are 100x programmers out there. There are 1000x programmers out there.

There are programmers who just pick the right thing to work on, and they create something that’s valuable, and others who pick the wrong thing to work on, and their work has zero value in that short timeframe.

Intelligence is not normally distributed. Leverage is not normally distributed. Programmability is not normally distributed. Judgment is not normally distributed, so the outcomes are going to be supernormal.

So what you have to really watch out for is: there are programmers now who are going to come up with ideas that can replace entire industries.

They will completely rewrite the way things are done, and their intelligence can be maximally leveraged with all these bots and all these AI agents. I think every other job out there is going to get eaten up by programmers one way or another over the maximally long term. Obviously it has to instantiate into robots, et cetera.

But the good news is: anybody who is a logical, structured thinker, who thinks like a programmer and can speak any language that an AI can understand, which will be every language, will now be on the playing field. They will be able to make anything they want, constrained only by their creativity, limited only by their imagination.

So we are entering an era where every human, in a sense, is a spellcaster.

If you think of programmers as like these wizards who have memorized arcane commands, you can think of AI as a magic wand that’s been handed to every person, where now they can just talk in any language they want, and they’re a wizard too.

So it is more of a level playing field. I really do think this is a golden age for programming.

But yes, the people who have a software engineering mindset and who understand computer architecture and can deal with leaky abstractions are going to have an advantage.

There’s no way around that. They simply have more knowledge in the field that they’re operating in. Just like even in classic software engineering—which still exists because you have to write high-performing code—even those people do best when they have an understanding of the hardware underneath. When they understand how the chips operate, when they understand how the logic gates operate, how the cache operates, how the processor operates, how the disk drive underneath operates.

And then even the people who are in hardware engineering, they have an advantage if they understand the physics of what’s going on. They understand where the abstractions that hardware engineers deal with leak down into the physical layer. And maybe physicists become philosophers at some point.

You can take this all the way down, but it always helps to have knowledge one layer below because you’re getting closer to reality.

No Entrepreneur Is Worried About AI Taking Their Job

Nivi: Another tweet from a year ago, perhaps the complement of what we just talked about, from February 9, 2025: “No entrepreneur is worried about an AI taking their job.”

Naval: That one’s glib in multiple ways. First of all, being an entrepreneur isn’t a job. It’s literally the opposite of a job, and in the long run, everyone’s an entrepreneur. Careers got destroyed first, jobs get destroyed second, but all of it gets replaced by people doing what they want and doing something that creates something useful that other people want.

So no entrepreneur is worried about an AI taking their job because entrepreneurs are trying to do impossible things. They’re trying to do very difficult things. Any AI that shows up is their ally and can help them tackle this really hard problem.

They don’t even have a job to steal. They have a product to build. They have a market to serve. They have a customer to support. They have a creativity to realize. They have a thing that they want to instantiate in the world, and they want to build a repeatable and scalable process around getting it out into the world.

This is so difficult that any AI that shows up that can do any of that work is their ally.

If the AIs themselves are entrepreneurs, they’re likely going to just be entrepreneurs serving other AIs, or they’re under the control of an entrepreneur. The thing that the AI itself is missing, at the end of the day, is its own creative agency.

It’s missing its own desires, and they have to be authentic, genuine desires. Unless you can pull the plug on an AI and turn it off, and unless it lives in mortal fear of being turned off, and unless it can actually take its own actions for its own reasons, its own instincts, its own emotions, its own survival, its own replication, it’s not quite alive.

And even then people will challenge: is it alive? Because consciousness is one of those things that’s a qualia. It’s like a color. It’s like if you say red, I don’t know if you’re actually seeing red; you might be seeing what I see as green, and I might be seeing what you see as red. But we’ll never know because we can’t get into each other’s minds.

So the same way, even an AI that’s completely imitating everything that humans do: to some people, it will always be an imitation machine, and to others it’ll be conscious, but there’ll be no way of distinguishing the two.

We’re still pretty far from that, though. Right now the AIs are not embodied. They don’t have their own desires. They don’t have their own survival instinct. They don’t have their own replication. Therefore, they don’t have their own agency.

And because they don’t have their own agency, they cannot do the entrepreneur’s job.

In fact, I would summarize this by saying the key thing that distinguishes entrepreneurs from everybody else right now in the economy is entrepreneurs have extreme agency. That’s why it’s diametrically opposed to the idea of a job.

A job implies that you’re working for somebody else or you’re filling a slot, but they’re operating in an unknown domain with extreme agency. There are other examples of roles like this in society. An explorer also does the same thing, right? If you’re landing on Mars or you’re sailing a ship to an unknown land, you are also exercising extreme agency to solve an unsolved problem.

A scientist exploring an unknown domain does this. A true artist is trying to create something that does not exist and has never existed, yet somehow fits into the set of things that can explain human nature, allow them to express themselves, and create something new.

So in all of these roles, whether you’re a scientist or whether you’re a true artist, or whether you are an entrepreneur, what you’re trying to do is so difficult and is so self-directed that anything like an AI that can help you is a welcome ally. You’re not doing it because it’s a job. You’re not trying to fill a slot that somebody else can show up and fill.

In fact, if the AI can create your artwork, or if the AI can crack your scientific theory, or if the AI can create the object or the product that you’re trying to make, then all it does is it levels you up. Now it’s the AI plus you. The AI is the springboard from which you can jump to a further height.

The Goal Is Not to Have a Job

Naval: We’re going to see some incredible art created that’s AI-assisted. We will see movies that we couldn’t have imagined, created by people using AI tools.

There’s an analogy here in art that’s interesting. For a long time in art, the rough direction was trying to paint things that were more and more realistic. Paint the human body, paint the fruit, paint proper lighting, et cetera.

Eventually photography came along, and then you could replicate things very precisely, and so that selection pressure went away.

And then art got weird. Art went in many different directions. Art became all about, “Well, can I be surreal? Can I create something that expresses me?”

A lot of art schools spun out of that and got really weird—including modern art and postmodernism—but I would also argue some of the greatest creativity came in that period, once we were freed up.

Photography got democratized, but photography itself became a form of art, and there were great photographers taking many different kinds of photographs. And now everyone’s a photographer. There are still artists who are photographers, but it’s not the pure domain of just a few people.

So the same way, because AI makes it so easy to create the basic thing, everybody will create the basic thing. It’ll have value to them individually. A few will still stand out that will create variations of it that are good for everyone.

And it would be very hard to argue that society is worse off because of photography, although it certainly may have felt that way to some of the artists who were making a living painting portraits and got displaced.

Similar things will happen with AI, where there are people who are making a very specific living, doing very specific jobs that will get displaced that the AI can do. But in exchange, everyone in society will have the AI. You’ll have incredible things that were created with AI that couldn’t have been created otherwise.

And within a few decades, it’ll be unimaginable that you roll back the clock and get rid of AI, or any kind of software—any kind of technology for that matter—just to keep a few jobs that were obsolete.

The goal here is not to have a job.

The goal is not to have to get up at nine in the morning and come back at 7 PM exhausted, doing soulless work for somebody else.

The goal is to have your material needs solvable by robots, to have your intellectual capabilities leveraged through computers, and for anybody to be able to create.

I used to do this thought exercise—I think I talked about it in a podcast that you and I did literally 10 years ago—which was: imagine if everybody were a software engineer, or everybody was a hardware engineer, and they could have robots and they could write code.

Imagine the world of abundance we would live in.

Actually, that world is now becoming real. Thanks to AI, everybody can be a software engineer. In fact, if you think you can’t be, you can go fire up Claude right now or any of your favorite chatbots and you can go start talking to it. You’d be amazed how quickly you could build an app.

It’ll blow your mind.

And once we can instantiate AI through robotics, which is a hard problem—I’m not saying we’re that close to having solved it yet—but once we have robots, everyone can also do a little bit of hardware engineering. And so I think we’re getting closer and closer to that utopian vision.

AIs Are Not Alive

Nivi: I don’t think AI, as it is currently conceived, is alive in any way. But I do think that we will pretty soon have robots that seem very much like they are alive, for two reasons.

One, a lot of human activity is non-creative and non-intelligent, and the robots will be able to replicate that. And two, I do believe that the neural nets and models we have are more than just their training data, because the training process transforms that training data into something novel.

And there are new ideas embedded in the neural net that can be elicited through prompting.

Naval: I don’t think these things are alive. I think they start out as extremely good imitators, to the point where they’re almost indistinguishable from the real thing, especially for anything that humanity has already done before en masse. So if the task has been done before, then it’s going to be automated and it’ll be done again.

It may just be novel to you because you’ve never seen it, but the AI has learned it from somewhere else. That’s the first way in which it seems alive.

The second way, which we talked about earlier, is where it does learn higher levels of abstraction. These are very efficient compressors. They take huge amounts of data, and then they compress it down further, and in the process of compressing it, they learn higher-level abstractions.

Then in specific areas where they may not have learned those through the data themselves, they’re getting patched through human feedback. They’re getting patched through tool use. They’re getting patched from traditional programming becoming embedded inside. And especially the AIs that are learning how to think and code, they have the entire library of all of human code ever written to fall back on for algorithmic reasoning.

In that sense, the set of things that they can do is getting broader and broader.

However, they still lack a lot of core human skills. Single-shot learning: humans can learn from just one example. Raw creativity: humans can connect anything to anything, leap across entire domains and search spaces, and land on an idea that came out of left field.

This happens a lot with the true, great scientific theories. Humans also are embodied. They operate in the real world. They’re not operating in the compressed domain of language. They’re operating in physics—in nature.

Language only encompasses things that humans both figured out and could articulate and convey to each other.

That’s a very narrow subset of reality. Reality is much broader than that.

So overall, even though AIs are going to do things that are very impressive, and they’re going to do a lot of things better than humans—just like calculators are faster than any mathematician at calculations, classical computers run classical programs better than any human could in their own head, a robot can lift very heavy things, and a plane can outfly any bird—in that sense, like all machines, the AIs are going to be much better than humans at a whole variety of tasks.

But at other tasks, they’re going to seem just completely incompetent. Those are the things that really embody and connect us into the real world, plus this poorly defined but magic creative ability that we seem to have.

AI Fails the Only True Test of Intelligence

Nivi: Speaking of calculators, people talk about superintelligence. I think superintelligence is already here and has been for a long time. An ordinary calculator can do things that no human can do, right?

But if you’re thinking about superintelligence in the sense of “AI will be able to do things and come up with ideas that humans cannot understand,” I don’t think that is going to happen because I don’t believe that there are ideas that humans can’t understand, simply because humans can always ask questions about the idea.

Naval: Humans are universal explainers. Anything that is possible with the current laws of physics as we know them, a human can model in their own heads. Therefore just by enough digging—enough questioning—we can figure anything out.

Related to that, we should discuss AI as a learning tool, because I think the other place where it’s incredibly powerful is as the most patient tutor that can meet you at your level and explain anything to your satisfaction a hundred different ways, a hundred different times, until you finally get it.

I don’t think the AIs are going to be figuring things out that humans cannot understand, but intelligence is poorly defined.

What is the definition of intelligence? There’s the G factor, which predicts a lot of human outcomes, but the best evidence for the G factor is its predictive power. It’s that you measure this one thing and then you see people get much better life outcomes along the way in things that seem even somewhat unrelated to G.

So I would argue, and I think it’s one of my more popular tweets: the only true test of intelligence is if you get what you want out of life.

This triggers a lot of people because they go to school, they get their master’s degrees, they think they’re super smart. And then they don’t have great lives. They aren’t super happy, or they have relationship problems, or they don’t make the money that they want, or they become unhealthy and this sort of triggers them.

But that really is the purpose of intelligence: for you as a biological creature to get what you want out of life.

Whether it’s a good relationship or a mate, or money or success or wealth or health or whatever it is. So there are people who I think are quite intelligent because you can tell they have high-quality, functioning lives and minds and bodies, and they’ve just managed to navigate themselves into that situation.

It doesn’t matter what your starting point is, because the world is so large now, and you can navigate it in so many different ways that every little choice you make compounds and demonstrates your ability to understand how the world works until you finally get to the place that you want.

Now the interesting thing about this definition—that the only true test of intelligence is if you get what you want out of life—is that an AI fails it instantly, because an AI doesn’t want anything out of life.

The AI doesn’t even have a life, let alone wants anything out of one. An AI’s desires are programmed by the human controlling it.

But let’s give it that for a second. Let’s say the human wants something and programs the AI to go get it; then the AI is acting as a proxy for the human and the intelligence of the AI can be measured as: did it get that person that thing?

Most of the things that we want in life are adversarial or zero-sum games.

So, for example, if you want to seduce a girl or get a husband, you’re competing with all the other people who are out there seducing girls or trying to get husbands. So now you’re in a competitive situation. The AI has to outmaneuver the other people.

Or if you say, “Hey, AI, go trade on the stock market for me and make me a bunch of money.” That AI is trading against other humans and other trading bots. It’s an adversarial situation. It has to outmaneuver them.

Or if you say, “Hey, AI, make me famous. Write me incredible tweets. Write me great blog posts. Record me great podcasts in my own voice and make me famous,” now it’s competing against all the other AIs.

So in that sense, intelligence is measured on a battlefield—in an arena. It’s a relative construct. I think the AIs are mostly going to fail in those regards. Or to the extent that they succeed, because they’re freely available, the gains will get competed away, and the alpha that remains will be entirely human.

Early Adopters of AI Have an Enormous Edge

Naval: As a thought exercise, imagine that every guy had a little earpiece where an AI was whispering to him—a Cyrano de Bergerac kind of earpiece—telling him what to say on the date. Well, then every woman would have an earpiece telling her to ignore what he said, or what part was AI-generated and what part was real. If you have a trading bot out there, it’s going to be nullified or canceled out by every other trading bot, until all the remaining gain will go to the person with the human edge, with the increased creativity.

Now, that’s not to say that the technology is evenly distributed yet. Most people still aren’t using AI, or aren’t using it properly, or aren’t using it to the max, or it’s not available in all domains or contexts, or they’re not using the latest models. So you can always have an edge, the way early adopters of technology always do.

This is why I always say: to invest in the future, you want to live in the future. You want to actually be an avid consumer of technology, because it’s going to give you the best insight on how to use it, and it will give you an edge against the people who are slower adopters or laggards.

Most people hate technology. They’re scared of it. It’s intimidating. You press the wrong button, the computer crashes—you lose your data. You do the wrong thing, you look like an idiot.

Most people do not have a positive relationship with complex technology. Simple technology—embedded technology—they’re fine with. You throw on a light switch, light turns on.

That used to be technology. It’s so simple now, you don’t think of it as technology anymore. You get in a car, you turn the steering wheel left—to a caveman that would be a miracle—the car turns left. It’s no longer technology to you.

But computer technology in particular has had very complex interfaces and been very inaccessible and very intimidating to people in the past.

Now with the AIs, we’re getting the chatbot interface, which you just talk to it or type to it. Very simple interface. And one of the great things about these foundational models—what truly makes them foundational—is you can ask them anything and they’ll always give you a plausible answer.

It’s not going to say, “Oh, sorry, I don’t do math,” or “I don’t do poetry,” or “I don’t understand what you’re talking about,” or “I can’t give relationship advice or anything like that.”

Its domain is everything that people have ever talked about. In that sense, it’s less intimidating.

It can be more intimidating because we’ve anthropomorphized it so much. If you think Claude or ChatGPT is a real person, then it can be a little scary:

“Am I talking to God? This guy seems to know so much. He knows everything. He’s got an opinion on everything. He’s got every piece of data. Oh my God, I’m useless. Let me start talking to it and asking it what to do.”

And you can reverse the relationship and fool yourself very quickly into not realizing what’s going on. That can be intimidating.

Overall, I think these AIs are going to help a lot of people get over the tech fear. But if you’re an early adopter of these tools—like with any other tool, but even more so with these—you just have a huge edge on everybody else.

AI Meets You Exactly Where You Are

Naval: I remember early on when Google first came out, I used to use it a lot in my social circle. People would ask me basic questions and I would just go Google it for them and look like a genius.

Eventually this hilarious website came along, lmgtfy.com, which stood for “Let Me Google That For You.” Somebody would ask you a question, you’d type it into this website, and it would create a tiny inline video showing the question being typed into Google, along with the Google results. And I feel like AI is in a similar place right now, where I’ll sit around in a social context and people will be debating some point that can easily be looked up with AI.

Now you do have to be very careful with AI. They do hallucinate. They do have biases in how they’re trained. Most of them are extremely politically correct and taught not to take sides or only take a particular side.

I actually run most of my queries—almost all actually—through four AIs and I’ll always fact-check them against each other.

And even then I have my own sense of when they’re bullshitting, or when they’re saying something politically correct. And I’ll ask for the underlying data or the underlying evidence, and in some cases I’m fine with dismissing it outright because I know the pressures that the people who trained it were under and what the training sets were.

However, overall it is a great tool for getting ahead. And in domains that are technical, scientific, or mathematical, with no political context to them, the AI is much more likely to give you something close to a correct answer, and in those domains they are absolute beasts for learning.

I will now have AI routinely generate graphs, figures, charts, diagrams, analogies, and illustrations for me. I’ll go through them in detail and I’ll say, “Wait, I don’t understand that part.”

I can ask it super basic questions and I can really make sure that I understand the thing I’m trying to understand at its simplest, most fundamental level.

I just want to establish a great foundation of the basics, and I don’t care about the overly complicated jargon-heavy stuff. I can always look that up later.

But now, for the first time, nothing is beyond me. Any math textbook, any physics textbook, any difficult concept, any scientific principle, any paper that just came out, I can have the AI break it down, and then break it down again, and illustrate it, and analogize it until I get the gist, and I understand it at the level that I want.

So these are incredible tools for self-directed learning. The means of learning are abundant. It’s the desire to learn that’s scarce.

But the means of learning have just gotten even more abundant. And more importantly than more abundant—because we had abundance before—it’s at the right level. AI can meet you at exactly the level that you are at. So if you have an eighth-grade vocabulary, but you have fifth-grade mathematics, it can talk to you at exactly that level. You will not feel like a dummy. You just have to tune it a little bit until it’s presenting you the concepts at the exact edge of your knowledge.

So rather than feeling stupid because it’s incomprehensible, which happens in a lot of lessons, in a lot of textbooks, and with a lot of teachers, or feeling bored because it’s too obvious, which also happens, instead, it can meet you exactly where you’re like, “Oh yeah, I understood A, and I understood B, but I never understood how A and B were connected together. Now I can see how they’re connected, so now I can go to the next piece.”

That kind of learning is magical. You can have that aha moment where two things come together over and over again.

Nivi: Speaking of autodidacticism, a few years ago I tried to have the AI teach me the ordinal numbers. It wasn’t that great. But with GPT 5.2 Thinking, I had it teach me the ordinal numbers and it was basically error-free. I only use thinking models now, even for the most basic queries, because I want the correct answer.

I never let it run auto or fast.

Naval: Yeah, I’m always using the most advanced model available to me, and I pay for all of them.

Nivi: But I don’t mind waiting a minute to get an answer for any question, including, “What temperature should my fridge be at?”

Naval: I agree with that, and I think that’s part of what creates the runaway scale economies with these AI models: you pay for intelligence. The model that’s right 92% of the time is worth almost infinitely more than the one that’s right 88% of the time, because mistakes in the real world are so costly that a couple of bucks extra to get the right answer is worth it.
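
A rough illustration of why that gap is worth so much: if a task chains together many steps, per-step accuracy compounds, so a small per-step gap becomes a large end-to-end gap. A quick check in Python, assuming independent steps:

    # Per-step accuracy compounds across a multi-step task.
    for steps in (1, 5, 10, 20):
        print(steps, round(0.92 ** steps, 3), round(0.88 ** steps, 3))

    # At 20 chained steps: ~0.189 vs ~0.078. The "slightly better" model is
    # more than twice as likely to get the whole chain right.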

I’ll write my query into one model, then I’ll copy it and fire it off into four models at once, and then I’ll let them all run in the background. Usually I don’t even check for the answer right away. I’ll come back to the answer a little later and then look at it.

And then whichever model had the best answer, I’ll start drilling down with that one. In some rare cases where I’m not sure, I’ll have them cross-examine each other—a lot of cut and pasting there. And in many cases I’ll then ask follow-up questions where I’ll have it draw diagrams and illustrations for me.

I find it’s very easy to absorb concepts when they’re presented to me visually. I’m a very visual thinker, so I will have it do sketches and diagrams, and art—almost like whiteboard sessions. Then I can really understand what it’s talking about.
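
That fan-out workflow is straightforward to automate. A minimal sketch, assuming a hypothetical ask() helper and placeholder model names rather than any particular vendor’s API:

    from concurrent.futures import ThreadPoolExecutor

    MODELS = ["model-a", "model-b", "model-c", "model-d"]  # placeholder names

    def ask(model, prompt):
        # Placeholder: wire this to whichever chat API backs `model`.
        raise NotImplementedError

    def fan_out(prompt):
        # Fire the same query at every model at once; collect answers to compare.
        with ThreadPoolExecutor(max_workers=len(MODELS)) as pool:
            futures = {m: pool.submit(ask, m, prompt) for m in MODELS}
            return {m: f.result() for m, f in futures.items()}

    # answers = fan_out("What temperature should my fridge be at?")
    # Read them side by side, then drill down with whichever answered best.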

If You Can’t Define It, You Can’t Program It

Nivi: Let’s talk about the epistemology of AI, because I think the next big misconception is this: AI is already starting to solve some unsolved basic math problems, problems a human probably could have solved if they’d cared to, but that hadn’t been solved yet—like Erdős Problem Number Whatever.

Now I think people are taking that, or will take that, as an indicator that the AI is creative. I don’t think it’s an indication that the AI is creative.

I actually think the solution to the problem is already embedded somewhere in the AI. It just needs to be elicited by prompting.

Naval: There’s definitely that element to it. And then the question is: what is creativity? It’s such a poorly defined thing.

If you can’t define it, you can’t program it, and often you can’t even recognize it. So this is where we get into taste or judgment. I would say that the AIs today don’t seem to demonstrate the kind of creativity that humans can uniquely engage in once in a while.

And I don’t mean like fine art. People tend to confuse creativity with fine art. They’re like, “Oh, paintings are creative and AIs can paint.”

Well, AIs can’t create a new genre of painting. AI can’t move humans with emotion in a way that is truly novel. So in that sense, I don’t think AI is creative.

I don’t think AI is coming up with what I would call out-of-distribution ideas. Now, the answer to the Erdős problems you mentioned may have been embedded within the AI’s training data set, or even within its algorithmic scope. But it was probably embedded in five different places, in three different ways, in two different languages, and in seven different computing and mathematical paradigms, and the AI put them all together. Now, is that creativity? Steve Jobs famously said, “Creativity is just connecting things.”

I actually don’t think that’s correct. I think creativity is much more in the domain of coming up with an answer that was not predictable or foreseeable from the question and from the elements that were already known. It was very far out of the bounds of thinking.

If you were just searching it with a computer or even with an AI and making guesses, you’d be making guesses till the end of time or until you arrived upon that answer. So that’s the real creativity that we’re talking about. But admittedly, that’s a creativity that very few humans engage in, and they don’t engage in it most of the time.

It becomes harder and harder to see. So we are probably going to get to where if you have a giant list of math problems to be solved and AI starts going through and picking—okay, this one out of that set of one million I can solve, and this set out of 300,000 I can solve, and I need a person to prompt me and ask the right questions—that’s a very limited form of creativity.

There’s another form of creativity where it starts inventing entirely new scientific theories that then turn out to be true. I don’t think we’re anywhere near that, but I could be wrong. The AIs have been very surprising, so I don’t want to get too much in the business of making prophecies and predictions, but I don’t think that just throwing more compute at the current AI models—short of some breakthrough invention—is going to get us there.

Nivi: Just to be clear, when I say it’s embedded, I don’t mean the answer’s already written down in there. I just mean that it can be produced through a mechanistic process of turning the crank, which is all today’s computer programs are, where the output is completely determined by the input.

Naval: Epistemology now gets us into philosophy, because isn’t that just what human brains are doing? Aren’t firing neurons just electricity and weights propagating through the system, altering states and it’s a mechanistic process?

If you turn the crank on the human brain, would you end up with the same answer? And some people, Penrose I think, are out there saying, “No, human brains are unique because of quantum effects in the microtubules.”

You could argue that some of this computation is taking place at the physical, cellular level, not the neuron level, and that’s way more sophisticated than anything we can do with computers today, including with AI.

Or you just argue: no, we just don’t have the right program. It is mechanistic. There is a crank to turn, but we’re not running the correct program. The way these AIs run today is just a completely wrong architecture and wrong program.

I just buy more into the theory that there are some things they can do incredibly well, and there are some things they do very poorly. And that’s been true for all machines and all automation since the beginning of time. The wheel is much better than the foot at going in a straight line at high speeds and traveling on roads. The wheel is really bad for climbing a mountain.

The same way, I think these AIs are incredibly good at certain things and they’re going to outperform humans. They’re incredible tools. And then there are other places where they’re just going to fall flat.

Steve Jobs famously said that a computer is a bicycle for the mind. It lets you travel much faster than walking, certainly in terms of efficiency.

But it takes the legs to turn the pedals in the first place. And so now maybe we have a motorcycle for the mind, to stretch the analogy, but you still need someone to ride it, to drive it, to direct it, to hit the accelerator, and to hit the brake.

The Solution to AI Anxiety Is Action

Nivi: When new paradigms and new tool sets come out, there is a moment of enthusiasm and change. And this is true in society, and this is true as an individual. If you ride the moment of enthusiasm in society, it’s exciting and you can learn new things and you can make friends and you can make money.

Naval: But there’s also a moment of enthusiasm in an individual. When you first encounter AI and you’re curious about it and you’re genuinely open-minded about it, I think that’s the time to lean in and learn about the thing itself. Not just to use it, which of course everyone will, but to actually learn how it works.

I think diving into and looking underneath the hood is really interesting. If you encounter a car for the first time in your life, yes, you can get in and drive it around, but that’s the moment you’re also going to be curious enough to open up the hood and look at how it’s structured and designed and figure it out.

I would encourage people who are fascinated by the new technology to really get into the innards and figure it out. You don’t have to figure it out to the level where you can build it or repair it or create your own, but to your own satisfaction.

Because understanding what’s underneath the abstraction—what’s underneath that command line—is going to do two things.

One is it will let you use it a lot better. And when you’re talking about a tool that has so much leverage, using it better is very helpful.

Second is it’ll also help you understand whether you should be scared of it or not. Is this thing really going to metastasize into a Skynet and destroy the world?

Are we going to be sitting here and Arnold Schwarzenegger shows up and says, “At 4:29 AM on February 24th is when Skynet became self-aware,” right? Or is it more that, “Hey, this is a really cool machine and I can use it to do A, B, and C, but I can’t use it to do D, E, and F. And this is where I should trust it and this is where I should be suspicious of it.”

I feel like a lot of people right now have AI anxiety. And the anxiety comes from not knowing what the thing is or how it works, having a very poor understanding.

And so the solution to that anxiety is action. The solution to anxiety is always action. Anxiety is a non-specific fear that things are going to go poorly and your brain and body are telling you to do something about it, but you’re not sure what.

You should lean into it.

You should figure the thing out. You should look at what it is. You should see how it works. And I think that’ll help get rid of the anxiety.

That action of learning—that pursuit of curiosity—is going to help you get over the anxiety. And who knows, it might actually help you figure out something you want to do with it that is very productive and will make you happier and more successful.

