Naval

May 4 2026

‘Nothing Ever Happens’ Is Over

Startup design, drone warfare, and the hardware renaissance

Nivi: You’re listening to the Naval Podcast. This is Nivi. There’s no set topic for this episode—it will be a potpourri.

The Fully Interconnected Startup

Nivi: Naval, how are you using AI at Impossible, your current company, to change how you manage the business? Or are you guys just too small and a bunch of brilliant independent contributors where it’s not having an effect on how you actually run the company?

Naval: It’s more the latter. We’re a hub-and-spoke architecture. My co-founder is the CEO, and everyone reports to him. He’s the one product manager who runs around with everything in his head, trying to bring this whole impossible task together. Everybody interfaces through him, and people are pretty smart.

We keep a very flat structure. We try to push people to communicate with each other directly. We don’t even use Slack, if that gives you a sense. So we’re not using AI as an explicit communication method internally. But implicitly, AI is still very helpful. So we’re not like Square. I know Jack Dorsey has reorganized Square around AI, and maybe Tobi at Shopify is doing that. There are some guys who are very good at organizational management, and they do these kinds of experiments.

I’ve never been good at organizational management. I actually hate organizational management because I hate organizations. I hate large groups. I think it’s just so hard to get things done and you’re not dealing with the best and the brightest and there’s always politics. So I just prefer keeping groups small.

And we count on people to just operate independently and communicate with each other as needed. Like I said, we don’t even use Slack. We don’t use any project management software. I think it’s just GitHub.

And then when people want to talk to each other, they just text each other. Literally—they talk one-on-one. And sometimes it’s chaotic and they have to figure out how to navigate their way to the right person. But that’s part of the skill set.

It’s sort of like in computer networks. How do you organize a network for efficiency? Because at some point the communication overhead gets very high. The traditional answer is hierarchy. It’s a tree system. It’s like there’s one person at the top—the CEO—then they have a bunch of VPs or SVPs reporting to them. Then you have a bunch of VPs below that, and then middle managers and so on, and that keeps things organized and marching in one direction.

But it’s stifling. There’s a lot of politics. You can’t talk to people two or three levels below you unless you go founder mode like Elon or Brian Chesky, and then it’s celebrated as some wonderful achievement that all of a sudden the CEO is allowed to talk to an engineer. You can tell I’m being sarcastic there. Like I just think that’s a terrible way to operate, but it’s a requirement of size, and we’re just not at that size, so I don’t like it.

Instead, I like the fully interconnected graph. And that’s insane: a fully interconnected graph is everyone talking to anyone, with a light hub-and-spoke on top, with one person in the middle who’s trying to keep everything in their head.

The thing about a fully interconnected graph in networking is that every node has to be highly intelligent. So that’s what you do. You hire highly intelligent people who can operate in a fully interconnected graph, and if they can’t navigate their way to the person they need to talk to, to solve a specific problem—or if they can’t cooperate or communicate with other people—then they don’t belong in this kind of an organization, and they should just go and find a hierarchical organization where they’re going to be more comfortable.
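The overhead tradeoff being described here can be made concrete with a little arithmetic (a sketch, not anything from the episode): a strict tree over n people has n − 1 reporting links, while a fully interconnected graph has n(n − 1)/2 possible channels, which is why every node has to carry more intelligence as the graph grows.

```python
def tree_channels(n: int) -> int:
    # A strict hierarchy: each person talks only to their manager,
    # so an n-person org has n - 1 communication links.
    return n - 1

def full_graph_channels(n: int) -> int:
    # Fully interconnected: anyone can talk to anyone,
    # giving n * (n - 1) / 2 possible pairs.
    return n * (n - 1) // 2

# At 500 people: 499 tree links vs 124,750 pairwise channels.
for n in (10, 50, 500):
    print(n, tree_channels(n), full_graph_channels(n))
```

The quadratic growth on the right is the "communication overhead gets very high" point: the fully connected design only stays workable while the team is small and every node can route for itself.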

So we don’t really rely on any tools.

You Don’t Need the Explicit Intranet Anymore

Naval: Now, AI is implicitly still a very helpful tool within the organization, and I can give you two examples, although there are more.

One is just that if you’re reading code written by somebody else and it’s very complicated, you can have the AI read it for you and give you a summary. Papers, too: it can read other people’s papers and summarize them. It can even go through the codebase and tell you who in the organization is likely to be an expert on what topic and guide you to them.

So AI can do a lot of that digging for you. You don’t need the explicit intranet as much anymore. You don’t need the explicit marking down of things because the AI can figure out where you are.

You could even unleash the AI on the codebase—on the designs. Say you have hardware designs: you can unleash it on those. If you have suppliers and vendors, you can unleash it on the database or the file folder in which all the supplier and vendor documents are kept.

You could even unleash it on the company email if you wanted to and just say, “Where are we? How far are we actually from shipping? Draw me a Gantt chart based on where you think we actually are in terms of the estimates and the timelines, and who’s behind, and who’s ahead, and which divisions are lacking resources.”

AI can constantly be doing this data analysis and digging and reporting for you—reports on demand. You don’t need specific charts and dashboards and business intelligence systems. You can just have AI literally recreate them on the fly. You maybe don’t want to do it every time, because it might be too slow, but you can have it build these dashboards on demand, and you can have it update them on demand. So that’s one huge thing.

The other is that traditionally in a company you would have the hardware people—and at a company like ours, you have the hardware people, you have the software people, and you have the AI people—and they kind of wouldn’t be doing each other’s work. But now with AI they can at least do 20%–30% of each other’s work. So it makes the gluing between them a little easier.

The AI people, for example, can create their own software harnesses if they need to test something. It may not be good for production deployment, but it’s better than having to sit around and wait for a software person to come by and write you some custom code.

In the same way, the hardware people can also write a little bit of software to bring up a new hardware device, where otherwise they might have needed to wait for software people. So having AI just lets everybody do a little bit of everything. It makes them more generalist. And being more generalist means you have better touchpoints for interfacing with other people.

You don’t necessarily need someone to write you an explicit API to work with their code. You can just have the AI go and discover an API or create its own, or bypass the API entirely and connect directly at whatever level it wants, whether in the database or within the codebase. So it’s naturally a force multiplier, but we haven’t done anything explicit with it.

May You Live in Interesting Times

Nivi: What are you trying to figure out right now?

The reason I ask is because you rarely get to see work product from smart people while it’s in motion. One of my obsessions is trying to excavate the secrets and inner thoughts of smart people.

Naval: The world is very different than it was a few years ago. There are two, maybe four, companies that are dominating AI—or five if you count hardware with Nvidia. And the question is, “Is that the stable situation?”

Is this going to be a commodity business or is this going to be a monopoly business, or is it going to be an oligopoly business? Does it top out at some point? Do they run out of data and do the models stop improving? Or do we go all the way to AGI?

Certainly the people inside the labs are believers in AGI, and think that all value is going to disappear into the AI labs. Does this end up even more consolidated than the Mag 7 world, where there’s just Mag 2 or Mag 1?

Or does it somehow fragment? Does open source really have a chance? Or do people just always want the smartest model? And so for that, they’ll give up privacy, they’ll give up open source, and they’ll just pay up in the cloud?

So I think these are huge questions. Huge. These are world-shattering questions, but I don’t know the answers.

Can you train AI in a distributed way? Is distributed training possible, or are these things going to centralize more and more? I think the conventional wisdom now is centralized training: two to four companies dominating, data centers and power as the limits, and everyone rushing towards that.

But what if that’s wrong? That would be an interesting contrarian bet. But I don’t yet see the evidence. I think the emerging conventional wisdom for that part in AI is right.

As for AGI, I don’t know. I don’t want to be in the futurist business. Certainly the people in the frontier labs believe it. They’ve believed it for quite a while. The AI that I’m seeing has jagged intelligence. It’s also pretty bad at multimodal reasoning. I don’t think it has a good model of the world, although there are all these world model companies coming up.

Although I think they confuse it with something that merely looks like a world you can navigate. People say, “Oh, that’s a world model, because it’s generating something that looks like a world, and I can wander around in it.”

That’s not a world model. A world model is when you have an agent that has a model of the world inside its head, which allows it to take actions and then predict the consequences of its actions, and then adjust its own behavior based on what happened—whether it learned or not—so you have like a reinforcement learning loop. That’s a world model.
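That definition can be sketched as a minimal loop (a toy illustration, not any particular lab’s architecture; all numbers are made up): the agent predicts the consequence of its action, acts, observes what actually happened, and corrects its internal model from the prediction error.

```python
# Toy sketch of a world-model loop: the agent keeps an internal
# estimate of the environment's dynamics, predicts the outcome of
# each action, and adjusts that estimate from prediction error.

true_step = 1.0      # real effect of the action: position += 1.0
model_step = 0.3     # agent's initial (wrong) belief about that effect
lr = 0.5             # learning rate for updating the belief

position = 0.0
for _ in range(20):
    predicted = position + model_step   # predict the consequence of acting
    position += true_step               # act; the world responds
    error = position - predicted        # surprise: observed vs. predicted
    model_step += lr * error            # adjust the internal model

print(round(model_step, 3))  # → 1.0: the belief converges to the true dynamics
```

The point of the definition is that last line inside the loop: a world model is the thing being corrected by its own prediction errors, not a renderer that merely produces world-looking imagery.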

And so we’re seeing world model companies emerging. I think Yann LeCun famously did one recently with JEPA. And so we are going to see new kinds of models, new kinds of agents, new kinds of intelligence. Are we going to get to AGI? I don’t know. Now that’s the same thing that everybody’s trying to figure out, right?

But this world is changing. The famous meme on X, I think, was “Nothing ever happens,” right? I think that’s over. I haven’t quite been able to put my finger on why, but anyone who’s paying attention would tell you that post-COVID, the world is changing a lot faster.

There was some dislocation around COVID, or perhaps we were in an unstable equilibrium and COVID just broke that equilibrium, and then we had a phase shift.

But the world seems to be moving a lot faster now. And that’s true geopolitically. That’s true economically. That’s true technologically. VCs are now being forced to fund more hardware, rockets, drones, AI—you know, sci-fi technologies, if you want to call them that.

So I think sci-fi technologies are in high demand. Sci-fi scientists and sci-fi authors are in low supply. Sci-fi engineers are in low supply. So we are seeing the world shift, and maybe it’s for the better, maybe it’s for the worse, but things are changing very, very fast now.

We are living under that Chinese curse: “May you live in interesting times.”

Drones Democratize Violence

Nivi: Is there anything you’re trying to figure out in the world of hardware?

Naval: I think drones are still underleveraged, even though they’ve come to prominence on the battlefield recently. We still haven’t seen anywhere near the end game of drones. There’s nothing in particular I’m trying to figure out there.

I mean, I think drone defense is going to be very difficult, because a drone that’s attacking has the advantage of kinetic energy—because it’s coming down on you—and it’s got the advantage of surprise, where the attacker can mass all the attack drones in one area, whereas the defender is always spread thin. The defender has one advantage, which is short range: the defender has to traverse a much smaller distance going up than the attacking drone had to cover coming in.

But I think that drone warfare changes the structure of violence in society. So it’s going to actually fundamentally change how militaries and entire states are architected.

You could argue that the modern state rose up as a consequence of the rifle, because a rifle allowed a former peasant to take down a feudal knight on the battlefield. Then you needed a factory to make rifles, and you had to drill musketeers and arm them and train them. And so nation-states sprang up and became dominant over feudal states as the right structure to do that within.

And then post-nuclear, there are only seven to nine really independent sovereign nations, and everybody else lives underneath someone else’s nuclear umbrella. So those seven to nine call the shots, whether in the Security Council or elsewhere.

And so nuclear weapons were the new logic of violence after 1945.

Now the newest logic of violence is drones. And that’s going to fundamentally shift the game again, because drones bring the logic of mutually assured destruction down to the individual level. If you really hate somebody, in the future, a drone will be able to get them. That’s a weird form of violence coming up that’s going to basically restructure society as we know it.

I don’t know which way it goes. Is it going to be the case that you have a few very large, very powerful countries that control all the drones? Or is it that drones get so democratized that any individual can be deadly?

Biothreats Could Also Get Democratized

Naval: Also, I think one of the fears with AI is biological weapons. I don’t want to get people worked up, but in theory, if you were smart, in the past you could have figured out how to make a biological weapon. But the number of people who had both the expertise and the access was very low. Although it was still too high, because somebody figured it out: the coronavirus coincidentally got unleashed right next to the bioweapons lab in Wuhan.

So now that power is going to be democratized, just like coding has been democratized by vibe coding. The number of people who can vibe code is hundreds or thousands of times greater than the number of people who were coding. In the same way, the number of people who can get access to biological weapons or viruses is hundreds of thousands of times the number who could have gotten access before. So that’s a pretty scary thought.

Now we can also do the opposite, which is hopefully now the same AIs can also research how to create vaccines or how to create things to stop them. But the problem is that all the official research—all the good guy research—is always gated behind regulations and there are almost no regulations out there as bad as medical regulations.

One of the real opportunities out there, I think, is for AI to solve medicine and biology and therapies. But to do that, you need the data. You need to be able to look at everyone’s dataset. You need to be able to look at all the outcomes. You want as much data as possible.

And this data is hidden behind so many silos, and so many regulations and rules. And for good reason—you don’t want to target individuals. But if you could anonymize, clean up, and allow that dataset to get out there, and then you could let people test therapies with a right to try, then I think you could have reasonable defenses. But my fear is this will only happen in an emergency situation.

Even during COVID, when we had the emergency situation, we took a long time with the vaccines, which turned out not to be that effective anyway. It took that long because we just didn’t let people operate as volunteers under a right to try.

It just took way too long, whereas I think in the old days a bunch of healthy, young volunteers would’ve said, “Sure, give me this vaccine and then give me COVID. I’ll take one for the team.”

But now because of “bioethicists,” we don’t even allow that. There’s just too much bureaucracy in the system. Too many people who can say “no” to the few people who are trying to get things done. And so for that, I do worry a little bit about the future.

AI Interfaces Unlock Hardware

Naval: What else is interesting in hardware? Hardware, I think, is going to undergo a renaissance, because historically the problem with a lot of hardware is that it’s very hard to write good software. And so you get all this incredible hardware coming out, but the software’s terrible so the device itself doesn’t function well.

Apple has done really well because they integrate hardware with high-quality software. Most companies do one or two things well. Apple does two things really well: they build great hardware; they build great software. They’re not that good at cloud and AI. Google is very good at cloud, and very good at AI, but they’re not very good at hardware, for example. And software, I would say they’re good at certain kinds of software. They’re good at cloud software—they’re not good at consumer software.

Now, all of a sudden, you have all these companies that are very good at hardware but not good at software, and they can make good-enough software. Or they don’t even need to make software: my AI agent will interact with the hardware directly, and I don’t need software anymore.

So if you’re someone, for example, who is making security cameras, or you’re making toys for kids, or you’re making programmable lamps, all of a sudden the software for that just got a lot easier. You can have some bright kid with Claude Code, just get in there and build you all the software that you need. Or maybe you don’t need any software because your security cameras are now controlled by each person’s agent and don’t need custom software any longer. So I think that hardware itself is getting unlocked through software.

And this is, I think, one of the reasons why China is so big into open source. Now they’re behind, so when you’re behind, you try to catch up through open source. I think also it’s a little bit of their nationalist pride that, “We’re in it together.” Maybe the government’s funding them and encouraging them to do open source. But it also plays well into their hardware dominance. China is manufacturing most of the consumer electronics goods, and so for them, open source is hugely beneficial because it commoditizes their complement.

Same thing for Nvidia. Nvidia just wants to sell as many cards as possible, so they want people to use as many AI models as possible. So they want it all to be open source. So you have a bunch of hardware players, including most of China and Nvidia, whose incentive is, “Hey, it should all be open source.”

Hyperscalers, too—they want it all open source. So they drive open source in the AI models, and that commoditizes software, and the software unlocks more hardware. So I think we’re going to see more and more interesting, usable hardware, because now the software is figured out enough that the hardware becomes unlocked and quite usable.

Optimism Requires Creativity

Nivi: I don’t get scared or worked up about the future, partly because I’m a blind optimist and partly because I live in the first world.

Naval: Yeah, I don’t get worked up about it because I think it’s just so much easier to imagine doom scenarios than it is to imagine positive scenarios. Because optimism requires creativity. For example, the job loss thing is a clear example. It’s very easy to look at existing jobs and see how they will go away, but it’s very hard to predict what the next job will be. Yet inevitably there’s always a next job.

Because of that, I think people tend to fixate on the doom scenarios. It’s much easier to imagine the methods of doom than to imagine the methods of rising up.

There is no one—no one 200 years ago—who could have imagined how we would end up where we are today in terms of technological advancement and capitalism and economics and the rise of various societies. They just couldn’t have imagined it. They couldn’t have imagined 10% of the jobs that exist today, because back then everybody was working on a farm. But nevertheless, here we are.

In the same way, I think the doom scenarios people imagined even a hundred years ago are very similar to the doom scenarios we imagine today. Every decade I’ve been alive, there’s been a new environmental catastrophe to come along. Someone’s talking about the end of the world because of the environment. And then every decade there’s a catastrophe coming along because of a war that’s going to end the world.

Yeah, sometimes you get really close. COVID was scary. If COVID had actually turned out to be a much more nasty virus, we could have been in a bad spot. If there was a World War III where we start exchanging nukes, that would be a very bad scenario. So these things are easier to imagine. They’re more legible to our minds, so we hold them closer to us.

Plus the outcome there is so catastrophic that people obviously fixate on it. But I think it’s very hard to imagine creativity. It’s very hard to be optimistic. And so I think we have to nurture optimism. We have to reward optimism. We have to be irrationally optimistic, because that’s the only way out of this anyway.

So whenever people do the crabs in a bucket thing where they try to pull the optimists back down and they keep saying, “Doom, doom, doom,” they might be right, but it’s certainly not helping matters. That’s not the person you want to be in a foxhole with.
