Historical Note

The below is a lightly edited version of a thread I posted on Farcaster, Bluesky, and Twitter.

Venkatesh Rao’s formulation of AI as “a camera, not an engine” turns out not just to be an interesting perspective, but to finally fill in the “unknown knowns” quadrant of Donald Rumsfeld’s (in)famous 2x2 in a satisfying way.

For those who don’t remember, during a 2002 press conference on the eve of the Iraq War Rumsfeld said:

“… [A]s we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns — the ones we don’t know we don’t know.”

Rumsfeld is setting up a rather unusual 2x2 here, where both axes are “things we know”. But 2x2s have four quadrants, and Rumsfeld only describes three of them. A description of “unknown knowns” is missing.

Slavoj Žižek later tried to fill in the “unknown knowns” quadrant by postulating that it was knowledge that is widely understood but only at a sort of “subconscious” level, as truly surfacing that knowledge would be too destructive to the ego / status quo. I’ve always found Žižek’s formulation unsatisfying, however. What Žižek is talking about is a sort of willful ignorance, while Rumsfeld’s stated cases revolve around actually not knowing.

So where does Rao’s work fit into this? Well, in “A Camera, Not an Engine”, Rao reformulates AI not as a tool for expanding the frontier of knowledge, but rather as a way to fill in voids that exist in the space of things we already know. AI is “discovering” the thoughts / ideas / knowledge implied by existing thoughts / ideas / knowledge. “Filling in the blanks” in our existing knowledge space.

Importantly, it’s not actually expanding the frontier of knowledge. After all, how could it? Even with access to the entire Internet, AIs are still “a ship in a bottle”, capable only of sailing the seas of thoughts humans have already recorded in some way.

An Aside

A “discovery” made by an AI is more like a “discovery” in mathematics than a discovery in the physical sciences. Self-contained logical systems are not like the human world, which is a porous, unclear, and — for any one person — finite realm.

I think this train of reasoning has something interesting to say about science in the age of AI.

There’s been a lot of excitement about using AI to discover new materials or as a tool for hypothesis generation. But really what’s happening is that AI is finding new angles on known data sets.

Now, finding new angles on old data is important! One of the things that we’re rapidly learning is that there’s a lot of latent space in human knowledge. And filling in that space is going to bring a lot of benefits.

But it’s also, in a very real way, all low-hanging fruit.

I suspect that what we’re going to find is that AI is going to make weird science moonshots — things that can’t really involve AI, because they’re looking for truly new things — more important. Weird moonshots are how we expand the frontiers of human thinking. AI will be, more and more, how we fill in the space behind that frontier.

This is going to make science much more exciting (amazing, horrifying, wonderful new things!), but also a lot less efficient (most moonshots fail, and most weird ideas come from the minds of crackpots). It also represents an almost 180° reversal in how we increasingly handle basic research. Over the course of my lifetime (and stretching back before it), we’ve come to focus more and more on research that is obviously applicable in some way. But “applicable” research is by definition research behind the frontiers. This is what AI is going to first accelerate, and then cannibalize.

What’s going to make this even harder is that in the short term AI is going to give us a lot of returns by filling in the gaps in the explored space of human knowledge — turning our “unknown knowns” into “known knowns”. So for a while it’s going to look like we don’t need as many weird moonshots or out-there exploration, because the returns from using AI to fill in the existing latent space will be so large. It will feel like we’re learning a lot (and in a sense, we will be). But we will not be expanding what is fundamentally a finite explored realm of human ideas and knowledge. If we don’t start seriously investing in expanding that frontier, we’re going to get stuck.

What does expanding the frontier of human ideas and knowledge look like? Well, close to the frontier, it’s the process of turning “known unknowns” into “known knowns”. But even more important will be true exploration — the process of turning “unknown unknowns” into “known unknowns”. It is only these two processes that actually expand the frontiers of human ideas and knowledge.

Marc Andreessen’s contention that “it’s time to build” has become something of a rallying cry among techno-optimists. But in the age of AI, “it’s time to build” is only half the picture.

Now is also the time to explore.