
GAZP vs. GLUT

In “The Unimagined Preposterousness of Zombies,” Daniel Dennett says:1

To date, several philosophers have told me that they plan to accept my challenge to offer a non-question-begging defense of zombies, but the only one I have seen so far involves postulating a “logically possible” but fantastic being—a descendent of Ned Block’s Giant Lookup Table fantasy…

A Giant Lookup Table, in programmer’s parlance, is when you implement a function as a giant table of inputs and outputs, usually to save on runtime computation. If my program needs to know the multiplicative product of two inputs between 1 and 100, I can write a multiplication algorithm that computes the answer each time the function is called, or I can precompute a Giant Lookup Table with 10,000 entries and two indices. There are times when you do want to do this, though not for multiplication—times when you’re going to reuse the function a lot and it doesn’t have many possible inputs; or when clock cycles are cheap while you’re initializing, but very expensive while executing.
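As a concrete sketch of the multiplication example (the names `MUL_TABLE` and `glut_multiply` are mine, purely for illustration):

```python
# A minimal sketch of a Giant Lookup Table for multiplying integers
# from 1 to 100: precompute all 10,000 answers once, so that every
# later call is a pure table lookup with no arithmetic at runtime.
N = 100

# 10,000 entries, indexed by the two inputs.
MUL_TABLE = [[a * b for b in range(1, N + 1)] for a in range(1, N + 1)]

def glut_multiply(a: int, b: int) -> int:
    """Return a * b for 1 <= a, b <= 100 by lookup rather than computation."""
    return MUL_TABLE[a - 1][b - 1]

assert glut_multiply(37, 42) == 37 * 42
```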

Giant Lookup Tables get very large, very fast. A GLUT of all possible twenty-ply conversations with ten words per remark, using only 850-word Basic English, would require 7.6 × 10^585 entries.
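The count comes from raising the vocabulary size to the number of word slots; a quick check (variable names are my own):

```python
# Checking the entry count: twenty remarks of ten words each is 200
# word slots, each filled from the 850-word Basic English vocabulary,
# and the GLUT needs one entry per possible conversation.
from math import log10

vocabulary = 850
word_slots = 10 * 20                      # ten words per remark, twenty plies
exponent = word_slots * log10(vocabulary)
print(f"850^200 is about 10^{exponent:.1f}")   # about 10^585.9, i.e. 7.6 x 10^585
```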

Replacing a human brain with a Giant Lookup Table of all possible sense inputs and motor outputs (relative to some fine-grained digitization scheme) would require an unreasonably large amount of memory storage. But “in principle,” as philosophers are fond of saying, it could be done.

The GLUT is not a zombie in the classic sense, because it is microphysically dissimilar to a human. (In fact, a GLUT can’t really run on the same physics as a human; it’s too large to fit in our universe. For philosophical purposes, we shall ignore this and suppose a supply of unlimited memory storage.)

But is the GLUT a zombie at all? That is, does it behave exactly like a human without being conscious?

The GLUT-ed body’s tongue talks about consciousness. Its fingers write philosophy papers. In every way, so long as you don’t peer inside the skull, the GLUT seems just like a human… which certainly seems like a valid example of a zombie: it behaves just like a human, but there’s no one home.

Unless the GLUT is conscious, in which case it wouldn’t be a valid example.

I can’t recall ever seeing anyone claim that a GLUT is conscious. (Admittedly my reading in this area is not up to professional grade; feel free to correct me.) Even people who are accused of being (gasp!) functionalists don’t claim that GLUTs can be conscious.

GLUTs are the reductio ad absurdum for anyone who suggests that consciousness is simply an input-output pattern, thereby disposing of all troublesome worries about what goes on inside.

So what does the Generalized Anti-Zombie Principle (GAZP) say about the Giant Lookup Table (GLUT)?

At first glance, it would seem that a GLUT is the very archetype of a Zombie Master—a distinct, additional, detectable, non-conscious system that animates a zombie and makes it talk about consciousness for different reasons.

In the interior of the GLUT, there’s merely a very simple computer program that looks up inputs and retrieves outputs. Even talking about a “simple computer program” is overshooting the mark, in a case like this. A GLUT is more like ROM than a CPU. We could equally well talk about a series of switched tracks by which some balls roll out of a previously stored stack and into a trough—period; that’s all the GLUT does.

A spokesperson from People for the Ethical Treatment of Zombies replies: “Oh, that’s what all the anti-mechanists say, isn’t it? That when you look in the brain, you just find a bunch of neurotransmitters opening ion channels? If ion channels can be conscious, why not levers and balls rolling into bins?”

“The problem isn’t the levers,” replies the functionalist, “the problem is that a GLUT has the wrong pattern of levers. You need levers that implement things like, say, formation of beliefs about beliefs, or self-modeling… Heck, you need the ability to write things to memory just so that time can pass for the computation. Unless you think it’s possible to program a conscious being in Haskell.”

“I don’t know about that,” says the PETZ spokesperson, “all I know is that this so-called zombie writes philosophical papers about consciousness. Where do these philosophy papers come from, if not from consciousness?”

Good question! Let us ponder it deeply.

There’s a game in physics called Follow-The-Energy. Richard Feynman’s father played it with young Richard:

It was the kind of thing my father would have talked about: “What makes it go? Everything goes because the Sun is shining.” And then we would have fun discussing it:

“No, the toy goes because the spring is wound up,” I would say. “How did the spring get wound up?” he would ask.

“I wound it up.”

“And how did you get moving?”

“From eating.”

“And food grows only because the Sun is shining. So it’s because the Sun is shining that all these things are moving.” That would get the concept across that motion is simply the transformation of the Sun’s power.2

When you get a little older, you learn that energy is conserved, never created or destroyed, so the notion of using up energy doesn’t make much sense. You can never change the total amount of energy, so in what sense are you using it?

So when physicists grow up, they learn to play a new game called Follow-The-Negentropy—which is really the same game they were playing all along; only the rules are mathier, the game is more useful, and the principles are harder to wrap your mind around conceptually.

Rationalists learn a game called Follow-The-Improbability, the grownup version of “How Do You Know?” The rule of the rationalist’s game is that every improbable-seeming belief needs an equivalent amount of evidence to justify it. (This game has amazingly similar rules to Follow-The-Negentropy.)
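To make “an equivalent amount of evidence” concrete, here is one standard way to keep score, in Bayesian log-odds; this framing is my gloss, not spelled out in the essay:

```python
# A rough way (my illustration) to quantify "improbability must be paid
# for in evidence": to move a hypothesis from prior odds to posterior
# odds, your evidence must supply a likelihood ratio equal to the gap.
from math import log2

def bits_of_evidence_needed(prior_prob: float, posterior_prob: float) -> float:
    """Log-odds gap, in bits, between the prior and the claimed posterior."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = posterior_prob / (1 - posterior_prob)
    return log2(posterior_odds / prior_odds)

# Promoting a one-in-a-million guess to 99% confidence takes ~26.6 bits.
print(bits_of_evidence_needed(1e-6, 0.99))
```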

Whenever someone violates the rules of the rationalist’s game, you can find a place in their argument where a quantity of improbability appears from nowhere; and this is as much a sign of a problem as, oh, say, an ingenious design of linked wheels and gears that keeps itself running forever.

The one comes to you and says: “I believe with firm and abiding faith that there’s an object in the asteroid belt, one foot across and composed entirely of chocolate cake; you can’t prove that this is impossible.” But, unless the one had access to some kind of evidence for this belief, it would be highly improbable for a correct belief to form spontaneously. So either the one can point to evidence, or the belief won’t turn out to be true. “But you can’t prove it’s impossible for my mind to spontaneously generate a belief that happens to be correct!” No, but that kind of spontaneous generation is highly improbable, just like, oh, say, an egg unscrambling itself.

In Follow-The-Improbability, it’s highly suspicious to even talk about a specific hypothesis without having had enough evidence to narrow down the space of possible hypotheses. Why aren’t you giving equal air time to a decillion other equally plausible hypotheses? You need sufficient evidence to find the “chocolate cake in the asteroid belt” hypothesis in the hypothesis space—otherwise there’s no reason to give it more air time than a trillion other candidates like “There’s a wooden dresser in the asteroid belt” or “The Flying Spaghetti Monster threw up on my sneakers.”

In Follow-The-Improbability, you are not allowed to pull out big complicated specific hypotheses from thin air without already having a corresponding amount of evidence; because it’s not realistic to suppose that you could spontaneously start discussing the true hypothesis by pure coincidence.

A philosopher says, “This zombie’s skull contains a Giant Lookup Table of all the inputs and outputs for some human’s brain.” This is a very large improbability. So you ask, “How did this improbable event occur? Where did the GLUT come from?”

Now this is not standard philosophical procedure for thought experiments. In standard philosophical procedure, you are allowed to postulate things like “Suppose you were riding a beam of light…” without worrying about physical possibility, let alone mere improbability. But in this case, the origin of the GLUT matters; and that’s why it’s important to understand the motivating question, “Where did the improbability come from?”

The obvious answer is that you took a computational specification of a human brain, and used that to precompute the Giant Lookup Table. (Thereby creating uncounted googols of human beings, some of them in extreme pain, the supermajority gone quite mad in a universe of chaos where inputs bear no relation to outputs. But damn the ethics, this is for philosophy.)

In this case, the GLUT is writing papers about consciousness because of a conscious algorithm. The GLUT is no zombie, any more than a cellphone is a zombie because it can talk about consciousness while being just a small consumer electronic device. The cellphone is just transmitting philosophy speeches from whoever happens to be on the other end of the line. A GLUT generated from an originally human brain-specification is doing the same thing.

“All right,” says the philosopher, “the GLUT was generated randomly, and just happens to have the same input-output relations as some reference human.”

How, exactly, did you randomly generate the GLUT?

“We used a true randomness source—a quantum device.”

But a quantum device just implements the Branch Both Ways instruction; when you generate a bit from a quantum randomness source, the deterministic result is that one set of universe-branches (locally connected amplitude clouds) see 1, and another set of universes see 0. Do it 4 times, create 16 (sets of) universes.

So, really, this is like saying that you got the GLUT by writing down all possible GLUT-sized sequences of 0s and 1s, in a really damn huge bin of lookup tables; and then reaching into the bin, and somehow pulling out a GLUT that happened to correspond to a human brain-specification. Where did the improbability come from?

Because if this wasn’t just a coincidence—if you had some reach-into-the-bin function that pulled out a human-corresponding GLUT by design, not just chance—then that reach-into-the-bin function is probably conscious, and so the GLUT is again a cellphone, not a zombie. It’s connected to a human at two removes, instead of one, but it’s still a cellphone! Nice try at concealing the source of the improbability there!
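To put a rough number on how much improbability the “pure chance” story has to swallow, here is a back-of-the-envelope sketch of mine, not from the essay:

```python
# How improbable is a random match? If a GLUT occupies N bits, the bin
# holds 2**N candidate tables, and a uniformly random grab hits one
# specific human-corresponding table with probability 2**(-N).
from math import log10

def log10_match_probability(glut_size_in_bits: int) -> float:
    """log10 of the chance that a random N-bit table equals a given target."""
    return -glut_size_in_bits * log10(2)

# Even a toy "GLUT" of one kilobyte is a 1-in-10^2466 coincidence;
# a table with 7.6 x 10^585 entries is incomparably worse.
print(log10_match_probability(8 * 1024))   # about -2466.0
```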

Now behold where Follow-The-Improbability has taken us: where is the source of this body’s tongue talking about an inner listener? The consciousness isn’t in the lookup table. The consciousness isn’t in the factory that manufactures lots of possible lookup tables. The consciousness was in whatever pointed to one particular already-manufactured lookup table, and said, “Use that one!”

You can see why I introduced the game of Follow-The-Improbability. Ordinarily, when we’re talking to a person, we tend to think that whatever is inside the skull must be “where the consciousness is.” It’s only by playing Follow-The-Improbability that we can realize that the real source of the conversation we’re having is that-which-is-responsible-for the improbability of the conversation—however distant in time or space, as the Sun moves a wind-up toy.

“No, no!” says the philosopher. “In the thought experiment, they aren’t randomly generating lots of GLUTs, and then using a conscious algorithm to pick out one GLUT that seems humanlike! I am specifying that, in this thought experiment, they reach into the inconceivably vast GLUT bin, and by pure chance pull out a GLUT that is identical to a human brain’s inputs and outputs! There! I’ve got you cornered now! You can’t play Follow-The-Improbability any further!”

Oh. So your specification is the source of the improbability here.

When we play Follow-The-Improbability again, we end up outside the thought experiment, looking at the philosopher.

That which points to the one GLUT that talks about consciousness, out of all the vast space of possibilities, is now… the conscious person asking us to imagine this whole scenario. And our own brains, which will fill in the blank when we imagine, “What will this GLUT say in response to ‘Talk about your inner listener’?”

The moral of this story is that when you follow back discourse about “consciousness,” you generally find consciousness. It’s not always right in front of you. Sometimes it’s very cleverly hidden. But it’s there. Hence the Generalized Anti-Zombie Principle.

If there is a Zombie Master in the form of a chatbot that processes and remixes amateur human discourse about “consciousness,” the humans who generated the original text corpus are conscious.

If someday you come to understand consciousness, and look back, and see that there’s a program you can write that will output confused philosophical discourse that sounds an awful lot like humans without itself being conscious—then when I ask “How did this program come to sound similar to humans?” the answer is that you wrote it to sound similar to conscious humans, rather than choosing on the criterion of similarity to something else. This doesn’t mean your little Zombie Master is conscious—but it does mean I can find consciousness somewhere in the universe by tracing back the chain of causality, which means we’re not entirely in the Zombie World.

But suppose someone actually did reach into a GLUT-bin and by genuinely pure chance pulled out a GLUT that wrote philosophy papers?

Well, then it wouldn’t be conscious. In my humble opinion.

I mean, there’s got to be more to it than inputs and outputs.

Otherwise even a GLUT would be conscious, right?

Oh, and for those of you wondering how this sort of thing relates to my day job…

In this line of business you meet an awful lot of people who think that an arbitrarily generated powerful AI will be “moral.” They can’t agree among themselves on why, or what they mean by the word “moral”; but they all agree that doing Friendly AI theory is unnecessary. And when you ask them how an arbitrarily generated AI ends up with moral outputs, they proffer elaborate rationalizations aimed at AIs of that which they deem “moral”; and there are all sorts of problems with this, but the number one problem is, “Are you sure the AI would follow the same line of thought you invented to argue human morals, when, unlike you, the AI doesn’t start out knowing what you want it to rationalize?” You could call the counter-principle Follow-The-Decision-Information, or something along those lines. You can account for an AI that does improbably nice things by telling me how you chose the AI’s design from a huge space of possibilities, but otherwise the improbability is being pulled out of nowhere—though more and more heavily disguised, as rationalized premises are rationalized in turn.

So I’ve already done a whole series of essays which I myself generated using Follow-The-Improbability. But I didn’t spell out the rules explicitly at that time, because I hadn’t done the thermodynamics essays yet…

Just thought I’d mention that. It’s amazing how many of my essays coincidentally turn out to include ideas surprisingly relevant to discussion of Friendly AI theory… if you believe in coincidence.

Daniel C. Dennett, “The Unimagined Preposterousness of Zombies,” Journal of Consciousness Studies 2, no. 4 (1995): 322–326. ↩︎

Richard P. Feynman, “Judging Books by Their Covers,” in Surely You’re Joking, Mr. Feynman! (New York: W. W. Norton & Company, 1985). ↩︎
