
My Childhood Role Model

When I lecture on the intelligence explosion, I often draw a graph of the “scale of intelligence” as it appears in everyday life:

diagram: horizontal line from Village Idiot to Einstein

But this is a rather parochial view of intelligence. Sure, in everyday life, we only deal socially with other humans—only other humans are partners in the great game—and so we only meet the minds of intelligences ranging from village idiot to Einstein. But to talk about Artificial Intelligence, or about theoretical optima of rationality, what we really need is this intelligence scale:

diagram: horizontal line with Mouse, Chimp, Village Idiot, and Einstein near the left; line continues rightward

For us humans, it seems that the scale of intelligence runs from “village idiot” at the bottom to “Einstein” at the top. Yet the distance from “village idiot” to “Einstein” is tiny, in the space of brain designs. Einstein and the village idiot both have a prefrontal cortex, a hippocampus, a cerebellum…

Maybe Einstein has some minor genetic differences from the village idiot, engine tweaks. But the brain-design-distance between Einstein and the village idiot is nothing remotely like the brain-design-distance between the village idiot and a chimpanzee. A chimp couldn’t tell the difference between Einstein and the village idiot, and our descendants may not see much of a difference either.

Carl Shulman has observed that some academics who talk about transhumanism seem to use the following scale of intelligence:

diagram: horizontal line with Mouse, Chimp, Village Idiot, Average Human, College Professor, Transhuman, and Einstein spread out evenly along it; line continues rightward

Douglas Hofstadter actually said something like this, at the 2006 Singularity Summit. He looked at my diagram showing the “village idiot” next to “Einstein,” and said, “That seems wrong to me; I think Einstein should be way off on the right.”

I was speechless. Especially because this was Douglas Hofstadter, one of my childhood heroes. It revealed a cultural gap that I had never imagined existed.

See, for me, what you would find toward the right side of the scale was a Jupiter Brain. Einstein did not literally have a brain the size of a planet.

On the right side of the scale, you would find Deep Thought—Douglas Adams’s original version, thank you, not the chess player. The computer so intelligent that even before its stupendous data banks were connected, when it was switched on for the first time, it started from I think therefore I am and got as far as deducing the existence of rice pudding and income tax before anyone managed to shut it off.

Toward the right side of the scale, you would find the Elders of Arisia, galactic overminds, Matrioshka brains, and the better class of God. At the extreme right end of the scale, Old One and the Blight.

Not frickin’ Einstein.

I’m sure Einstein was very smart for a human. I’m sure a General Systems Vehicle would think that was very cute of him.

I call this a “cultural gap” because I was introduced to the concept of a Jupiter Brain at the age of twelve.

Now all of this, of course, is the logical fallacy of generalization from fictional evidence.

But it is an example of why—logical fallacy or not—I suspect that reading science fiction does have a helpful effect on futurism. Sometimes the alternative to a fictional acquaintance with worlds outside your own is to have a mindset that is absolutely stuck in one era: A world where humans exist, and have always existed, and always will exist.

The universe is 13.7 billion years old, people! Homo sapiens sapiens have only been around for a hundred thousand years or thereabouts!

Then again, I have met some people who never read science fiction, but who do seem able to imagine outside their own world. And there are science fiction fans who don’t get it. I wish I knew what “it” was, so I could bottle it.

In the previous essay, I wanted to talk about the efficient use of evidence, i.e., Einstein was cute for a human but in an absolute sense he was around as efficient as the US Department of Defense.

So I had to talk about a civilization that included thousands of Einsteins, thinking for decades. Because if I’d just depicted a Bayesian superintelligence in a box, looking at a webcam, people would think: “But… how does it know how to interpret a 2D picture?” They wouldn’t put themselves in the shoes of the mere machine, even if it was called a “Bayesian superintelligence”; they wouldn’t apply even their own creativity to the problem of what you could extract from looking at a grid of bits.

It would just be a ghost in a box, that happened to be called a “Bayesian superintelligence.” The ghost hasn’t been told anything about how to interpret the input of a webcam; so, in their mental model, the ghost does not know.

As for whether it’s realistic to suppose that one Bayesian superintelligence can “do all that”… i.e., the stuff that occurred to me on first sitting down to the problem, writing out the story as I went along…

Well, let me put it this way: Remember how Jeffreyssai pointed out that if the experience of having an important insight doesn’t take more than 5 minutes, this theoretically gives you time for 5,760 insights per month? Assuming you sleep 8 hours a day and have no important insights while sleeping, that is.
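The arithmetic behind that figure is easy to check as a back-of-envelope calculation (assuming a 30-day month):

```python
# Back-of-envelope check of the "5,760 insights per month" figure:
# 16 waking hours a day, one five-minute insight slot at a time, 30 days.
waking_hours_per_day = 24 - 8        # sleep 8 hours a day
slots_per_hour = 60 // 5             # five-minute insight slots per hour
insights_per_day = waking_hours_per_day * slots_per_hour  # 192 per day
insights_per_month = insights_per_day * 30                # 5,760 per month
print(insights_per_month)  # 5760
```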

Now humans cannot use themselves this efficiently. But humans are not adapted for the task of scientific research. Humans are adapted to chase deer across the savanna, throw spears into them, cook them, and then—this is probably the part that takes most of the brains—cleverly argue that they deserve to receive a larger share of the meat.

It’s amazing that Albert Einstein managed to repurpose a brain like that for the task of doing physics. This deserves applause. It deserves more than applause, it deserves a place in the Guinness Book of Records. Like successfully building the fastest car ever to be made entirely out of Jello.

How poorly did the blind idiot god (evolution) really design the human brain?

This is something that can only be grasped through much study of cognitive science, until the full horror begins to dawn upon you.

All the biases we have discussed here should at least be a hint.

Likewise the fact that the human brain must use its full power and concentration, with trillions of synapses firing, to multiply out two three-digit numbers without a paper and pencil.

No more than Einstein made efficient use of his sensory data, did his brain make efficient use of his neurons’ firing.

Of course, I have certain ulterior motives in saying all this. But let it also be understood that, years ago, when I set out to be a rationalist, the impossible unattainable ideal of intelligence that inspired me was never Einstein.

Carl Schurz said:

Ideals are like stars. You will not succeed in touching them with your hands. But, like the seafaring man on the desert of waters, you choose them as your guides and following them you will reach your destiny.

So now you’ve caught a glimpse of one of my great childhood role models—my dream of an AI. Only the dream, of course, the reality not being available. I reached up to that dream, once upon a time.

And this helped me to some degree, and harmed me to some degree.

For some ideals are like dreams: they come from within us, not from outside. Mentor of Arisia proceeded from E. E. “Doc” Smith’s imagination, not from any real thing. If you imagine what a Bayesian superintelligence would say, it is only your own mind talking. Not like a star, that you can follow from outside. You have to guess where your ideals are, and if you guess wrong, you go astray.

But do not limit your ideals to mere stars, to mere humans who actually existed, especially if they were born more than fifty years before you and are dead. Each succeeding generation has a chance to do better. To let your ideals be composed only of humans, especially dead ones, is to limit yourself to what has already been accomplished. You will ask yourself, “Do I dare to do this thing, which Einstein could not do? Is this not lèse majesté?” Well, if Einstein had sat around asking himself, “Am I allowed to do better than Newton?” he would not have gotten where he did. This is the problem with following stars; at best, it gets you to the star.

Your era supports you more than you realize, in unconscious assumptions, in subtly improved technology of mind. Einstein was a nice fellow, but he talked a good deal of nonsense about an impersonal God, which shows you how well he understood the art of careful thinking at a higher level of abstraction than his own field. Doing better may seem less like sacrilege if you have at least one imaginary galactic supermind to compare with Einstein, so that he is not the far right end of your intelligence scale.

If you only try to do what seems humanly possible, you will ask too little of yourself. When you imagine reaching up to some higher and inconvenient goal, all the convenient reasons why it is “not possible” leap readily to mind.

The most important role models are dreams: they come from within ourselves. To dream of anything less than what you conceive to be perfection is to draw on less than the full power of the part of yourself that dreams.
