
Anthropomorphic Optimism

The core fallacy of anthropomorphism is expecting something to be predicted by the black box of your brain, when its causal structure is so different from that of a human brain as to give you no license to expect any such thing.

The early (pre-1966) biologists in The Tragedy of Group Selectionism believed that predators would voluntarily restrain their breeding to avoid overpopulating their habitat and exhausting the prey population. Later on, when Michael J. Wade actually went out and created in the laboratory the nigh-impossible conditions for group selection, the adults adapted to cannibalize eggs and larvae, especially female larvae.1

Now, why might the group selectionists have not thought of that possibility?

Suppose you were a member of a tribe, and you knew that, in the near future, your tribe would be subjected to a resource squeeze. You might propose, as a solution, that no couple have more than one child—after the first child, the couple goes on birth control. Saying, “Let’s all individually have as many children as we can, but then hunt down and cannibalize each other’s children, especially the girls,” would not even occur to you as a possibility.

Think of a preference ordering over solutions, relative to your goals. You want a solution as high in this preference ordering as possible. How do you find one? With a brain, of course! Think of your brain as a high-ranking-solution-generator—a search process that produces solutions that rank high in your innate preference ordering.

The solution space on all real-world problems is generally fairly large, which is why you need an efficient brain that doesn’t even bother to formulate the vast majority of low-ranking solutions.

If your tribe is faced with a resource squeeze, you could try hopping everywhere on one leg, or chewing off your own toes. These “solutions” obviously wouldn’t work and would incur large costs, as you can see upon examination—but in fact your brain is too efficient to waste time considering such poor solutions; it doesn’t generate them in the first place. Your brain, in its search for high-ranking solutions, flies directly to parts of the solution space like “Everyone in the tribe gets together, and agrees to have no more than one child per couple until the resource squeeze is past.”

Such a low-ranking solution as “Everyone have as many kids as possible, then cannibalize the girls” would not be generated in your search process.

But the ranking of an option as “low” or “high” is not an inherent property of the option. It is a property of the optimization process that does the preferring. And different optimization processes will search in different orders.

So far as evolution is concerned, individuals reproducing to the fullest and then cannibalizing others’ daughters is a no-brainer; whereas individuals voluntarily restraining their own breeding for the good of the group is absolutely ludicrous. Or to say it less anthropomorphically, the first set of alleles would rapidly replace the second in a population. (And natural selection has no obvious search order here—these two alternatives seem around equally simple as mutations.)
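To make that concrete, here is a toy sketch in Python. Everything in it is invented for illustration—the candidate strategies and all the scores are made-up numbers, not anything measured—but it shows the structural point: the very same set of options, scored under a human tribe member’s preferences and under a crude stand-in for allele fitness, yields opposite “obvious” choices.

```python
# Toy illustration: the ranking of an option is a property of the
# optimization criterion, not of the option itself.
# All strategies and scores below are hypothetical.

candidates = [
    "restrain breeding by agreement",
    "breed freely, cannibalize others' daughters",
    "hop everywhere on one leg",
]

# Rough desirability to a human tribe member (made-up scores, higher = better).
human_preference = {
    "restrain breeding by agreement": 10,
    "breed freely, cannibalize others' daughters": -50,
    "hop everywhere on one leg": -100,
}

# Rough relative reproductive success of the corresponding allele
# (made-up scores; voluntary restraint loses to free riders).
allele_fitness = {
    "restrain breeding by agreement": -5,
    "breed freely, cannibalize others' daughters": 8,
    "hop everywhere on one leg": -100,
}

def top_choice(ranking):
    """Return the candidate that ranks highest under the given criterion."""
    return max(candidates, key=ranking.get)

print(top_choice(human_preference))  # restrain breeding by agreement
print(top_choice(allele_fitness))    # breed freely, cannibalize others' daughters
```

Neither ranking is written on the options themselves; swap the scoring function and the “no-brainer” changes.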

Suppose that one of the biologists had said, “If a predator population has only finite resources, evolution will craft them to voluntarily restrain their breeding—that’s how I’d do it if I were in charge of building predators.” This would be anthropomorphism outright, the lines of reasoning naked and exposed: I would do it this way, therefore I infer that evolution will do it this way.

One does occasionally encounter the fallacy outright, in my line of work. But suppose you say to the one, “An AI will not necessarily work like you do.” Suppose you say to this hypothetical biologist, “Evolution doesn’t work like you do.” What will the one say in response? I can tell you a reply you will not hear: “Oh my! I didn’t realize that! One of the steps of my inference was invalid; I will throw away the conclusion and start over from scratch.”

No: what you’ll hear instead is a reason why any AI has to reason the same way as the speaker. Or a reason why natural selection, following entirely different criteria of optimization and using entirely different methods of optimization, ought to do the same thing that would occur to a human as a good idea.

Hence the elaborate idea that group selection would favor predator groups where the individuals voluntarily forsook reproductive opportunities.

The group selectionists went just as far astray, in their predictions, as someone committing the fallacy outright. Their final conclusions were the same as if they were assuming outright that evolution necessarily thought like themselves. But they erased what had been written above the bottom line of their argument, without erasing the actual bottom line, and wrote in new rationalizations. Now the fallacious reasoning is disguised; the obviously flawed step in the inference has been hidden—even though the conclusion remains exactly the same; and hence, in the real world, exactly as wrong.

But why would any scientist do this? In the end, the data came out against the group selectionists and they were embarrassed.

As I remarked in Fake Optimization Criteria, we humans seem to have evolved an instinct for arguing that our preferred policy arises from practically any criterion of optimization. Politics was a feature of the ancestral environment; we are descended from those who argued most persuasively that the tribe’s interest—not just their own interest—required that their hated rival Uglak be executed. We certainly aren’t descended from Uglak, who failed to argue that his tribe’s moral code—not just his own obvious self-interest—required his survival.

And because we can more persuasively argue for what we honestly believe, we have evolved an instinct to honestly believe that other people’s goals, and our tribe’s moral code, truly do imply that they should do things our way for their benefit.

So the group selectionists, imagining this beautiful picture of predators restraining their breeding, instinctively rationalized why natural selection ought to do things their way, even according to natural selection’s own purposes. The foxes will be fitter if they restrain their breeding! No, really! They’ll even outbreed other foxes who don’t restrain their breeding! Honestly!

The problem with trying to argue natural selection into doing things your way is that evolution does not contain that which could be moved by your arguments. Evolution does not work like you do—not even to the extent of having any element that could listen to or care about your painstaking explanation of why evolution ought to do things your way. Human arguments are not even commensurate with the internal structure of natural selection as an optimization process—human arguments aren’t used in promoting alleles, as human arguments would play a causal role in human politics.

So instead of successfully persuading natural selection to do things their way, the group selectionists were simply embarrassed when reality came out differently.

There’s a fairly heavy subtext here about Unfriendly AI.

But the point generalizes: this is the problem with optimistic reasoning in general. What is optimism? It is ranking the possibilities by your own preference ordering, and selecting an outcome high in that preference ordering, and somehow that outcome ends up as your prediction. What kind of elaborate rationalizations were generated along the way is probably not so relevant as one might fondly believe; look at the cognitive history and it’s optimism in, optimism out. But Nature, or whatever other process is under discussion, is not actually, causally choosing between outcomes by ranking them in your preference ordering and picking a high one. So the brain fails to synchronize with the environment, and the prediction fails to match reality.
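As a minimal sketch of “optimism in, optimism out” (again with hypothetical outcomes and made-up scores): the optimist’s “prediction” is just the argmax under their own preference ordering, while the process actually under discussion applies a different criterion, so nothing forces the two answers to agree.

```python
# Toy illustration of optimistic reasoning: predicting by your own
# preference ordering rather than by the environment's actual criterion.
# Outcomes and scores are hypothetical.

outcomes = ["predators restrain breeding", "predators cannibalize larvae"]

my_preference = {          # how much I'd like each outcome (made up)
    "predators restrain breeding": 1.0,
    "predators cannibalize larvae": 0.0,
}

natures_criterion = {      # relative allele fitness (made up)
    "predators restrain breeding": 0.2,
    "predators cannibalize larvae": 0.9,
}

def optimistic_prediction():
    """The fallacy: pick the outcome *I* rank highest and call it a prediction."""
    return max(outcomes, key=my_preference.get)

def what_actually_happens():
    """Reality: the process under discussion applies its own criterion."""
    return max(outcomes, key=natures_criterion.get)

assert optimistic_prediction() != what_actually_happens()  # prediction fails to match
```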

1. Wade, “Group selections among laboratory populations of Tribolium.”
