Many psychological experiments were conducted in the late 1950s and early 1960s in which subjects were asked to predict the outcome of an event that had a random component but yet had base-rate predictability—for example, subjects were asked to predict whether the next card the experimenter turned over would be red or blue in a context in which 70% of the cards were blue, but in which the sequence of red and blue cards was totally random.
In such a situation, the strategy that will yield the highest proportion of success is to predict the more common event. For example, if 70% of the cards are blue, then predicting blue on every trial yields a 70% success rate.
What subjects tended to do instead, however, was match probabilities—that is, predict the more probable event with the relative frequency with which it occurred. For example, subjects tended to predict 70% of the time that the blue card would occur and 30% of the time that the red card would occur. Such a strategy yields a 58% success rate, because the subjects are correct 70% of the time when the blue card occurs (which happens with probability .70) and 30% of the time when the red card occurs (which happens with probability .30); (.70 × .70) + (.30 × .30) = .58.
In fact, subjects predict the more frequent event with a slightly higher probability than that with which it occurs, but do not come close to predicting its occurrence 100% of the time, even when they are paid for the accuracy of their predictions… For example, subjects who were paid a nickel for each correct prediction over a thousand trials… predicted [the more common event] 76% of the time.
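The arithmetic above is easy to check with a small Monte Carlo sketch (illustrative only — the function name and parameters are mine, not from the original experiments). It compares always betting blue, probability matching at 70%, and the 76% rate the paid subjects actually used:

```python
import random

def simulate(strategy_blue_rate, card_blue_rate=0.70, trials=100_000, seed=0):
    """Fraction of correct predictions when betting blue with a fixed probability,
    against an i.i.d. sequence of cards that are blue with the given base rate."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        card_is_blue = rng.random() < card_blue_rate
        guess_is_blue = rng.random() < strategy_blue_rate
        correct += card_is_blue == guess_is_blue
    return correct / trials

print(simulate(1.00))  # always bet blue: lands near 0.70
print(simulate(0.70))  # probability matching: lands near 0.58
print(simulate(0.76))  # the paid subjects' rate: lands near 0.60
```

The simulation confirms the closed-form numbers: matching the 70/30 frequencies earns about 58%, and even the incentivized subjects' 76% blue rate only recovers about 60%, well short of the 70% available by betting blue every time.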
Do not think that this experiment is only about a minor flaw in gambling strategies. It compactly illustrates the most important idea in all of rationality.
Subjects just keep guessing red, as if they think they have some way of predicting the random sequence. Of this experiment Dawes goes on to say, “Despite feedback through a thousand trials, subjects cannot bring themselves to believe that the situation is one in which they cannot predict.”
But the error must go deeper than that. Even if subjects think they’ve come up with a hypothesis, they don’t have to actually bet on that prediction in order to test their hypothesis. They can say, “Now if this hypothesis is correct, the next card will be red”—and then just bet on blue. They can pick blue each time, accumulating as many nickels as they can, while mentally noting their private guesses for any patterns they thought they spotted. If their predictions come out right, then they can switch to the newly discovered sequence.
I wouldn’t fault a subject for continuing to invent hypotheses—how could they know the sequence is truly beyond their ability to predict? But I would fault a subject for betting on the guesses, when this wasn’t necessary to gather information, and literally hundreds of earlier guesses had been disconfirmed.
Can even a human be that overconfident?
I would suspect that something simpler is going on—that the all-blue strategy just didn’t occur to the subjects.
People see a mix of mostly blue cards with some red, and suppose that the optimal betting strategy must be a mix of mostly blue cards with some red.
It is a counterintuitive idea that, given incomplete information, the optimal betting strategy does not resemble a typical sequence of cards.
It is a counterintuitive idea that the optimal strategy is to behave lawfully, even in an environment that has random elements.
It seems like your behavior ought to be unpredictable, just like the environment—but no! A random key does not open a random lock just because they are “both random.”
You don’t fight fire with fire; you fight fire with water. But this thought involves an extra step, a new concept not directly activated by the problem statement, and so it’s not the first idea that comes to mind.
In the dilemma of the blue and red cards, our partial knowledge tells us—on each and every round—that the best bet is blue. This advice of our partial knowledge is the same on each and every round. If 30% of the time we go against our partial knowledge and bet on red instead, then we will do worse thereby—because now we’re being outright stupid, betting on what we know is the less probable outcome.
If you bet on red every round, you would do as badly as you could possibly do; you would be 100% stupid. If you bet on red 30% of the time, faced with 30% red cards, then you’re making yourself 30% stupid.
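The "30% stupid" claim can be made exact: expected accuracy is a linear function of how often you bet blue, so every red bet costs the same fixed amount and the optimum sits at the endpoint, betting blue 100% of the time. A minimal sketch (function name mine, for illustration):

```python
def expected_accuracy(p_bet_blue, p_card_blue=0.70):
    # Correct when we bet blue and the card is blue,
    # or when we bet red and the card is red.
    return p_bet_blue * p_card_blue + (1 - p_bet_blue) * (1 - p_card_blue)

for p in (0.0, 0.3, 0.7, 1.0):
    print(f"bet blue {p:.0%} of the time -> accuracy {expected_accuracy(p):.2f}")
```

With a 70% blue deck, each shift of one percentage point from blue bets to red bets forfeits a constant 0.4 points of expected accuracy (0.70 − 0.30), which is why all-red scores 0.30, matching scores 0.58, and all-blue scores 0.70.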
When your knowledge is incomplete—meaning that the world will seem to you to have an element of randomness—randomizing your actions doesn’t solve the problem. Randomizing your actions takes you further from the target, not closer. In a world already foggy, throwing away your intelligence just makes things worse.
It is a counterintuitive idea that the optimal strategy can be to think lawfully, even under conditions of uncertainty.
And so there are not many rationalists, for most who perceive a chaotic world will try to fight chaos with chaos. You have to take an extra step, and think of something that doesn’t pop right into your mind, in order to imagine fighting fire with something that is not itself fire. You have heard the unenlightened ones say, “Rationality works fine for dealing with rational people, but the world isn’t rational.” But faced with an irrational opponent, throwing away your own reason is not going to help you. There are lawful forms of thought that still generate the best response, even when faced with an opponent who breaks those laws. Decision theory does not burst into flames and die when faced with an opponent who disobeys decision theory.
This is no more obvious than the idea of betting all blue, faced with a sequence of both blue and red cards. But each bet that you make on red is an expected loss, and so too with every departure from the Way in your own thinking.
How many Star Trek episodes are thus refuted? How many theories of AI?
Robyn M. Dawes, Rational Choice in an Uncertain World; Yaacov Schul and Ruth Mayo, "Searching for Certainty in an Uncertain World: The Difficulty of Giving Up the Experiential for the Rational Mode of Thinking," Journal of Behavioral Decision Making 16, no. 2 (2003): 93–106, doi:10.1002/bdm.434.