Words as Hidden Inferences
Suppose I find a barrel, sealed at the top, but with a hole large enough for a hand. I reach in and feel a small, curved object. I pull the object out, and it’s blue—a bluish egg. Next I reach in and feel something hard and flat, with edges—which, when I extract it, proves to be a red cube. I pull out 11 eggs and 8 cubes, and every egg is blue, and every cube is red.
Now I reach in and I feel another egg-shaped object. Before I pull it out and look, I have to guess: What will it look like?
The evidence doesn’t prove that every egg in the barrel is blue and every cube is red. The evidence doesn’t even argue this all that strongly: 19 is not a large sample size. Nonetheless, I’ll guess that this egg-shaped object is blue—or as a runner-up guess, red. If I guess anything else, there are as many possibilities as distinguishable colors—and for that matter, who says the egg has to be a single shade? Maybe it has a picture of a horse painted on.
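One way to put a number on this guess—not something the argument above depends on, just a standard formalization—is Laplace's rule of succession: after drawing n egg-shaped objects and finding all of them blue, estimate the probability that the next one is blue as (n + 1)/(n + 2). A minimal sketch:

```python
def rule_of_succession(successes, trials):
    """Laplace's rule of succession: estimated probability that the next
    trial succeeds, after observing `successes` successes in `trials`."""
    return (successes + 1) / (trials + 2)

# After 11 egg-shaped objects, all blue: probability the next egg is blue.
p_blue = rule_of_succession(11, 11)
print(round(p_blue, 3))  # 12/13, about 0.923 -- confident, but not certain
```

Note that the estimate never reaches 1: the formula agrees with the prose that the sample doesn't *prove* every egg is blue, only that blue is by far the best guess.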
So I say “blue,” with a dutiful patina of humility. For I am a sophisticated rationalist-type person, and I keep track of my assumptions and dependencies—I guess, but I’m aware that I’m guessing… right?
But when a large yellow striped feline-shaped object leaps out at me from the shadows, I think, “Yikes! A tiger!” Not, “Hm… objects with the properties of largeness, yellowness, stripedness, and feline shape, have previously often possessed the properties ‘hungry’ and ‘dangerous,’ and thus, although it is not logically necessary, it may be an empirically good guess that aaauuughhhh crunch crunch gulp.”
The human brain, for some odd reason, seems to have been adapted to make this inference quickly, automatically, and without keeping explicit track of its assumptions.
And if I name the egg-shaped objects “bleggs” (for blue eggs) and the red cubes “rubes,” then, when I reach in and feel another egg-shaped object, I may think, Oh, it’s a blegg, rather than considering all that problem-of-induction stuff.
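The shortcut the label buys can be sketched as a lookup: matching a few observed features selects a category, and the category then supplies properties that were never directly observed. The category names and properties below are the essay's bleggs and rubes; the code structure itself is my own illustration, not a claim about how brains implement it.

```python
# Hypothetical sketch: a word as a bundle of hidden inferences.  Observing
# some features selects a category; the category then supplies properties
# we have NOT observed.
CATEGORIES = {
    "blegg": {"shape": "egg", "color": "blue"},
    "rube": {"shape": "cube", "color": "red"},
}

def infer(observed):
    """Return (category, unobserved properties) for the first category
    consistent with every observed feature, else (None, {})."""
    for name, props in CATEGORIES.items():
        if all(props.get(key) == value for key, value in observed.items()):
            unobserved = {k: v for k, v in props.items() if k not in observed}
            return name, unobserved
    return None, {}

# Feel an egg shape in the dark: the word "blegg" carries the hidden
# inference that the object will look blue when pulled out.
print(infer({"shape": "egg"}))  # ('blegg', {'color': 'blue'})
```

The point of the sketch is that the inference to "blue" happens inside the lookup, without any explicit problem-of-induction reasoning—which is exactly what the label lets the mind skip.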
It is a common misconception that you can define a word any way you like.
This would be true if the brain treated words as purely logical constructs, Aristotelian classes, and you never took out any more information than you put in.
Yet the brain goes on about its work of categorization, whether or not we consciously approve. “All humans are mortal; Socrates is a human; therefore Socrates is mortal”—thus spake the ancient Greek philosophers. Well, if mortality is part of your logical definition of “human,” you can’t logically classify Socrates as human until you observe him to be mortal. But—this is the problem—Aristotle knew perfectly well that Socrates was a human. Aristotle’s brain placed Socrates in the “human” category as efficiently as your own brain categorizes tigers, apples, and everything else in its environment: Swiftly, silently, and without conscious approval.
Aristotle laid down rules under which no one could conclude Socrates was “human” until after he died. Nonetheless, Aristotle and his students went on concluding that living people were humans and therefore mortal; they saw distinguishing properties such as human faces and human bodies, and their brains made the leap to inferred properties such as mortality.
Misunderstanding the working of your own mind does not, thankfully, prevent the mind from doing its work. Otherwise Aristotelians would have starved, unable to conclude that an object was edible merely because it looked and felt like a banana.
So the Aristotelians went on classifying environmental objects on the basis of partial information, the way people had always done. Students of Aristotelian logic went on thinking exactly the same way, but they had acquired an erroneous picture of what they were doing.
If you asked an Aristotelian philosopher whether Carol the grocer was mortal, they would say “Yes.” If you asked them how they knew, they would say “All humans are mortal; Carol is human; therefore Carol is mortal.” Ask them whether it was a guess or a certainty, and they would say it was a certainty (if you asked before the sixteenth century, at least). Ask them how they knew that humans were mortal, and they would say it was established by definition.
The Aristotelians were still the same people, they retained their original natures, but they had acquired incorrect beliefs about their own functioning. They looked into the mirror of self-awareness, and saw something unlike their true selves: they reflected incorrectly.
Your brain doesn’t treat words as logical definitions with no empirical consequences, and so neither should you. The mere act of creating a word can cause your mind to allocate a category, and thereby trigger unconscious inferences of similarity. Or block inferences of similarity; if I create two labels I can get your mind to allocate two categories. Notice how I said “you” and “your brain” as if they were different things?
Making errors about the inside of your head doesn’t change what’s there; otherwise Aristotle would have died when he concluded that the brain was an organ for cooling the blood. Philosophical mistakes usually don’t interfere with blink-of-an-eye perceptual inferences.
But philosophical mistakes can severely mess up the deliberate thinking processes that we use to try to correct our first impressions. If you believe that you can “define a word any way you like,” without realizing that your brain goes on categorizing without your conscious oversight, then you won’t make the effort to choose your definitions wisely.