Extensions and Intensions

“What is red?”

“Red is a color.”

“What’s a color?”

“A color is a property of a thing.”

But what is a thing? And what's a property? Soon the two are lost in a maze of words defined in other words, the problem that Stevan Harnad once described as trying to learn Chinese from a Chinese/Chinese dictionary.

Alternatively, if you asked me “What is red?” I could point to a stop sign, then to someone wearing a red shirt, and a traffic light that happens to be red, and blood from where I accidentally cut myself, and a red business card, and then I could call up a color wheel on my computer and move the cursor to the red area. This would probably be sufficient, though if you know what the word “No” means, the truly strict would insist that I point to the sky and say “No.”

I think I stole this example from S. I. Hayakawa—though I’m really not sure, because I heard this way back in the indistinct blur of my childhood. (When I was twelve, my father accidentally deleted all my computer files. I have no memory of anything before that.)

But that’s how I remember first learning about the difference between intensional and extensional definition. To give an “intensional definition” is to define a word or phrase in terms of other words, as a dictionary does. To give an “extensional definition” is to point to examples, as adults do when teaching children. The preceding sentence gives an intensional definition of “extensional definition,” which makes it an extensional example of “intensional definition.”
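If it helps to see the distinction in code, here is a minimal sketch of my own (in Python; the RGB threshold and the list of examples are arbitrary stand-ins, not anything canonical): an intensional definition is a rule for membership stated in other terms, while an extensional definition just points at examples.

```python
# Toy illustration of the two kinds of definition.
# The threshold rule and the example list are made-up stand-ins.

def is_red_intensional(rgb):
    """Intensional definition: a rule stated in other terms (here, numbers)."""
    r, g, b = rgb
    return r > 150 and g < 100 and b < 100

# Extensional definition: point at examples instead of stating a rule.
red_examples = [
    "stop sign",
    "red shirt",
    "traffic light (top lamp)",
    "blood",
    "red business card",
]

print(is_red_intensional((200, 30, 40)))  # True: passes the stated rule
print("stop sign" in red_examples)        # True: among the pointed-to examples
```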

In Hollywood Rationality and popular culture generally, “rationalists” are depicted as word-obsessed, floating in endless verbal space disconnected from reality.

But the actual Traditional Rationalists have long insisted on maintaining a tight connection to experience:

If you look into a textbook of chemistry for a definition of lithium, you may be told that it is that element whose atomic weight is 7 very nearly. But if the author has a more logical mind he will tell you that if you search among minerals that are vitreous, translucent, grey or white, very hard, brittle, and insoluble, for one which imparts a crimson tinge to an unluminous flame, this mineral being triturated with lime or witherite rats-bane, and then fused, can be partly dissolved in muriatic acid; and if this solution be evaporated, and the residue be extracted with sulphuric acid, and duly purified, it can be converted by ordinary methods into a chloride, which being obtained in the solid state, fused, and electrolyzed with half a dozen powerful cells, will yield a globule of a pinkish silvery metal that will float on gasolene; and the material of that is a specimen of lithium.

That’s an example of “logical mind” as described by a genuine Traditional Rationalist, rather than a Hollywood scriptwriter.

But note: Peirce isn’t actually showing you a piece of lithium. He didn’t have pieces of lithium stapled to his book. Rather he’s giving you a treasure map—an intensionally defined procedure which, when executed, will lead you to an extensional example of lithium. This is not the same as just tossing you a hunk of lithium, but it’s not the same as saying “atomic weight 7” either. (Though if you had sufficiently sharp eyes, saying “3 protons” might let you pick out lithium at a glance…)
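You could caricature the treasure-map idea in code, as a toy sketch (the mineral list and attributes are invented for illustration): the map is a procedure which, when executed, hands you an instance. That is different both from handing over the instance itself and from a mere description like "atomic weight 7."

```python
# Toy contrast (my construction): a treasure map is a procedure that,
# when run, produces an example; it is not the example itself.

def treasure_map():
    """A stand-in for Peirce's mineralogical recipe."""
    minerals = [{"name": "quartz", "flame": "none"},
                {"name": "lepidolite", "flame": "crimson"}]
    for m in minerals:
        if m["flame"] == "crimson":  # the test stated in the recipe
            return m                 # the extensional example you dig up

specimen = treasure_map()    # following the map yields the treasure
print(specimen["name"])      # 'lepidolite', a lithium-bearing mineral
```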

So that is intensional and extensional definition: two ways of telling someone else what you mean by a concept. When I talked about "definitions" above, I talked about a way of communicating concepts, telling someone else what you mean by "red," "tiger," "human," or "lithium." Now let's talk about the actual concepts themselves.

The actual intension of my “tiger” concept would be the neural pattern (in my temporal cortex) that inspects an incoming signal from the visual cortex to determine whether or not it is a tiger.

The actual extension of my “tiger” concept is everything I call a tiger.

Intensional definitions don't capture entire intensions; extensional definitions don't capture entire extensions. If I point to just one tiger and say the word "tiger," the communication may fail if the listener thinks I mean "dangerous animal" or "male tiger" or "yellow thing." Similarly, if I say "dangerous yellow-black striped animal" without pointing to anything, the listener may visualize giant hornets.
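A toy model of that failure mode (the attributes and candidate rules below are hypothetical, mine rather than the essay's): a single pointed-at example is consistent with many different membership tests, so pointing once cannot tell the listener which test I meant.

```python
# One observed example, described by made-up attributes.
example = {"species": "tiger", "color": "yellow-black",
           "dangerous": True, "sex": "male"}

# Several candidate intensions, all consistent with the single example.
candidates = {
    "tiger":            lambda x: x["species"] == "tiger",
    "dangerous animal": lambda x: x["dangerous"],
    "male tiger":       lambda x: x["species"] == "tiger" and x["sex"] == "male",
    "yellow thing":     lambda x: x["color"].startswith("yellow"),
}

# Every candidate accepts the example, so one pointing gesture
# does not discriminate between them.
for name, test in candidates.items():
    print(name, test(example))  # all print True
```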

You can’t capture in words all the details of the cognitive concept—as it exists in your mind—that lets you recognize things as tigers or nontigers. It’s too large. And you can’t point to all the tigers you’ve ever seen, let alone everything you would call a tiger.

The strongest definitions use a crossfire of intensional and extensional communication to nail down a concept. Even so, you only communicate maps to concepts, or instructions for building concepts—you don’t communicate the actual categories as they exist in your mind or in the world.

(Yes, with enough creativity you can construct exceptions to this rule, like “Sentences Eliezer Yudkowsky has published containing the term ‘huragaloni’ as of Feb. 4, 2008.” I’ve just shown you this concept’s entire extension. But except in mathematics, definitions are usually treasure maps, not treasure.)

So that’s another reason you can’t “define a word any way you like”: You can’t directly program concepts into someone else’s brain.

Even within the Aristotelian paradigm, where we pretend that the definitions are the actual concepts, you don’t have simultaneous freedom of intension and extension. Suppose I define Mars as “A huge red rocky sphere, around a tenth of Earth’s mass and 50% further away from the Sun.” It’s then a separate matter to show that this intensional definition matches some particular extensional thing in my experience, or indeed, that it matches any real thing whatsoever. If instead I say “That’s Mars” and point to a red light in the night sky, it becomes a separate matter to show that this extensional light matches any particular intensional definition I may propose—or any intensional beliefs I may have—such as “Mars is the God of War.”

But most of the brain’s work of applying intensions happens sub-deliberately. We aren’t consciously aware that our identification of a red light as “Mars” is a separate matter from our verbal definition “Mars is the God of War.” No matter what kind of intensional definition I make up to describe Mars, my mind believes that “Mars” refers to this thingy, and that it is the fourth planet in the Solar System.

When you take into account the way the human mind actually, pragmatically works, the notion “I can define a word any way I like” soon becomes “I can believe anything I want about a fixed set of objects” or “I can move any object I want in or out of a fixed membership test.” Just as you can’t usually convey a concept’s whole intension in words because it’s a big complicated neural membership test, you can’t control the concept’s entire intension because it’s applied sub-deliberately. This is why arguing that XYZ is true “by definition” is so popular. If definition changes behaved like the empirical null-ops they’re supposed to be, no one would bother arguing them. But abuse definitions just a little, and they turn into magic wands—in arguments, of course; not in reality.
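As a toy model of that last point (again my own construction, not the essay's): if "redefining" a word really were a null-op, it would amount to relabeling a fixed membership test over fixed objects, and nothing that could be argued about would change.

```python
# Toy model: a concept is a membership test applied to fixed objects.
objects = [{"name": "Shere Khan", "stripes": True},
           {"name": "hornet", "stripes": True, "wings": True}]

def membership_test(x):
    # The real test is applied sub-deliberately; here it is fixed code.
    return x["stripes"] and not x.get("wings", False)

# "Redefining the word" as a pure relabeling: same test, new label.
definitions = {"tiger": membership_test}
definitions["big-cat"] = definitions.pop("tiger")  # the null-op

# The extension is unchanged, so the relabeling settled nothing
# about the world.
print([x["name"] for x in objects if definitions["big-cat"](x)])
# ['Shere Khan']
```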

Charles Sanders Peirce, Collected Papers (Harvard University Press, 1931).
