
Faster Than Science

I sometimes say that the method of science is to amass such an enormous mountain of evidence that even scientists cannot ignore it; and that this is the distinguishing characteristic of a scientist. (A non-scientist will ignore it anyway.)

Max Planck was even less optimistic:[1]

A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.

I am much tickled by this notion, because it implies that the power of science to distinguish truth from falsehood ultimately rests on the good taste of grad students.

The gradual increase in acceptance of many-worlds in academic physics suggests that there are physicists who will only accept a new idea given some combination of epistemic justification, and a sufficiently large academic pack in whose company they can be comfortable. As more physicists accept, the pack grows larger, and hence more people go over their individual thresholds for conversion—with the epistemic justification remaining essentially the same.
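
One way to see the dynamic: this is a threshold cascade, in which each conversion enlarges the pack and so pushes the next holdout over their line. Here is a minimal simulation sketch; the population size, the fixed evidence level, and the uniformly random thresholds are all my illustrative assumptions, not anything from the essay.

```python
import random

# A toy threshold-cascade model (illustrative assumptions, not from the
# essay): each physicist converts once the fixed epistemic justification
# plus the fraction already converted exceeds their personal threshold.
random.seed(0)
N = 1_000
EVIDENCE = 0.2                                    # held constant throughout
thresholds = [random.random() for _ in range(N)]  # individual thresholds

converted = 0
while True:
    pack = converted / N                          # size of the comfortable pack
    now = sum(1 for t in thresholds if t <= EVIDENCE + pack)
    if now == converted:                          # no one new crossed a threshold
        break
    converted = now

print(f"{converted}/{N} physicists converted; evidence fixed at {EVIDENCE}")
```

With thresholds spread out like this, any fixed positive level of evidence eventually converts everyone; the justification never changes, only the size of the pack does.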

But Science still gets there eventually, and this is sufficient for the ratchet of Science to move forward, and raise up a technological civilization.

Scientists can be moved by groundless prejudices, by undermined intuitions, by raw herd behavior—the panoply of human flaws. Each time a scientist shifts belief for epistemically unjustifiable reasons, it requires more evidence, or new arguments, to cancel out the noise.

The “collapse of the wavefunction” has no experimental justification, but it appeals to the (undermined) intuition of a single world. Then it may take an extra argument—say, that collapse violates Special Relativity—to begin the slow academic disintegration of an idea that should never have been assigned non-negligible probability in the first place.

From a Bayesian perspective, human academic science as a whole is a highly inefficient processor of evidence. Each time an unjustifiable argument shifts belief, you need an extra justifiable argument to shift it back. The social process of science leans on extra evidence to overcome cognitive noise.

A more charitable way of putting it is that scientists will adopt positions that are theoretically insufficiently extreme, compared to the ideal positions that scientists would adopt, if they were Bayesian AIs and could trust themselves to reason clearly.

But don’t be too charitable. The noise we are talking about is not all innocent mistakes. In many fields, debates drag on for decades after they should have been settled. And not because the scientists on both sides refuse to trust themselves and agree they should look for additional evidence. But because one side keeps throwing up more and more ridiculous objections, and demanding more and more evidence, from an entrenched position of academic power, long after it becomes clear from which quarter the winds of evidence are blowing. (I’m thinking here about the debates surrounding the invention of evolutionary psychology, not about many-worlds.)

Is it possible for individual humans or groups to process evidence more efficiently—reach correct conclusions faster—than human academic science as a whole?

“Ideas are tested by experiment. That is the core of science.” And this must be true, because if you can’t trust Zombie Feynman, who can you trust?

Yet where do the ideas come from?

You may be tempted to reply, “They come from scientists. Got any other questions?” In Science you’re not supposed to care where the hypotheses come from—just whether they pass or fail experimentally.

Okay, but if you remove all new ideas, the scientific process as a whole stops working because it has no alternative hypotheses to test. So inventing new ideas is not a dispensable part of the process.

Now put your Bayesian goggles back on. As described in Einstein’s Arrogance, there are queries that are not binary—where the answer is not “Yes” or “No,” but drawn from a larger space of structures, e.g., the space of equations. In such cases it takes far more Bayesian evidence to promote a hypothesis to your attention than to confirm the hypothesis.

If you’re working in the space of all equations that can be specified in 32 bits or fewer, you’re working in a space of roughly 4 billion equations. It takes far more Bayesian evidence to raise one of those hypotheses to the 10% probability level than it does to raise that same hypothesis further from 10% to 90%.
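
To make the asymmetry concrete, here is a minimal worked sketch, assuming a uniform prior over the roughly 4 billion 32-bit equations (the uniform prior is my simplifying assumption):

```python
import math

def bits_of_evidence(p_from: float, p_to: float) -> float:
    """Bits of evidence (log2 of the likelihood ratio) needed to move
    the posterior odds from p_from to p_to."""
    odds_from = p_from / (1 - p_from)
    odds_to = p_to / (1 - p_to)
    return math.log2(odds_to / odds_from)

prior = 2 ** -32  # uniform prior: one equation out of ~4 billion

print(f"prior -> 10%: {bits_of_evidence(prior, 0.10):.1f} bits")  # ~28.8
print(f"10%   -> 90%: {bits_of_evidence(0.10, 0.90):.1f} bits")   # ~6.3
```

On these assumptions, promoting the right equation to your attention takes more than four times as many bits as confirming it afterward.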

When the idea-space is large, coming up with ideas worthy of testing involves much more work—in the Bayesian-thermodynamic sense of “work”—than merely obtaining an experimental result with p < 0.0001 for the new hypothesis over the old hypothesis. (Even reading that p-value generously, as a 10,000:1 likelihood ratio favoring the new hypothesis, it delivers only about 13 bits, less than half the work of locating the hypothesis in the first place.)

If this doesn’t seem obvious-at-a-glance, pause here and review Einstein’s Arrogance.

The scientific process has always relied on scientists to come up with hypotheses to test, via some process not further specified by Science. Suppose you came up with some way of generating hypotheses that was completely crazy—say, pumping a robot-controlled Ouija board with the digits of pi—and the resulting suggestions kept on getting verified experimentally. The pure ideal essence of Science wouldn’t skip a beat. The pure ideal essence of Bayes would burst into flames and die.

(Compared to Science, Bayes is falsified by more of the possible outcomes.)

This doesn’t mean that the process of deciding which ideas to test is unimportant to Science. It means that Science doesn’t specify it.

In practice, the robot-controlled Ouija board doesn’t work. In practice, there are some scientific queries whose answer space is so large that, if you picked models at random to test, it would take zillions of years to hit on one that made good predictions—like getting monkeys to type Shakespeare.
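
The “zillions of years” arithmetic is easy to sketch. Both numbers below are hypothetical stand-ins of mine, not figures from the essay: an answer space of 2^100 candidate models, tested at a billion models per second.

```python
# Purely hypothetical numbers: an answer space of 2**100 candidate models,
# tested uniformly at random at a generous one billion models per second.
SPACE = 2 ** 100
TESTS_PER_SECOND = 1e9
SECONDS_PER_YEAR = 3.156e7

expected_trials = (SPACE + 1) / 2  # mean trials, sampling without repeats
years = expected_trials / TESTS_PER_SECOND / SECONDS_PER_YEAR
print(f"expected search time: ~{years:.1e} years")  # ~2e13 years
```

That works out to roughly twenty trillion years, more than a thousand times the current age of the universe.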

At the frontier of science—the boundary between ignorance and knowledge, where science advances—the process relies on at least some individual scientists (or working groups) seeing things that are not yet confirmed by Science. That’s how they know which hypotheses to test, in advance of the test itself.

If you take your Bayesian goggles off, you can say, “Well, they don’t have to know, they just have to guess.” If you put your Bayesian goggles back on, you realize that “guessing” with 10% probability requires nearly as much epistemic work to have been successfully performed, behind the scenes, as “guessing” with 80% probability—at least for large answer spaces.
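
Using the same illustrative setup as before (a uniform prior over the 2^32 equations, again my assumption), the two kinds of “guessing” cost nearly the same:

```python
import math

def bits_of_evidence(p_from: float, p_to: float) -> float:
    """Bits of evidence needed to move posterior odds from p_from to p_to."""
    return math.log2((p_to / (1 - p_to)) / (p_from / (1 - p_from)))

prior = 2 ** -32  # uniform prior over the 32-bit equation space

print(f"'guess' at 10%: {bits_of_evidence(prior, 0.10):.1f} bits")  # ~28.8
print(f"'guess' at 80%: {bits_of_evidence(prior, 0.80):.1f} bits")  # ~34.0
```

The difference between a 10% guess and an 80% guess is only about five bits; nearly all the work goes into singling out the hypothesis at all.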

The scientist may not know they have done this epistemic work successfully, in advance of the experiment; but they must, in fact, have done it successfully! Otherwise they will not even think of the correct hypothesis. In large answer spaces, anyway.

So the scientist makes the novel prediction, performs the experiment, publishes the result, and now Science knows it too. It is now part of the publicly accessible knowledge of humankind, that anyone can verify for themselves.

In between was an interval where the scientist rationally knew something that the public social process of science hadn’t yet confirmed. And this is not a trivial interval, though it may be short; for it is where the frontier of science lies, the advancing border.

All of this is more true of non-routine science than of routine science, because the argument turns on large answer spaces, where the answer is not “Yes” or “No” and is not drawn from a small set of obvious alternatives. It is much easier to train people to test ideas than to have good ideas to test.

[1] Max Planck, Scientific Autobiography and Other Papers (New York: Philosophical Library, 1949).
