The bystander effect, also known as bystander apathy, is the finding that larger groups are less likely to act in emergencies—not just individually, but collectively. Put an experimental subject alone in a room and let smoke start coming up from under the door: 75% of subjects will leave to report it. Now put three subjects in the room—real subjects, none of whom know what’s going on. On only 38% of occasions will anyone report the smoke. Put one subject in with two confederates who ignore the smoke, and the subject will report it only 10% of the time—even staying in the room until it becomes hazy.1
On the standard model, the two primary drivers of bystander apathy are diffusion of responsibility and pluralistic ignorance. Cialdini describes the latter:
Very often an emergency is not obviously an emergency. Is the man lying in the alley a heart-attack victim or a drunk sleeping one off? … In times of such uncertainty, the natural tendency is to look around at the actions of others for clues. We can learn from the way the other witnesses are reacting whether the event is or is not an emergency. What is easy to forget, though, is that everybody else observing the event is likely to be looking for social evidence, too. Because we all prefer to appear poised and unflustered among others, we are likely to search for that evidence placidly, with brief, camouflaged glances at those around us. Therefore everyone is likely to see everyone else looking unruffled and failing to act.
Cialdini suggests that if you’re ever in emergency need of help, you point to one single bystander and ask them for help—making it very clear to whom you’re referring. Remember that the total group, combined, may have less chance of helping than one individual.
I’ve mused a bit on the evolutionary psychology of the bystander effect. Suppose that in the ancestral environment, most people in your band were likely to be at least a little related to you—enough to be worth saving, if you were the only one who could do it. But if there are two others present, then the first person to act incurs a cost, while the other two both reap the genetic benefit of a partial relative being saved. Could there have been an arms race for who waited the longest?
As far as I’ve followed this line of speculation, it doesn’t seem to be a good explanation—at the point where the whole group is failing to act, a gene that helps immediately ought to be able to invade, I would think. And the experimental result is not a long wait before helping, but simply failure to help: if helping when you’re the only one who can carries a genetic benefit (and lone subjects do usually help in the experiments), then the group equilibrium should not be that no one helps at all (as happens in the experiments).
So I don’t think an arms race of delay is a plausible evolutionary explanation. More likely, I think, is that we’re looking at a nonancestral problem. If the experimental subjects actually know the apparent victim, the chances of helping go way up (i.e., we’re not looking at the correlate of helping an actual fellow band member). If I recall correctly, if the experimental subjects know each other, the chances of action also go up.
Nervousness about public action may also play a role. If Robin Hanson is right about the evolutionary role of “choking,” then being first to act in an emergency might also be taken as a dangerous bid for high status. (Come to think, I can’t actually recall seeing shyness discussed in analyses of the bystander effect, but that’s probably just my poor memory.)
Can the bystander effect be explained primarily by diffusion of moral responsibility? We could be cynical and suggest that people are mostly interested in not being blamed for not helping, rather than having any positive desire to help—that they mainly wish to escape antiheroism and possible retribution. Something like this may well be a contributor, but two observations militate against it: (a) the experimental subjects did not report smoke coming in from under the door, even though it could well have represented a strictly selfish threat, and (b) telling people about the bystander effect reduces the bystander effect, even though they’re no more likely to be held publicly responsible thereby.
In fact, the bystander effect is one of the main cases I recall offhand where telling people about a bias actually seems able to strongly reduce it—maybe because the appropriate way to compensate is so obvious, and it’s hard to overcompensate (unlike when you’re trying to, e.g., adjust your calibration). So we should be careful not to be too cynical about the implications of the bystander effect and diffusion of responsibility, interpreting individual inaction as a cold, calculated attempt to avoid public censure. People seem at least sometimes to hold themselves responsible, once they realize they’re the only ones who know enough about the bystander effect to be likely to act.
Though I wonder what happens if you know that you’re part of a crowd where everyone has been told about the bystander effect…
Bibb Latané and John M. Darley, “Bystander ‘Apathy’,” American Scientist 57, no. 2 (1969): 244–268, http://www.jstor.org/stable/27828530.