Over the last fifty years or so, diversity has joined motherhood and apple pie as something everyone embraces, at least in public. But many who would never say publicly that there is only one correct way to do things and one admirable type of person express such feelings in private, at least about what's important to them. For them, diversity may be okay, even good, for the recreational, shopping-mall side of life — food, clothes, music. But when it comes to the serious work of the world, there's One Right Answer, and what counts is getting as close to it as possible. Diversity seems beside the point, if not a liability. To such people, the usual platitudes about diversity sound like obvious, if perhaps well-intentioned, nonsense. Such attitudes do sometimes surface in public. In the recent Supreme Court case about affirmative action at the University of Michigan's law school, Justice Antonin Scalia opined, during the oral arguments, that the only reason there was an issue was that Michigan insisted on having a "super-duper law school" (as he put it) that was also diverse, assuming that these were incompatible goals. If Michigan wanted a diverse student body, Scalia said, it simply had to "lower the standards" and admit a certain number of incompetents.
Such views understandably infuriate many people, who nonetheless can't say just what's wrong with them. This is where the work of SFI External Faculty member Scott Page and his collaborators comes in, by showing that diversity can actually help find the One Right Answer. Page, a political scientist and economist at (coincidentally) the University of Michigan, has drawn on ideas from the study of complexity to outline what he calls "the logic of diversity." Just as classic work in political economy established the "logic of collective action" (Mancur Olson) and the logic of social choice (SFI External Faculty and Nobel Laureate Kenneth Arrow), Page thinks he has found the basic rules explaining how diversity works in society, and complexity science plays an essential part in his explanation. He shows that not only can diversity be helpful in finding good solutions, but it can even be more beneficial than individual competence.
Start with the idea of a complex problem — one that has many aspects that are strongly interdependent. Because of that interdependence, it's hard to modify just one aspect of a solution at a time — if you try to improve one thing, you'll often end up breaking several others. The idea of a "search landscape" is a convenient way of visualizing this. Picture potential solutions as points on a relief map; the height of the map at each point indicates the quality of the solution. The fact that the problem is complex, with many interdependent aspects, corresponds to the landscape having many local peaks. (Search landscapes come from evolutionary biology, where ones like this are called "rugged.") Even if there's a unique highest point on the landscape — a single optimal solution or right answer — it can be hard to find. For large, industrial-strength problems, the only way to find the optimum may be to enumerate all the possible solutions, which is prohibitively time-consuming. So, in practice, any agent trying to solve such a problem will have to employ heuristics, tricks or short-cuts that may work on particular problems, but can't be guaranteed to always find the right answer.
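The picture can be made concrete with a small simulation. In this sketch (the ring of 100 candidate solutions, the random qualities, and the neighbor-stepping heuristic are all illustrative assumptions, not a model from the article), a simple hill-climbing agent reliably gets stuck on whichever local peak is nearest its starting point:

```python
import random

random.seed(0)
N = 100
# Quality of each of N candidate solutions; independent random heights
# make the landscape "rugged", with many local peaks.
quality = [random.random() for _ in range(N)]

def hill_climb(start):
    # A simple heuristic: repeatedly move to the better of the two
    # neighboring solutions on the ring, stopping at a local peak.
    x = start
    while True:
        best = max((x - 1) % N, (x + 1) % N, key=lambda n: quality[n])
        if quality[best] <= quality[x]:
            return x  # no neighbor is better: stuck at a local peak
        x = best

peaks = {hill_climb(s) for s in range(N)}
global_peak = max(range(N), key=lambda n: quality[n])
print(len(peaks), "distinct local peaks;",
      sum(hill_climb(s) == global_peak for s in range(N)),
      "of", N, "starting points reach the global optimum")
```

Most starting points end on one of the many sub-optimal peaks, which is exactly why a single heuristic, run once, can't be trusted to find the One Right Answer.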
Closely associated with this, agents will have particular "perspectives" on the problem, paying attention to some aspects of it and filtering out all the others. If an agent faces two problems that, from its perspective, look the same, it will attempt the same solution to both, even if an outside observer can see differences between them. From its perspective, everything important is identical. If the problem is to estimate the value of a building, one agent might look at just its location and size. It will then guess similar values for similarly situated buildings, regardless of, say, age — it doesn't "see" age. Because perspectives filter out some aspects of the problem, they limit the kinds of heuristics agents can use. For example, if an agent doesn't "see" a building's age, it can't use a valuation heuristic that combines age and size in guessing maintenance costs. Conversely, every heuristic has an implicit perspective, because it responds to some aspects of the problem but not to others.
A weak heuristic is one that comes up with solutions which are, on average, only a little better than one would expect from chance. These tend to be heuristics that get stuck near local peaks in the search landscape and can't escape those traps, never reaching the optimum. A strong heuristic, on the other hand, is one that gives a solution that is generally nearly as good as the actual optimum. Counter-intuitively, Page, with long-time collaborator Lu Hong, professor of finance at Loyola University, has shown that, under very general conditions, a diverse population of agents, each with a different weak heuristic, will outperform a single agent with a very strong heuristic — as Page and Hong say, "diversity trumps ability." One way to grasp Page and Hong's result is to imagine the diverse but inept agents taking turns at the problem, each one starting from where the last one got stuck. Each of them tends to get trapped at local peaks, but, because they're diverse agents, they get trapped at different peaks. By using each other's work, the group avoids these local traps, and gets arbitrarily close to the optimal solution — closer than any given individual agent with a strong heuristic.
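A minimal simulation conveys the flavor of the turn-taking story. In this sketch (the ring landscape, the step-size heuristics, and every parameter are simplifying assumptions made here, not Page and Hong's actual specification), each agent's heuristic is a set of step sizes it can try from its current solution, and the team cycles through its members, each starting where the previous one got stuck:

```python
import itertools
import random

random.seed(42)
N = 200
# Quality of each of N candidate solutions, arranged on a ring.
value = [random.random() for _ in range(N)]

def solve(start, steps):
    # One agent's heuristic: from the current solution, try jumping
    # ahead by each of its step sizes, moving whenever that improves
    # things; stop when no step helps (a local trap for this agent).
    x = start
    improved = True
    while improved:
        improved = False
        for s in steps:
            if value[(x + s) % N] > value[x]:
                x = (x + s) % N
                improved = True
    return x

def team_solve(start, team):
    # The team takes turns: each agent starts where the previous one
    # got stuck, cycling until a whole round yields no improvement.
    x = start
    while True:
        x0 = x
        for steps in team:
            x = solve(x, steps)
        if x == x0:
            return x

def avg_quality(solver):
    # Average quality of the final solution over every starting point.
    return sum(value[solver(s)] for s in range(N)) / N

# All heuristics built from three distinct step sizes between 1 and 12.
heuristics = list(itertools.combinations(range(1, 13), 3))

# The individually strongest heuristic...
best = max(heuristics, key=lambda h: avg_quality(lambda s: solve(s, h)))
# ...versus a team of ten randomly drawn (individually weaker) ones.
team = random.sample(heuristics, 10)

print("best individual:", round(avg_quality(lambda s: solve(s, best)), 3))
print("diverse team:   ", round(avg_quality(lambda s: team_solve(s, team)), 3))
```

In runs of this toy model the diverse team typically scores close to the global optimum, because a solution that traps one member's step sizes is usually not a trap for another's; no expected numbers are promised here, since they depend on the random landscape.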
Remarkably enough, one doesn't get the same improvement from using a diverse population of agents with strong heuristics. The reason is that the strong heuristics all tend to be similar to one another — they know the same tricks, as it were — and so tend to get stuck on the same local peaks. Because they're all good in the same way, they have no ability to compensate for each other's weaknesses. "If the best problem solvers tend to think about a problem similarly, then it stands to reason that as a group, they may not be very effective," Page says.
It's worth noting that this isn't just a trick with averaging. A new book, The Wisdom of Crowds: Why the Many Are Smarter than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations, written by The New Yorker columnist James Surowiecki, has recently popularized the idea that groups can, in some ways, be smarter than their members, which is superficially similar to Page's results. While Surowiecki gives many examples of what one might call collective cognition, where groups out-perform isolated individuals, he really has only one explanation for this phenomenon, based on one of his examples: jelly beans. In a long series of experiments, students in psychology classes are asked to guess, for example, the number of jelly beans in a jar. Quite reliably, the average guess of all the students in the class is closer to the real number than the individual guesses, sometimes astonishingly close. The reason this works, Surowiecki says, is that averaging together many independent, unbiased guesses gives a result that is probably closer to the truth than any one guess. While true — it's the central limit theorem of statistics — it's far from being the only way in which diversity can be beneficial in problem solving.
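The jelly-bean effect itself is easy to reproduce numerically. This sketch assumes, purely for illustration, a jar of 850 beans and 100 students whose guesses are unbiased but noisy; the averaged guess lands much closer to the truth than a typical individual does:

```python
import random

random.seed(1)
TRUE_COUNT = 850  # jelly beans in the jar (an invented number)

# Each student's guess is unbiased but noisy: truth plus Gaussian error.
guesses = [TRUE_COUNT + random.gauss(0, 200) for _ in range(100)]

crowd_guess = sum(guesses) / len(guesses)
crowd_error = abs(crowd_guess - TRUE_COUNT)
# How far off a typical individual student is, on average.
typical_error = sum(abs(g - TRUE_COUNT) for g in guesses) / len(guesses)

print("crowd error:  ", round(crowd_error, 1))
print("typical error:", round(typical_error, 1))
```

With 100 independent guesses, the error of the average shrinks by roughly a factor of ten relative to the error of an average guesser — the central limit theorem at work, and the whole of Surowiecki's mechanism.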
If you think the only way that collective cognition can work is through pooling independent guesses, you will be puzzled about situations where it works when people's guesses are dependent. In particular, you'd expect that if each person's guess is strongly dependent on the last person's guess, then the group should fail miserably. But in Page and Hong's model, remember, each agent starts from where the last one got stuck, so its guess does depend strongly on the previous agent's. Yet these groups not only don't do badly, they do better than they would if each agent acted independently of the others. So while Surowiecki's idea is not wrong, it's incomplete.
It's also important to distinguish this idea from the division of labor, even the division of cognitive labor. Big engineering projects (say, designing a new jet) are often broken down into modules which are nearly self-contained, with well-defined interfaces connecting them. This means that someone can work on the autopilot without knowing all the details of the engine, just some of its interface properties (for example, the maximum thrust it can deliver), and someone else can work on the engine without knowing the details of the autopilot. This in turn means that engineers can specialize in control or power, solve their individual problems, and then put their individual partial solutions together, with some hope of the result working. Often some tweaking is needed, because the interfaces don't perfectly encapsulate the different modules, but much less than if the project wasn't broken up this way to start with. Specialization is a way of using diversity, because control and power engineers do learn to think differently (as anyone who's had to coordinate both can tell you). But this isn't what's going on in Page and Hong's set-up, where the team is dealing with a single, undivided, perhaps indivisible, problem.
The division of labor is, in part, an adaptation for handling complex problems, but only those which are complex in the straightforward sense of being very large. It relies on finding a way of decomposing the large problem into nearly-separate parts, so that it can be attacked through a strategy of divide-and-conquer, with different people specializing in conquering the various divisions. (This topic, and its relation to hierarchical structure, was explored by Herbert Simon in his classic Sciences of the Artificial.) Diversity, in the sense Page is talking about, is another way of adapting to complexity, and specifically to complex problems which are not decomposable into neat hierarchies.
Put abstractly, the idea is this: Agents have only a limited capacity to represent, learn about, and predict their world, and so solve their problems. When the problem or environment is too complex for any one agent, then you should have many weak agents make partial, incomplete, overlapping representations. You'll be better off by doing this, and then learning a way to combine them, than by trying to find a single, globally accurate representation — say, by building a single super-genius agent that can handle the problem all by itself. Collectively, the combined representations of the group of agents are equivalent to a single high-capacity representation. But nobody, individually, has anything like the complete picture; in fact, everybody's individual picture is pretty much wrong, or at best drastically incomplete.
Powerful, high-level capacities which emerge from the interplay of low-level components are a common feature of complex systems, but here as elsewhere, just having the components and letting them interact is not enough. The organization of the interactions is crucial. In the brain, for instance, this is the difference between coherent thought and delirium, or even epilepsy. In distributed problem solving, social organization is the key to realizing the potential benefits of diversity, and avoiding mutual incomprehension or socially-amplified folly. Improving organization raises performance in diverse groups by making it easier for the agents to utilize each other's abilities and efforts, which can be more important, as we've seen, than improving those individual abilities. Page and Hong's model shows, in a sense, how well the group could do with the right organization, but not how to find that structure.
When political scientists, say, come up with dozens of different models for predicting elections, each backed up by their own data set, the thing to do might not be to try to find the One Right Model, but instead to find good ways to combine these partial, overlapping models. The collective picture could turn out to be highly accurate, even if the component models are bad, and their combination is too complicated for individual social scientists to grasp.
It's already widely appreciated that markets perform this kind of distributed problem-solving. No individual in the market can grasp all the information about goods and services in every economy, much less search over allocations to ensure that supply and demand balance. But the market as a whole not only finds such allocations, it adjusts them as conditions change, and does so by using the diverse local knowledge of the participants. Even though it's appreciated that markets can solve problems individuals can't grasp, it's disconcerting to think this way about something like a scientific discipline, or about more formal institutions, such as governments and businesses. Or, for that matter, law schools. In a provocative mood, Page suggests that Scalia "got things completely backwards." Given the complex interdependencies of the problems we ask the legal system to resolve, it might well be that diversity is the only way to excellence, that "ability is diversity, and to say there's a trade-off is in some sense to misunderstand the nature of ability."
Which is not to say that diversity has no drawbacks. Diversity of heuristics and perspectives tends to be linked to diversity of values and interests. This, as Page says, is where things get tricky. We've been assuming everyone agrees on what makes a solution good or bad, that everyone shares a common set of interests. But, unless everybody values exactly the same thing, and receives exactly the same share of what the group gets, this is unlikely to be the case. Diverse groups, good at solving problems, will tend to be ones whose members have diverse ideas about which problems they ought to solve. Should a police department catch criminals, deter potential criminals, reduce the harm done by crime, or get the police chief reappointed? The study of how to aggregate differing agents' preferences into a common collective choice brings us back to Kenneth Arrow's "logic of social choice" mentioned earlier.
This is bad news, because the main thrust of the logic of social choice is Arrow's "impossibility theorem." The gist of the theorem is that, once a group faces at least three alternatives, no procedure for aggregating its members' individual preference rankings into a collective ranking can satisfy a short list of seemingly reasonable conditions — among them that unanimous preferences are respected, that no single member dictates the outcome, and that the group's choice between two alternatives depends only on how members rank those two.
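The classic illustration of the difficulty, which predates Arrow, is the Condorcet cycle: with three voters and three alternatives, majority rule can prefer A to B, B to C, and C to A, leaving no coherent group ranking at all. A few lines of code make the point:

```python
# Three voters with cyclic preferences over alternatives A, B, C
# (the classic Condorcet example): each tuple lists one voter's
# ranking from most to least preferred.
ballots = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]

def majority_prefers(x, y):
    # True if a majority of voters rank x above y.
    wins = sum(1 for b in ballots if b.index(x) < b.index(y))
    return wins > len(ballots) / 2

# Pairwise majority votes form a cycle: A beats B, B beats C, C beats A.
print(majority_prefers("A", "B"))  # True
print(majority_prefers("B", "C"))  # True
print(majority_prefers("C", "A"))  # True
```

Each pairwise vote is perfectly sensible on its own; it is only when they are stitched into a group ranking that the contradiction appears.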
The way to avoid the impossibility theorem is for people in the group to agree in their preferences (or at least not disagree too much). One way to achieve this is to limit the membership to those with the "right" values, but this will in turn reduce the diversity of heuristics and perspectives: such groups may find it easy to decide what to do, but they will be less effective at doing it. The way to preserve diversity is instead to reach agreement on preferences, either by arriving at new, shared values, or by crafting compromises which satisfy divergent ones. These are the cornerstones of democratic deliberation. In the end, perhaps, the logic of diversity explains why democracy is so hard, and so necessary.
Cosma Shalizi is a postdoctoral fellow at the University of Michigan's Center for the Study of Complex Systems.
Santa Fe Institute Bulletin, volume 20, no. 1 (2005), pp. 34--38