One of the things Holland has been thinking about for a long time is the puzzle of building blocks, of re-usable categorical parts. "Any human can, with the greatest of ease, parse an unfamiliar scene into familiar objects --- trees, buildings, automobiles, other humans, specific animals, and so on. This quick decomposition of complex visual scenes into familiar building blocks is something that we cannot yet mimic with computers" (pp. 24--25) --- because we have almost no idea how it is done. (Much of what we do know about it comes from studying those persons who, from brain damage, cannot parse visual scenes easily if at all, like Luria's Man with a Shattered World.) This problem of finding and using good building blocks recurs in a tremendous number of domains: in making up scientific theories, for example, good building blocks are entities or variables which are subject to simple, reliable, findable laws, so this is next door to the problem of induction, or more precisely of hypothesis. The schema theorem for genetic algorithms is Holland's attempt to address it in that better-behaved domain, in essence specifying the kind of building blocks the GA can't fail to find, and how quickly it is likely to find them.
The problem of emergence is, roughly speaking --- and half the trouble with it is that everything we say about it is only rough --- the flip side of the problem of building blocks. Instead of asking how we, or other creatures, carve Nature at the joints, we ask why Nature has those particular joints, or even has joints at all, and is not (to continue with the metaphor) a single undifferentiated hunk of inharmoniously quivering meat, a fleshy compound of chaos and ancient night. Some regularity, someplace far down in the depths where quantum field theory meets general relativity and atoms and void merge into one another, we may take to be given, an empirical fact, not susceptible to any meaningful explanation, in short, the rules of the game: but the rest of the observable, exploitable order in the universe --- benzene molecules, PV = nRT, snowflakes, cyclonic storms, kittens, cats, young love, middle-aged remorse, financial euphoria accompanied with acute gullibility, prevaricating candidates for public office, tapeworms, jet-lag, and unfolding cherry blossoms --- where do all these regularities come from? They're connected to the fundamental physics somehow, just like pawn formations and end-games are connected to the rules of chess, but how do you get from one to the other? Call this "emergence" if you like --- it's a fine-sounding word, and brings to mind southwestern creation myths in an oddly apt way --- but that label in itself just marks a mystery, without explaining anything. And whatever answer we come up with had better not just work for the physical universe, for the Realized World, but (as Holland's persistent use of board games as examples makes clear) for nearly anything governed by rules.
Like almost all working scientists, Holland assumes that a valid explanation of (any one of) these puzzles is a reductionist one, one that explains the behavior or properties of the larger entity from those of its components and their interactions. Or, as Ernest Gellner put it in Legitimation of Belief, reductionism is "the view that everything in this world is really something else, and that the something else is always in the end unedifying. So lucidly formulated, one can see that this is a luminously true and certain idea." Turned around, reductionism becomes the optimistic belief that, out of unpromising and unedifying materials (say, colloidal carbon slime), edifying things can be assembled, or even will assemble themselves (say, cherry blossoms unfolding in April snows). Holland calls this the "creative side of reduction," and if talk of complementarity were still in fashion, he might easily have said that reduction and emergence are complementary. More modestly, if there weren't emergents, there would be little point and less opportunity for successful reductionism!
This is not, usually, what people have in mind when they attack reductionism; they are thinking, rather, of one of two impostors. The first, which seems to be the bogey of humanists and social scientists, it might be better to call unicausalism. It is the mistake of "reducing" a large and variegated class of phenomena to the effects of a single type of cause, more precisely, of claiming that they are all functions of a single causal variable, as, for instance, saying that people's IQ scores set the courses of their lives, or that a writer's literary productions are a function of her relationship to the means of production, or the way she was weaned, or (if I've got it right) the structure of her episteme, as concealed in her language. (Bad episteme! No bath-house!) Of course, sometimes such drastically simple relations really do obtain: no spirochetes, no syphilis, with all its ramified and diverse and idiosyncratic consequences.
The other caricature of reductionism, which is what (real) scientists tend to accuse each other of, is studying the properties and behavior of components to the neglect of understanding how they fit together, how they interact. At any given time, a trawl through the world's laboratories and seminar rooms will net more scientists fitting one of these caricatures than one would wish. (For instance, those molecular biologists who think that, once they find and sequence all the homeobox genes, the problem of morphogenesis will be solved.) But these lower and distorted forms of reductionism are simply bad science, and do nothing to sully the dignity, or impair the utility, of reductionism which keeps its wits about it. (Besides, the caricatures have half-lives on the order of those of dietary recommendations.)
So much for the praises of a moderate reductionism, alive to the importance of interactions. How do we actually set about reducing phenomena and explaining emergence? By constructing a model. What is a model? We can use one thing (say, a globe) as a model of another (say, the surface of the Earth) if we can find a way of translating, or, as the mathematicians say, mapping, from one to the other which doesn't mess up the relations we're interested in. Then anything we learn about the model can be translated into a discovery about the modeled. (Holland includes things like the Game of Life among models, even though they do not fit this definition. Perhaps, like his board-games, they are to be regarded as models of imaginary worlds.) Models are only good if they're easier to handle and learn about than what they model, and if they really do accurately map the relations we're interested in. How does one find such a model? Here Holland leaves us hanging, from the end of chapter two (on models) to the last chapters of the book. Though he has co-authored a whole (good) book on induction, his answer to the "How?" question is "Nobody knows." He offers some sage advice --- become intimately familiar with the problem (no "I could look that up"), learn the related problems and the tricks and the oral tradition which go with them, be on the lookout for analogies and exploit them --- but ultimately it comes down to trial and error, and so to luck. This is not much of an advance over Popper, or Poincaré, or Mach, or for that matter Bacon, but then, it is precisely on this rock (if not before) that all treatises on method run aground.
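The mapping idea can be made concrete with a toy example of my own (nothing like it appears in the book): take the hours on a clock face as the thing modeled, and integers mod 12 as the model. The translation preserves the one relation we care about, "advance by n hours," so answers computed cheaply in the model translate back correctly to the clock.

```python
# Toy illustration of a model as a relation-preserving mapping.
# The "world" is a clock face; the "model" is arithmetic mod 12.

HOURS = ["12", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11"]

def to_model(hour_label):
    """Translate from the modeled world (clock labels) into the model."""
    return HOURS.index(hour_label)

def from_model(n):
    """Translate a model answer back into the world."""
    return HOURS[n % 12]

def advance_in_world(hour_label, n):
    """What actually happens on the clock face when n hours pass."""
    return HOURS[(HOURS.index(hour_label) + n) % 12]

def advance_in_model(x, n):
    """The model's easy arithmetic for the same relation."""
    return (x + n) % 12

# The two routes agree: compute in the model and translate back,
# and you get the same answer as working on the clock face directly.
assert from_model(advance_in_model(to_model("9"), 5)) == advance_in_world("9", 5)
```

The assertion at the end is the whole point: the mapping "doesn't mess up" the advance-by-n relation, so facts learned about mod-12 arithmetic are facts about clocks.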
Holland does better when it comes to describing the kind of model needed: a "constrained generating procedure," or CGP. The basic element of a CGP is something which has an internal state and a set of inputs, and whose next state is a function of the current state and those inputs; call it a basic mechanism. We suppose there are only a finite number of different kinds of basic mechanisms; then each of them is a CGP, and so is anything we get by making one of the inputs of a CGP be the state (or some function of the state) of a basic mechanism --- to be a bit less exact and recursive, and a bit plainer: anything you get by wiring up basic mechanisms is also a CGP. The result is a generating procedure because it effectively implements a rule for making different sequences of states, different (so to speak) moves; these are constrained by the connections between parts, so that each part isn't free to do just what it likes. (Computer scientists find it useful to gloss over the difference between a procedure and a mechanism which implements it, and this seems to have become second nature to Holland.) Most models used to study learning and self-organization can be cast into this form with a little trouble, and he shows this in detail for some basic neural net models, cellular automata, and the very early (1957) checkers-playing program devised by his friend Arthur Samuel. That program threw together some very simple-minded mechanisms whose combined effect was that it came to recognize useful patterns on the board, and useful strategies; this was effective enough that it learned to beat Samuel. In other words, building-blocks emerged from the rules of the program, and tracked things which emerged from the rules of checkers. (There is no discussion of how to fit the basic mechanisms and their connections to data from the Realized World.)
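A minimal sketch of the scheme, as I read Holland's description (the names `Mechanism` and `wire` are my own, not his notation): a basic mechanism is a state plus a transition function of (state, inputs), and wiring one mechanism's state into another's inputs builds a larger CGP.

```python
class Mechanism:
    """A basic mechanism: an internal state plus a transition function
    mapping (current state, inputs) to the next state."""

    def __init__(self, state, transition):
        self.state = state
        self.transition = transition  # f(state, inputs) -> next state
        self.inputs = []              # callables supplying input values

    def step(self):
        # Read the current input values, then update the internal state.
        values = tuple(feed() for feed in self.inputs)
        self.state = self.transition(self.state, values)

def wire(source, target):
    """Constrain `target` by making `source`'s state one of its inputs.
    Anything built by wiring mechanisms together is again a CGP."""
    target.inputs.append(lambda: source.state)

# Example: a one-bit toggler constrains a counter, which only advances
# on steps when the toggler's state happens to be 1.
toggler = Mechanism(0, lambda s, ins: 1 - s)
counter = Mechanism(0, lambda s, ins: s + ins[0])
wire(toggler, counter)

for _ in range(4):
    counter.step()   # counter reads the toggler's current state
    toggler.step()
# counter advanced on two of the four steps: counter.state == 2
```

The counter is "constrained" in exactly Holland's sense: its sequence of states is not free, but generated jointly with the toggler's through the connection between them.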
He also introduces the class of "constrained generating procedures with variable geometry" (really variable topology), where the connections between mechanisms can change depending on the states of those mechanisms --- CGPs which can control their own wiring. This adds no new computational power, but it does make it much easier to see what is happening in many cases.
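The variable-geometry idea can be sketched just as briefly (again, an illustration of my own devising, not Holland's formalism): here the wiring itself is recomputed each step from a mechanism's state, so state controls connections, not just values.

```python
class Node:
    """A bare mechanism: just a mutable state."""
    def __init__(self, state):
        self.state = state

# Two sources and a selector; which source feeds the accumulator
# is decided, at each step, by the selector's own current state.
src_a, src_b = Node(1), Node(10)
selector = Node(0)        # 0 -> read from src_a, 1 -> read from src_b
accumulator = Node(0)

def step():
    # The connection is recomputed from the selector's state: this is
    # the "variable geometry" -- a CGP controlling its own wiring.
    source = src_a if selector.state == 0 else src_b
    accumulator.state += source.state
    selector.state = 1 - selector.state   # the selector also evolves

for _ in range(4):
    step()
# the accumulator saw 1, 10, 1, 10 in turn: accumulator.state == 22
```

Nothing here exceeds what a fixed-wiring CGP could compute (one could always route both sources in and gate them), which is Holland's point: no new power, but the behavior is much easier to see.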
One would now like a nice, crisp definition of what an emergent property, object or behavior is, phrased in terms of constrained generating procedures. None is forthcoming, and Holland is frank about this. It's easy to see that any CGP can be made into a "basic" mechanism (in fact, Holland proves this in detail). What he'd like to say is that such a composite mechanism is (or describes) an emergent object, and that moving up a level of description involves going from a CGP built from the truly basic mechanisms to a new, equivalent CGP built up from the composite mechanisms. The problem is that any combination of basic mechanisms can be a composite mechanism, with, perhaps, truly ugly and horrible behavior, while emergent properties and objects are supposed to be simpler to describe and handle than their components. What is lacking, and what Holland is (as yet!) unable to provide, is a clear way of picking one collection of composite mechanisms from among the Vast number of sets of such mechanisms. Still, this does move the problem a bit further away from philosophy and a bit closer to the arts of the soluble, and this is a Good Thing.
At this point the book reverts to the consideration of creativity and metaphor begun in the first chapters, and already discussed.
Emergence is not so broad or ambitious as its title and publicity may suggest. Nothing but highly stylized formal models are considered, and we aren't even told just what it means for something to emerge in such models, never mind the Realized World. (The total mathematical content could probably fit into fifteen pages.) Rather, Holland makes explicit and codifies much of the folk wisdom, oral tradition and spontaneous common hunches of those of us working on such models. It's good for us to actually see what we think, instead of just thinking it, not least because it helps us doubt it, and there is probably no other book which gives outsiders, especially non-scientists, such a good feel for our knowledge, our methods and our ignorance.