Notebooks

Notes on a Talk Entitled ``Is the Brain Critical?'' by Per Bak, 18 March 1997 at the Santa Fe Institute

24 Mar 1997 12:37

The answer, Bak dixit, is yes. He means that it's critical, not in the sense of being vitally important, but in the mathematical or physical sense of being just on the boundary between two different sorts of behavior.

(The simplest sort of process which can be critical, and one Bak actually dragged out at the beginning of his talk, is what's called a branching process. Think of some colony of asexually reproducing organisms. Bugs die off at a fixed rate, and they split into two or more bugs at another rate, and they don't interact with each other at all. Question: will the colony go on forever? If the ratio of the birth to death rates is below a certain level (just what depends on the details of the reproduction), then no matter what the original colony size, with probability 1 it will eventually die out. If the birth rate is sufficiently larger than the death rate, any colony, no matter what its original size (other than zero!), has a positive probability of growing without bound. These are the subcritical and supercritical cases, respectively. The critical case is when births just balance deaths.)
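
For concreteness, here is a quick simulation of such a process --- my own sketch, not anything Bak showed. Each bug either splits in two, with probability p, or dies, so the mean number of offspring per bug is 2p and the critical point is at p = 1/2; the numbers (population cap, trial counts) are arbitrary choices of mine.

    import numpy as np

    rng = np.random.default_rng(0)

    def branching_trial(n0, p_split, max_steps=1000, cap=100_000):
        """One run of a Galton-Watson branching process: each bug
        independently splits in two (prob. p_split) or dies.  Mean
        offspring per bug is 2*p_split, so p_split = 0.5 is critical.
        Returns True if the colony is still alive at the end."""
        n = n0
        for _ in range(max_steps):
            if n == 0:
                return False                     # extinction
            if n >= cap:
                return True                      # runaway growth; call it survival
            n = 2 * rng.binomial(n, p_split)     # each splitter leaves two bugs
        return n > 0

    def survival_fraction(p_split, trials=500, n0=10):
        return sum(branching_trial(n0, p_split) for _ in range(trials)) / trials

    for p in (0.4, 0.5, 0.6):    # sub-critical, critical, super-critical
        print(p, survival_fraction(p))

Run it, and the sub-critical colonies essentially always die out, the super-critical ones survive a substantial fraction of the time, and the critical case sits in between, dying out slowly.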

Bak explained that the brain ought to be critical on the analogy of a branching process. If the brain were sub-critical, input would eventually die out; if it were super-critical, any input would eventually activate lots and lots of outputs; therefore, it must be critical, which is just right. (It was never made terribly clear whether Bak --- and, I should add, his collaborator, whose name unfortunately I did not catch --- were concerned with all animal brains whatsoever, or just vertebrates, or just mammals, or just humans, or if they would extend these considerations beyond animals with brains to those with any nervous system at all. I have trouble swallowing the notion that Hydra or the sea-slug have been carefully tuned for criticality.) This is also somehow supposed to maximize adaptive flexibility.

Bak then went on to assume --- apparently not as a modelling convenience or worst-case --- that the brain is initially wired up in a totally random manner, and that all its functional organization must be self-organization. Why this is not also true of the liver, or bones, he did not say, nor did he indicate how the differences between the brains of human children, and apes raised in human families, are to be explained. This also ties in, supposedly, with making brains as flexible and adaptive as possible; and at this point he poured some scorn on artificial neural networks like the Hopfield network, which eventually lock into fixed patterns that are very hard to get out of. (R. Palmer pointed out that he and Hopfield had modified the network in '81 to avoid this: which didn't seem to discombobulate Bak in the least.)

Bak's model is very far from having this problem (if, indeed, it is a problem, and not a veritable feature in a memory). It consists of threshold neurons, which fire if the weighted sum of their inputs, plus a noise term, is over their threshold. (All weights are positive. I asked him about this, and he said adding inhibitory connections was one of the planned next steps, and that he suspected they were important for getting oscillations; I'd give him a copy of Sherrington's Integrative Action of the Nervous System (1906), if I had one.) The neurons are arranged in layers, and each gets input from the one immediately behind it and the two to either side in the preceding layer. (His experiments so far are all on 256x256 arrays.) When a trial is successful, the connections between neurons which fired during that trial are strengthened, and all thresholds are raised; when a trial is unsuccessful, connections are weakened and thresholds are lowered. The idea behind raising the threshold, Bak says, is that when one learns to do something, it takes less and less effort --- so fewer and fewer neurons should fire! (Evidently, he is not a peripatetic.) Thresholds get raised uniformly because, according to Bak, any sort of selectivity would be ``cheating'', would not be self-organization; and here he said some well-merited things against back-propagation networks, since they need some large and elaborate algorithm to figure out what's wrong and tune connections accordingly. (Why selectively strengthening only the connections between neurons which fired isn't also cheating wasn't addressed.)
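
To fix ideas, here is a toy re-creation of the model as just described. It is my own reconstruction: the particular numbers (width, depth, learning rate eta, noise scale, initial weight range) and the wrap-around edges are guesses of mine, not anything Bak specified.

    import numpy as np

    rng = np.random.default_rng(1)

    class BakNet:
        """Layered threshold neurons with purely excitatory weights: each
        neuron feeds the one directly ahead of it and that neuron's two
        lateral neighbours in the next layer.  All parameter values are
        guesses, not Bak's."""

        def __init__(self, width=32, depth=8, eta=0.05, noise=0.01):
            self.width, self.depth, self.eta, self.noise = width, depth, eta, noise
            # w[l][i, k]: weight from neuron i in layer l to its k-th target
            # (lateral offsets -1, 0, +1) in layer l+1; kept non-negative.
            self.w = [rng.uniform(0.0, 1.0, (width, 3)) for _ in range(depth - 1)]
            # Thresholds only ever move uniformly, so one value per layer
            # loses nothing.
            self.theta = np.ones(depth)

        def forward(self, input_on):
            """Propagate a boolean input pattern; return which neurons fired."""
            fired = [np.asarray(input_on, dtype=bool)]
            for l in range(self.depth - 1):
                drive = np.zeros(self.width)
                for k, off in enumerate((-1, 0, 1)):   # fan out to 3 targets
                    drive += np.roll(fired[l] * self.w[l][:, k], off)
                drive += rng.normal(0.0, self.noise, self.width)  # noise term
                fired.append(drive > self.theta[l + 1])
            return fired

        def learn(self, fired, success):
            """Success: strengthen the connections that fired and raise ALL
            thresholds; failure: weaken them and lower ALL thresholds.
            (That failure weakens only the fired connections is my reading.)"""
            sign = 1.0 if success else -1.0
            for l in range(self.depth - 1):
                for k, off in enumerate((-1, 0, 1)):
                    post = np.roll(fired[l + 1], -off)  # targets, source-aligned
                    used = fired[l] & post              # both ends fired
                    self.w[l][used, k] = np.clip(
                        self.w[l][used, k] + sign * self.eta, 0.0, None)
            self.theta += sign * self.eta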

Now the task he tried out his network on was this. At each trial, the net is given one of two and only two stimuli. There are two and only two responses which are ever appropriate, and it is supposed to learn to associate each stimulus with a given response. During each series of trials, a random sprinkling of neurons is deemed to be sensitive to one stimulus, and is automatically turned on when that stimulus is present, and another, possibly partially overlapping, random set notices the other stimulus. One of the appropriate responses is deemed given whenever two randomly selected neurons in the final layer are activated, and the other when another pair of final neurons goes on. The interpretation being given to this is that a rat is being alternately shown a red and a green light, and if it presses a lever on its right when seeing green, or on its left when seeing red, it is given food, which counts as a success and is fed back into the network accordingly. ``I was told our paper was completely flawed because rats are color-blind. [Pause.] Our rat is not.'' (Since all parts of its brain respond in the same way to being fed, however, it apparently has no taste-buds.) I propose to call it the Bak-rat.
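
A training loop for this task, using the toy network sketched above, might look like the following; the exact success criterion (must the wrong pair of output neurons also stay quiet?) is my reconstruction of what was said, not Bak's specification.

    # Continues from the BakNet sketch above (reuses `net`-style setup
    # and the same `rng`).  The 10% sensitive fraction is my guess.
    net = BakNet()
    stimuli = [rng.random(net.width) < 0.1 for _ in range(2)]  # red / green cells
    levers = [rng.choice(net.width, size=2, replace=False) for _ in range(2)]

    for trial in range(5000):
        s = trial % 2                       # alternate the red and green lights
        fired = net.forward(stimuli[s])
        out = fired[-1]
        pressed = [bool(out[levers[r]].all()) for r in range(2)]
        success = pressed[s] and not pressed[1 - s]  # right lever, and only it
        net.learn(fired, success)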

It will come as no surprise to learn that the Bak-rat did, indeed, eventually learn to associate stimuli with responses in the desired manner --- not perfectly, but with probability very close to one, even when the inputs alternated. (In fact, I think that if you keep up a steady stream of just one input, it will forget how to deal with the other.) It will also come as no surprise to those familiar with Bak's work that there are various power laws to be found in things like the size of the net vs. its learning time. (Personally, I'd find him a bit more credible if he ever discovered relationships which were not power laws.) Triumphantly, he declared that this model shows that random systems can self-organize to learn; and the Bak-rat does better at learning this task, and similar ones, than Hopfield nets and the like.

My comments:

  1. Random nets have been known to be able to learn simple behaviors ever since Ashby, and his homeostat was given even less to go on than the Bak-rat. (It got only negative feedback, which consisted of throwing out all the connections and starting over again from random: Ashby was trying for a worst-case.)
  2. Like the homeostat, the Bak-rat can only learn to do one thing. (It is unlike the hedgehog in that it doesn't even do that perfectly.) When it fails, again like the homeostat, it tends to flail about at random. This does not seem like an accurate description of animal behavior. Suppose you're riding your bicycle down the sidewalk, pedaling more or less on auto-pilot. (This is one of Bak's examples of low brain activity during well-learned tasks.) If something goes wrong (bits of glass bottles on the sidewalk are very common here in Santa Fe), according to Bak you don't switch to a new motor program (if you don't like that phrase, try Luria's ``kinetic melody'') like swerving; instead you not only start flailing about at random, but actively forget part of what you know about ordinary pedaling.
  3. It seems unlikely that the Bak-rat can learn anything more complicated than simple association, e.g., sequences of actions which must be done too quickly for sensory feedback. (If it can keep enough associations in memory, it could of course learn sequences of actions which are slow enough that one action can be executed and its effects fed back through the sensory apparatus to the brain, but many very interesting behaviors, like throwing, are not of this type.) As a number of people pointed out at the talk, there's nothing like graded responses, or hierarchies and building on previous learning, in this model; Bak says he'd like to work on it. (Cf. what was said about inhibitory connections above.)
  4. No consideration was given to the speed with which humans are capable of learning very complex skills in highly restricted domains, of which the best instance is of course language. Nor was there recognition of the long-familiar facts that we get little or no reinforcement for learning such things, and that our examples are often inaccurate. (I doubt even the sternest paterfamilias refused to give his children bread when they mis-declined panis.) And while things like the so-called grammar gene (see Pinker, The Language Instinct) are puzzling if one thinks that lots of the brain is genetically set up to learn certain things, they're utter mysteries if you suppose it's a self-organizing tabula rasa. --- What is true of human language is true, mutatis mutandis, of other instincts as well.
  5. I'd very much like to know whether the Bak-rat can learn to do things with only delayed reinforcement, or in perceptually noisy environments. (Currently, it gets two-and-only-two stimuli, both of which are relevant.) My suspicion is that, absent something prohibited by Bak's (somewhat intermittent) commitment to self-organization, like Holland's bucket-brigade principle, it could not.
