Notebooks

Dynamics in Cognitive Science

Last update: 31 Jan 2026 09:32
First version: Before 17 December 2004; fixed link rot and added a few paragraphs of auto-archaeology, 30 January 2026

Since the early 1990s, some people have gotten very excited about the idea that dynamical systems theory can be used to model cognitive processes. As somebody trained in nonlinear dynamics, I applaud this development, since, if successful, it will enhance my material and academic prospects. Sadly, when they do things like purport to explain decision-making with a low-dimensional (8-dimensional) model with no noise, I grow deeply suspicious. Worse, many of these same people believe that dynamics gives them an account of cognition which is incompatible with traditional models, whether of the (Newell-Simon) symbol-processing or connectionist sort, and in fact one which is fundamentally non-computational. As somebody trained in the symbolic aspects of nonlinear dynamics, and who uses that math to study the intrinsic computation carried out by dynamical systems, I have to wonder what they're talking about.

To do: Find something interesting to say about this by December, when abstracts are due for the Potsdam workshop on Dynamical Systems Approaches to Language and Symbol Grounding. Update, December 2005: Well, I don't know if what I found to say was interesting, but you can read the abstract here.

Post-script, 30 January 2026

Let me try to rephrase where I ended up when I stopped paying much attention to this, around 2005. This involves a certain amount of guesswork based on my Potsdam slides. It is also, transparently, a restatement of the ideas of my mentors, plus a little bit of "What Is a Macrostate?".

Big physical systems admit multiple levels of description, with different amounts of coarse-graining. The variables at coarser levels are usually "collective coordinates", functions of many lower-level variables. (No water molecule needs to go all the way from Africa to Brazil for a wave to cross the Atlantic; a body's center of mass can have no mass there.) Some coarse-grainings do, objectively, allow for more efficient use of limited information, which is, or should be, what we mean by "emergence". So far, this is just saying that different (continuous) dynamical-systems descriptions can all be true of the same thing, with some of the descriptions forming "real patterns" (as Dennett would have said). Now it is also possible to coarse-grain continuous-state dynamical systems to discrete states, which is what we do in symbolic dynamics. (And some discretizations are better than others, information-theoretically.) Those symbolic dynamical systems, which are implicit in, or even emerge from, the continuous dynamics, follow the sorts of rules which we study in classical automata theory. Discrete, symbol-manipulating computation is an emergent property of continuous dynamics. Or at least: it can be.
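A minimal sketch of what "coarse-graining a continuous system to discrete symbols" means, using a textbook example that is not from the paragraph above: the logistic map at r = 4, discretized with the binary partition at x = 1/2. For this particular map that partition is generating, and the induced symbolic dynamics is (for typical initial conditions) the full shift on two symbols, so every finite word of symbols shows up in a long enough trajectory. The function name and parameters here are my own illustrative choices.

```python
# Coarse-grain a continuous dynamical system (the logistic map at r = 4)
# into a symbolic one, using the binary partition at x = 1/2.
from itertools import product

def logistic_symbols(x0, n, r=4.0):
    """Iterate x -> r*x*(1-x) and record symbol 0 if x < 1/2, else 1."""
    x, syms = x0, []
    for _ in range(n):
        syms.append(0 if x < 0.5 else 1)
        x = r * x * (1.0 - x)
    return syms

syms = logistic_symbols(0.1234, 5000)
# Collect every length-3 word actually seen in the symbol stream.
seen = {tuple(syms[i:i + 3]) for i in range(len(syms) - 2)}
# Full shift on {0,1}: all 2**3 = 8 possible 3-words should appear.
print(sorted(seen) == sorted(product((0, 1), repeat=3)))
```

The point of the check at the end is that the discrete, rule-governed object (here, the full shift, the simplest automaton-style description) is implicit in the continuous dynamics; nothing symbolic was put in by hand beyond the choice of partition.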

Whether the specific kinds of symbols posited by classical cognitive science really do emerge (in this sense) from a biophysical description of the nervous system would be a very challenging question to answer empirically. Maybe lots of parts of our cognitive life don't have any very interesting/useful symbolic dynamics. Maybe the places where there are emergent symbolic dynamics don't look anything like the psychologists imagined. But the idea that there is some sort of categorical opposition between dynamics and computation seemed to me then (and still today) deeply misguided.

