April 15, 2005

"Fixt Fate, free will, foreknowledge absolute"

This was mostly written just before I went on hiatus.

Wolfgang has got me thinking about Newcomb's Paradox (here, under "2004-12-23"). It goes something like this. A Superior Being (perhaps the Medium Lobster?) appears before you, and gives convincing signs and tokens of its effective prescience. Then, being capricious, it offers you the following dilemma. It places before you two boxes. Box A is transparent, so you can see it contains \$1,000. Box B is opaque, and may or may not contain (in your best Dr. Evil voice) \$1,000,000. You can take either box B alone, or both boxes. If the Being predicts that you will take only box B, then it has put the million in box B; if it predicts that you will take both boxes, then box B is empty. Which do you choose? I emphasize that the Superior Being has convinced you it is able to predict your behavior, and that attempts to fool it are unavailing. We can also stipulate that you're not allowed to randomize: it will detect you doing so, and smite you appropriately.
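For concreteness, here is the payoff structure in a few lines of Python --- a minimal sketch, with names ("one", "two", payoff) of my own invention; the puzzle itself specifies only the dollar amounts.

```python
# Payoffs in Newcomb's problem. "one" = take box B only; "two" = take both.
def payoff(choice, prediction):
    """Dollars you walk away with, given your choice and the Being's prediction."""
    box_b = 1_000_000 if prediction == "one" else 0  # B is full iff one-boxing was predicted
    box_a = 1_000 if choice == "two" else 0          # A pays whenever you take it
    return box_a + box_b

for choice in ("one", "two"):
    for prediction in ("one", "two"):
        print(f"choice={choice}, prediction={prediction}: ${payoff(choice, prediction):,}")
```

Notice the dominance argument in miniature: for either fixed prediction, taking both boxes is worth exactly \$1,000 more.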

There does not seem to be a stable resolution, within the stated terms. If you choose both boxes, then (since the Superior Being can follow your train of reasoning) box B will be empty, and you'd do better to pick only box B; but if you do choose just box B, you should really choose both boxes: if there's any chance the Superior Being predicts wrong, you'll be better off, on average, by doing so (a sketch below the quotation puts numbers on this). Better people than I have gone over this a zillion different ways, exploring all the decision-theoretic wrinkles, and still wound up like the demons in Paradise Lost:

Others apart sat on a Hill retir'd,
In thoughts more elevate, and reason'd high
Of Providence, Foreknowledge, Will, and Fate,
Fixt Fate, free will, foreknowledge absolute,
And found no end, in wand'ring mazes lost.
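
One standard way to put numbers on that "predicts wrong" wrinkle --- my framing, not anything in the puzzle --- is to let the Being be right with probability p and compute expected payoffs under that correlation:

```python
# Expected payoffs when the Being predicts your choice correctly with
# probability p. (p is my addition; the stated puzzle makes it effectively 1.)
def ev_one_box(p):
    return p * 1_000_000                # box B is full iff the prediction was right

def ev_two_box(p):
    return 1_000 + (1 - p) * 1_000_000  # box A always pays; B pays iff the Being erred

# One-boxing wins exactly when p * 1e6 > 1_000 + (1 - p) * 1e6,
# i.e. when p > 1_001_000 / 2_000_000 = 0.5005.
for p in (0.5, 0.5005, 0.9, 1.0):
    print(f"p={p}: one box ${ev_one_box(p):,.0f}, two boxes ${ev_two_box(p):,.0f}")
```

On this accounting, one-boxing wins for any p above 0.5005, while the dominance argument recommends two-boxing for every p; the two lines of reasoning never meet, which is the maze.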

Wolfgang suggests, in e-mail, that the best way out may simply be to reject one of the premises of the paradox --- that the kind of prediction the Superior Being purports to make is simply not possible. But this is also very strange. Human beings are, after all, finite material bodies, and it strains belief --- at least a physicist's belief --- to think that a limitation on the in-principle ability to predict the motion of a finite material body could be discovered by an exercise of pure reason, without any experimental data. Worse, there doesn't seem to be any reason, in principle, why a Laplacean Vast and Considerable Intellect couldn't integrate the equations of motion for your body to predict what it would do. (Pace Penrose, and even Mitch Porter, quantum processes are unimportant in the brain, so this just demands utterly implausible computational resources and measurement resolution: a piece of cake for a Vast and Considerable Intellect.) For that matter, your friends (and still more your spouse) could probably make a pretty shrewd guess about what you'd do, even if you can't.

My thought, at this point, is that the paradox shows us a limitation, alright, but not on the predictability of material bodies. Rather, I suspect, the limitation is in our ideas about rational decision-making.

The laws of physics are what they are, and I see no reason to suppose that human beings are necessarily harder to predict than, say, the atmospheres of gas giant planets; perhaps they are easier. Psychological terms and concepts, in both their folk and scientific versions, provide a coarse-grained description of human (and animal) behavior with considerable predictive power, at least on its own level, and great concision. (Rather than belabor the point, I will refer you to my post about David Wallace's version of this story.) Psychology is a kind of approximation to the reality of human organisms, just as hydrodynamics and climatology are approximations to the reality of the Jovian atmosphere. (That we use those approximations to help mark off the objects under discussion is beside the point.) At the psychological level, we can say things like "Cosma is over-cautious and greedy, but his timidity always beats his avarice; he'll take both boxes".

Psychology is, in turn, approximated by our ideas --- or better, ideals --- of rational choice. These constitute an abstract system, which can be formalized in various ways, e.g., as in von Neumann and Morgenstern. I think what Newcomb's paradox tells us is that the situation imagined is one where the abstract system of decision theory breaks down and gives indeterminate results, perhaps because your being a rational agent, in the necessary sense, is inconsistent with your being predictable. Since we're dealing with an abstract system, it's not surprising that a mere thought experiment can show us a place where it breaks down.

Or maybe you should just take both boxes; what do I know?

Update, 18 April 2005: Thanks to Wolfgang for pointing out that I initially mislabeled the boxes!
