This is probably the least cat-related paper ever featured in this space, but the nice thing about calling it "Science Geek Edition" is that my barrel comes pre-scraped. That said, Schrödinger's cat is over-worked in discussions of quantum mechanics, and I won't drag the poor creature out tonight (not even Cris Moore's humane version, where the choices aren't life and death, but catnip and tuna). Instead, I will --- very circumspectly --- lead before you Wallace's tiger.

- David Wallace, "Everett and Structure", Studies in History and Philosophy of Modern Physics **34** (2003): 87--105 = quant-ph/0107144

  I address the problem of indefiniteness in quantum mechanics: the problem that the theory, without changes to its formalism, seems to predict that macroscopic quantities have no definite values. The Everett interpretation is often criticised along these lines and I shall argue that much of this criticism rests on a false dichotomy: that the macroworld must either be written directly into the formalism or be regarded as somehow illusory. By means of analogy with other areas of physics, I develop the view that the macroworld is instead to be understood in terms of certain structures and patterns which emerge from quantum theory (given appropriate dynamics, in particular decoherence). I extend this view to the observer, and in doing so make contact with functionalist theories of mind.

I think Wallace is completely right, and the dichotomy in question is utterly false. The whole science of statistical mechanics, for instance, testifies to the fact that structures and patterns which are not directly part of the microscopic dynamics can nonetheless be reliably generated by those dynamics. (Cris and I have a paper about that, actually, which I need to finish revising per the referees.) But let me allow Wallace to speak for himself about tigers.

To see why it is reasonable to reject [the dichotomy], consider that in science there are many examples of concepts which are certainly real, but which are not directly represented in the axioms. A dramatic example of such a concept is the tiger: tigers are unquestionably real in any reasonable sense of the word, but they are certainly not part of the basic ontology of any physical theory. A tiger, instead, is to be understood as a pattern or structure in the physical state.

To see how this works in practice, consider how we could go about studying, say, tiger hunting patterns. In principle --- but only in principle --- the most reliable way to make predictions about these would be in terms of atoms and electrons, applying molecular dynamics directly to the swirl of molecules which make up tigers and their environment. In practice, however, this is clearly insane: no remotely imaginable computer would be able to solve the 10^{35} or so simultaneous dynamical equations which would be needed to predict what the tigers would do, and even if such a computer could exist its calculations could not remotely be said to *explain* their behaviour.

A more effective strategy can be found by studying the structures observable at the multi-trillion-molecule level of description of this "swirl of molecules". At this level, we will observe robust --- though not 100% reliable --- regularities, which will give us an alternative description of the tiger in a language of cells and molecules. The principles by which these cells and molecules interact will be derivable from the underlying microphysics, and will involve various assumptions and approximations; hence very occasionally they will be found to fail. Nonetheless, this slight riskiness in our description is overwhelmingly worthwhile given the enormous gain in usefulness of this new description: the language of cell biology is both explanatorily far more powerful, and practically far more useful, than the language of physics for describing tiger behaviour.

Nonetheless it is still ludicrously hard work to study tigers in this way. To reach a really practical level of description, we again look for patterns and regularities, this time in the behaviour of the cells that make up individual tigers (and other living creatures which interact with them). In doing so we will reach yet another language, that of zoology and evolutionary adaptationism, which describes the system in terms of tigers, deer, grass, camouflage and so on. This language is, of course, the norm in studying tiger hunting patterns, and another (in practice very modest) increase in the riskiness of our description is happily accepted in exchange for another phenomenal rise in explanatory power and practical utility.

Of course, talk of zoology is grounded in cell biology, and cell biology in molecular physics, but we cannot discard the tools and terms of zoology to work directly with physics, without (a) losing explanatory power, and (b) taking forever.

What moral should we draw from this mildly fanciful example? That higher-level ontology is to be understood in terms of pattern or structure: in a slogan,

A tiger is any pattern which behaves as a tiger....

Why is it reasonable to claim, in examples like these, that higher-level descriptions are explanatorily more powerful than lower-level ones? In other words, granted that a prediction from microphysics is in practice impossible, if we had such a prediction why wouldn't it count as a good explanation? To some extent I'm inclined to say that this is just obvious --- anyone who really believes that a description of the trajectories followed by the molecular constituents of a tiger explains why that tiger eats a deer means something very different by "explanation". [I don't agree with that last sentence; CRS] But possibly a more satisfying reason is that the higher-level theory to some extent "floats free" of the lower-level one, in the sense that it doesn't care how its patterns are instantiated provided that they are instantiated. (Hence a zoological account of tigers requires us to assume that they are carnivorous, have certain strengths and weaknesses, and so on, but doesn't care what their internal makeup is.) So an explanation in terms of the lower-level theory contains an enormous amount of extraneous noise which is irrelevant to a description in terms of higher-level patterns....
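Wallace's slogan translates almost word-for-word into a programmer's notion of an interface: a type is defined by what it does, not by its internal makeup. A toy sketch (the classes and method names here are entirely my own illustration, not anything from the paper):

```python
from typing import Protocol

class Tiger(Protocol):
    """Anything that behaves as a tiger counts as a tiger."""
    def hunt(self, prey: str) -> str: ...

class FleshAndBloodTiger:
    # One instantiation: a swirl of molecules.
    def hunt(self, prey: str) -> str:
        return f"stalks and eats the {prey}"

class SimulatedTiger:
    # Entirely different internal makeup; same observable pattern.
    def hunt(self, prey: str) -> str:
        return f"stalks and eats the {prey}"

def zoology_account(tiger: Tiger, prey: str) -> str:
    # The higher-level theory "floats free" of the instantiation:
    # it only cares that the tiger-pattern is realized, not how.
    return tiger.hunt(prey)

print(zoology_account(FleshAndBloodTiger(), "deer"))
print(zoology_account(SimulatedTiger(), "deer"))
```

The zoological level of description, like the `Tiger` protocol, is multiply realizable: swap the implementation and every higher-level statement goes through unchanged.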

(That last point, about extraneous noise, can be made precise by means of information theory, and doing so leads to a quantitative definition of "emergence"; if this interests you, see my paper with Cris, or the last chapter of my thesis.)
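To give a rough flavor of how the information-theoretic move goes --- this toy calculation is my own illustration, not the actual definition from that paper --- consider a micro-process whose state is a perfectly predictable "macro" bit plus fresh coin-flip noise at every step. The macro bit carries all of the predictive information at half the descriptive cost:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
T = 100_000

# Micro state = (s, n): s is a deterministic flip-flop (the "macro" pattern),
# n is fresh, irrelevant noise regenerated at every step.
s = np.arange(T) % 2
n = rng.integers(0, 2, size=T)
micro = list(zip(s.tolist(), n.tolist()))
macro = s.tolist()

def entropy(xs):
    """Plug-in entropy estimate in bits."""
    c, total = Counter(xs), len(xs)
    return -sum((v / total) * np.log2(v / total) for v in c.values())

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from paired samples."""
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    total = len(xs)
    return sum((c / total) * np.log2(c * total / (px[x] * py[y]))
               for (x, y), c in pxy.items())

# How much does the present state tell us about the next one?
mi_micro = mutual_information(micro[:-1], micro[1:])   # ~1 bit
mi_macro = mutual_information(macro[:-1], macro[1:])   # ~1 bit
print(entropy(micro), entropy(macro))                  # ~2 bits vs ~1 bit
```

The full micro state costs about two bits to describe but predicts no better than the one-bit macro variable: everything beyond the macro pattern is extraneous noise, which is the intuition a quantitative definition of emergence makes precise.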

In the microscopic description, we have a very large number of degrees of
freedom, which interact with each other and form a (pretty much) self-contained
dynamical system. These degrees of freedom live in some mathematical state
space, and the dynamics give the laws of motion through that state space. If
we perform a change of coordinates in state space, it won't, generally speaking,
be the case that each of the new coordinates can be unambiguously associated
with a single microscopic object; most of the new coordinates will
be *collective* degrees of freedom, involving some or all of the
constituent objects. If we are lucky, we might find that a (comparatively)
small number of such collective degrees of freedom themselves form a mostly
self-contained dynamical system; the other coordinates, the ones we neglect,
appear as noise. (For a good discussion of techniques for doing this in
classical systems, see Dror Givon, Raz Kupferman and Andrew Stuart, "Extracting
macroscopic dynamics", Nonlinearity **17** (2004):
R55--R127, available as a PDF
preprint.) When this is the case --- when we've found a self-contained set
of collective degrees of freedom --- then we've found macroscopic structures
and processes. So, for instance, for an ideal gas, instead of having to deal
with 10^{23} or more degrees of freedom, we have
three effective degrees of freedom (temperature, pressure and volume), all of
which are collective coordinates. Similarly, for a rigid body, there are only
a handful of effective degrees of freedom --- the location of the center of
mass, and the angles giving the orientation of the body. I think the question
of methodological
individualism in social science can be resolved along the same lines --- to
be cute, institutions are
collective degrees of freedom --- but I'm not qualified to have an opinion on
that. (Not like that stops me.)
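The center-of-mass case is simple enough to check numerically. Here is a minimal sketch (the particle count, drift velocity, and free-flight dynamics are my own toy assumptions): the collective coordinate obeys its own simple law, indifferent to the microscopic jiggling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Microscopic description: N equal-mass particles, 3N positional
# degrees of freedom, each with its own "thermal" velocity.
N = 1000
positions = rng.normal(size=(N, 3))
velocities = rng.normal(size=(N, 3))
velocities -= velocities.mean(axis=0)   # zero net internal motion
drift = np.array([1.0, 0.0, 0.0])       # common bulk drift
velocities += drift

# Collective degree of freedom: the center of mass.
com_0 = positions.mean(axis=0)

# Evolve the full 3N-dimensional microstate (free flight) for time t.
t = 2.0
positions_t = positions + velocities * t
com_t = positions_t.mean(axis=0)

# The collective coordinate forms a self-contained dynamical system:
# COM(t) = COM(0) + drift * t, whatever the individual particles do.
print(np.allclose(com_t, com_0 + drift * t))
```

Three thousand coordinates in, one effective trajectory out; the neglected internal motions never enter the equation for the center of mass, which is exactly what makes it a useful macroscopic variable.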

If you read the rest of Wallace's paper, you'll see how this connects to Daniel Dennett's work, and how decoherence, in particular, is actually a rather important part of making this story work, and deriving classical macroscopic objects as patterns in quantum-mechanical states. (Schrödinger's cat makes an appearance, after the tiger.) The argument is conceptual rather than mathematical, so I imagine the paper would be rather more accessible than some I've blogged about; but by the same token it would be nice if somebody would provide some quantitative examples.

Now, if you'll excuse me, I need a beer.

Posted by crshalizi at November 19, 2004 19:50 | permanent link