Posting has been light partly because of house-hunting (graciously hosted by John and Dani), but also because of finishing and submitting this:

- Kristina Lisa Klinkner, Cosma Rohilla Shalizi and Marcelo F. Camperi, "Measuring Shared Information and Coordinated Activity in Neuronal Networks", q-bio.NC/0506009
*Abstract*: Most nervous systems encode information about stimuli in the responding activity of large neuronal networks. This activity often manifests itself as dynamically coordinated sequences of action potentials. Since multiple electrode recordings are now a standard tool in neuroscience research, it is important to have a measure of such network-wide behavioral coordination and information sharing, applicable to multiple neural spike train data. We propose a new statistic, informational coherence, which measures how much better one unit can be predicted by knowing the dynamical state of another. We argue informational coherence is a measure of association and shared information which is superior to traditional pairwise measures of synchronization and correlation. To find the dynamical states, we use a recently-introduced algorithm which reconstructs effective state spaces from stochastic time series. We then extend the pairwise measure to a multivariate analysis of the network by estimating the network multi-information. We illustrate our method by testing it on a detailed model of the transition from gamma to beta rhythms.

This was really Kris and Marcelo's problem, and their solution; but they let
me kibitz. Still, since it's mostly their work (by a *large* margin),
I feel like I can say that it's *very nice*, without tooting my own horn
too much. Existing methods, like cross-correlation or joint peristimulus time
histograms, can handle certain kinds of coordination between neurons, but
basically they do it by making really restrictive assumptions about what
patterns of activity the neurons might be using, and how they're related. If
you look at cross-correlation, for instance, you're sticking to linear
relationships between one neuron's activity at one moment and another
neuron's activity after some time-lag, and ignoring non-linear relationships,
or relationships between extended patterns (rather than just momentary activity).
What Kris and Marcelo realized is that you don't *have* to make this
kind of assumption. If we had a way to *discover* each neuron's
characteristic patterns of behavior, we could just look at the moment-to-moment
coordination of those patterns. But reconstructing the effective state space,
which is something we know how to
do, is the same thing as discovering those patterns; and there we are.
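
The paper's actual pipeline reconstructs each neuron's effective states with a state-space reconstruction algorithm; but once the state sequences are in hand, the pairwise measure comes down to estimating the mutual information between two discrete sequences. As a minimal sketch (the `mutual_information` function and the toy sequences here are illustrative assumptions of mine, not code from the paper), using the plug-in estimator:

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Plug-in (empirical) estimate, in bits, of the mutual information
    between two aligned sequences of discrete state labels."""
    n = len(xs)
    assert n == len(ys) and n > 0
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y) * log2( p(x,y) / (p(x)p(y)) ), with counts/n as probabilities
        mi += (c / n) * log2(c * n / (px[x] * py[y]))
    return mi

# Two perfectly anti-phase state sequences: knowing one unit's state
# pins down the other's, so they share the full 1 bit of entropy.
a = [0, 1, 0, 1, 0, 1, 0, 1]
b = [1, 0, 1, 0, 1, 0, 1, 0]
print(mutual_information(a, b))  # 1.0
```

The point of working with reconstructed states rather than raw spikes is exactly that the labels `xs` and `ys` stand for extended patterns of behavior, so the same simple estimator picks up coordination that momentary measures miss.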

Another nice feature of this approach is that, as we say in the abstract, it lets us get at global coordination in a way which goes beyond just averaging pairwise measurements. Since we use mutual information between states, it would be nice to look at the global mutual information — essentially, how far the joint distribution of all the neurons' states departs from statistical independence. Estimating that global distribution directly is really hard, but it turns out one can use Chow-Liu trees to put a very reasonable lower bound on the global information, while only having to estimate the joint distribution of each pair of neurons. This is like ignoring all higher-order interactions in statistical mechanics except those which can be decomposed into pairwise interactions, and it's actually (see Amari) a kind of maximum-entropy approximation. Nothing like this would work for, say, correlation coefficients. — We learned about Chow-Liu trees from a cool paper by Kirshner, Smyth and Robertson at UAI last year; I'm surprised they're not better known.
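The tree bound itself needs no probability machinery beyond the pairwise estimates: the lower bound on the multi-information is the total weight of the maximum-weight spanning tree over the units, with pairwise mutual informations as edge weights. A minimal sketch (the `chow_liu_bound` function and the toy MI matrix are my own illustrations, not from the paper), using Prim's algorithm:

```python
def chow_liu_bound(mi):
    """Lower bound on the network multi-information, given a symmetric
    matrix of pairwise mutual informations mi[i][j] = I[X_i; X_j]:
    the total weight of the maximum-weight spanning tree."""
    n = len(mi)
    in_tree = {0}  # grow the tree from unit 0 (Prim's algorithm)
    bound = 0.0
    while len(in_tree) < n:
        # heaviest edge connecting the tree to a unit outside it
        weight, new_unit = max((mi[i][j], j)
                               for i in in_tree
                               for j in range(n) if j not in in_tree)
        bound += weight
        in_tree.add(new_unit)
    return bound

# Toy example: three units, pairwise MIs in bits.
# The tree keeps edges 0-1 (0.9) and 1-2 (0.5), dropping 0-2 (0.2).
mi = [[0.0, 0.9, 0.2],
      [0.9, 0.0, 0.5],
      [0.2, 0.5, 0.0]]
print(chow_liu_bound(mi))  # 1.4
```

Only the pairwise distributions ever get estimated, which is what makes the bound feasible for large networks.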

There is no reason why informational coherence should measure
coordination *only* between neurons. But that will be another story.

*Manual trackback*: Idiolect

Minds, Brains, and Neurons; Enigmas of Chance; Networks; Self-Centered

Posted by crshalizi at June 10, 2005 13:54 | permanent link