Notebooks

Self-Organization

03 Dec 2010 08:00

Something is self-organizing if, left to itself, it tends to become more organized. This is an unusual, indeed quite counter-intuitive property: we expect that, left to themselves, things get messy, and that when we encounter a very high degree of order, or an increase in order, something, someone, or at least some peculiar thing, is responsible. (This is the heart of the Argument from Design.) But we now know of many instances where this expectation is simply wrong, of things which can start in a highly random state and, without being shaped from the outside, become more and more organized. This makes self-organization, to my mind, one of the most interesting concepts in modern science --- if also one of the most nebulous, because the ideas of organization, pattern, order and so forth are, as normally used, quite vague.

Self-organization is not just something I find interesting, it's also my research topic. More exactly, I did my Ph.D. dissertation on quantitative measures of self-organization, especially in cellular automata; the idea was to remove some of the vagueness in the idea of organization, and so make self-organization at least a bit less nebulous. I've explained this at (doubtless excessive) length in my "Is the Primordial Soup Done Yet?", which is based on a talk I gave to the Madison Chaos & Complexity Seminar in 1996, and which served as the basis for the first two chapters of my dissertation.

Well, I think I did come up with a good quantitative measure of self-organization. The complexity of a process ought to be measured by how much information is needed to predict its future behavior. (This is done locally for spatially extended systems --- how much do I need to know to predict what this point is going to do?) This in turn depends on the organization of the process, on its causal architecture; the higher the complexity, the greater the (effective) number of parts in that architecture. An autonomous process --- one which receives either constant or purely random inputs from the outside --- self-organizes if its complexity increases over time. So far as I can tell, this definition matches all the cases where people are agreed, intuitively, about whether or not something self-organizes, and it can definitely be applied to real data sets from real experiments. If you want to know more about this, read my paper with Kris Shalizi and Rob Haslinger, "Quantifying Self-Organization with Optimal Predictors", nlin.AO/0409024.
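The definition above can be seen in miniature in a toy calculation. The sketch below is my own illustration, not the optimal-predictor machinery of the paper: it groups length-k histories of a binary sequence by their (coarsened) empirical next-symbol distribution, treats each group as a predictive "state", and takes the entropy of the state-occupation distribution as the number of bits of the past a predictor must carry around. The function name and the coarsening scheme are assumptions for illustration.

```python
import math
import random
from collections import Counter, defaultdict

def predictive_complexity(seq, k=3, bins=4):
    """Crude estimate of how much information about the past is needed
    to predict the next symbol of a binary sequence (an illustrative
    stand-in for the measures discussed in the text)."""
    # Tally the next-symbol counts for each length-k history.
    nexts = defaultdict(Counter)
    for i in range(len(seq) - k):
        nexts[tuple(seq[i:i + k])][seq[i + k]] += 1
    # Merge histories whose coarsened next-symbol distributions agree:
    # these are the (approximate) predictive states.
    occupation = Counter()
    for past, counts in nexts.items():
        total = sum(counts.values())
        state = tuple(round(bins * counts[s] / total) for s in (0, 1))
        occupation[state] += total
    # Complexity = Shannon entropy (bits) of the state occupation.
    n = sum(occupation.values())
    return -sum(c / n * math.log2(c / n) for c in occupation.values())

random.seed(0)
coin = [random.randint(0, 1) for _ in range(10000)]   # i.i.d. noise
period2 = [i % 2 for i in range(10000)]               # ...010101...
print(predictive_complexity(coin))      # ~0 bits: the past tells you nothing
print(predictive_complexity(period2))   # ~1 bit: remember which phase you're in
```

Note that the fair coin, though maximally random, has (approximately) zero complexity on this measure: the best predictor needs to remember nothing. The period-2 sequence needs one bit, the current phase. A process would count as self-organizing here if this quantity grew over time under constant or random input.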

This leaves, however, the problem of "exorcising demons": how to tell self-organization from being organized by something else? How much can the outside world increase something's complexity, if the relevant part of the outside world has only a certain amount of complexity itself? There seem to be many situations like this --- embryological development, learning (with or without instruction), etc. Mitch Porter has been taxing me, since 1996, for not tackling this point, but I have only the vaguest notion of how to proceed.

Many writers conflate the notion of self-organization with that of emergence. Properly defined, however, there can be self-organization without emergence and emergence without self-organization. There is a link between them, but it's fairly subtle. See ch. 11 of the dissertation.

History of the idea. (Discussed at more length in ch. 1 of my dissertation.) The idea that the dynamics of a system can tend, of themselves, to make it more orderly, is very old. The first statement of it (naturally, a clear and distinct one) that I can find is by Descartes, in the fifth part of his Discourse on Method, where he presents it hypothetically, as something God could have arranged to have happen, if He hadn't wanted to create everything Himself. Descartes elaborated on the idea at great length in a book called Le Monde, which he never published during his life, for obvious reasons. (I strongly suspect the hypothetical form of his discussion was simply to keep himself out of trouble with the churches, but I'm not enough of a scholar of his life and thought to show that.) Of course, the ancient atomists (among others) had argued that a designing intelligence was unnecessary, but their arguments were all of the "worlds enough and time" variety: given enough time and space and matter, organization is bound to happen somewhere, sometime, by sheer chance. (Lucretius gives this the interesting twist that stable forms, produced by chance, will last longer than unstable ones, but doesn't take it anywhere.) What Descartes introduced was the idea that the ordinary laws of nature tend to produce organization. --- Since there are people, taken seriously by fellow scholars, who say that self-organization etc. represents a break with the Cartesian, mechanistic, reductionist, etc. tradition of science, I find this more amusing and significant than it probably is. (For related history, see Avram Vartanian, From Descartes to Diderot.)

More modernly, the term "self-organizing" seems to have been introduced in 1947 by W. Ross Ashby, and not (as some references claim) by a pair of computer scientists in 1954; the paper of theirs which is generally cited in support of this claim in fact refers to Ashby! It had the misfortune to get wrapped up with general systems theory in the 1960s, but was taken up by physicists and people working on complex systems in the 1970s and 1980s, which is when the real deluge began. When queried with the keyword self-organ*, Dissertation Abstracts finds nothing before 1954, and only four total before 1970. There were 17 in the years 1971--1980; 126 in 1981--1990; and 593 in 1991--2000. A truly distressing number of these are in the social sciences. Some of them use the term in a perfectly legitimate way, e.g., "the self-organization of a miners' union". (In fact, Raymond Aron speaks of ideas and interests "organizing themselves" in The Opium of the Intellectuals, the French original of which was published in 1955. It would be interesting to know what the French phrase translated this way is, when it was introduced, and whether Aron might have run across Ashby or some other cyberneticist.) Still, there's an immense quantity of fluff peddled under this label. (I have mentioned only a few of the least offensive below.)

"Self-organized criticality" is a rather different story, though I now encounter a fair number of physicists who infer the meaning of "self-organized" backwards from that term.

Self-organization in action: some instances of evolutionary computation and Darwin Machines; qwerty; cellular automata (some of them), dissipative structures, and pattern formation in general; the so-called edge of chaos. Nest-building by social insects. Flocking.
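One of the cellular-automaton instances can be watched in a few lines of code. The sketch below is my own illustration, not anything from the text: elementary CA rule 184, the "traffic" rule, started from a random configuration at density 1/2 on a ring. The "jam fraction" score (adjacent pairs of cars) is an ad hoc disorder measure I chose for the example; under rule 184 at this density the jams annihilate and the lattice settles into the perfectly alternating pattern, without any outside shaping.

```python
import random

RULE = 184  # elementary CA rule 184, the "traffic" rule: a car (1)
            # advances rightward whenever the cell ahead is empty (0)

def step(cells):
    """One synchronous update of the ring of cells."""
    n = len(cells)
    return [(RULE >> (4 * cells[i - 1] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)]

def jam_fraction(cells):
    """Fraction of adjacent '11' pairs: cars stuck behind other cars."""
    n = len(cells)
    return sum(cells[i] & cells[(i + 1) % n] for i in range(n)) / n

random.seed(1)
n = 1000
cells = [1] * (n // 2) + [0] * (n // 2)
random.shuffle(cells)          # random initial condition at density 1/2
before = jam_fraction(cells)   # roughly 0.25 for a random start
for _ in range(n):             # n steps is ample for a ring of size n
    cells = step(cells)
print(before, jam_fraction(cells))   # the jams dissolve: final fraction is 0.0
```

From any random start at density 1/2, the disorder score falls to zero: the rule's own dynamics produce the order, which is the point of calling it self-organization.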

Many phase transitions look self-organizing (e.g., magnetization, crystal growth, and liquid crystal phenomena) --- but are all of them? (Of course, the transitions in the opposite direction would be self-disorganization, and all of them are or can be in thermodynamic equilibrium.)

Iffy. Turbulence. Ecosystems, especially during succession. "Financial euphoria" and other market behavior. (Iffy phenomena are reasons to want a test for self-organization.)

Addendum, 17 December 2007: I have just found an earlier (but apparently passing and incidental) use of "self-organizing": "The self-organisation of society depends on commonly diffused symbols evoking commonly diffused ideas, and at the same time indicating commonly understood action" --- A. N. Whitehead, Symbolism: Its Meaning and Effect (Macmillan, 1927), p. 76.

Related theoretical topics: biological order; complexity theory; complexity measures; computational mechanics; cybernetics; non-linear dynamics; statistical mechanics

Probably connected somehow: adaptation [But self-organization does not imply any kind of adaptation]; design principles (cf. Christopher Alexander, Herbert Simon); the Russell-Whitehead notion of "structure"; simulation and modelling


Notebooks:     Hosted, but not endorsed, by the Center for the Study of Complex Systems