Summary
This book is over 30 years old now but has aged very well. In spite of being a contemporary review of an emerging field – complexity theory – many of the ideas seem fresh and interesting still. It’s mainly a survey of the scientists contributing to the Santa Fe Institute around that time – a group that was trying hard to synthesize a common theory for systems that are far from equilibrium, demonstrate emergent behavior that is hard to extrapolate from components’ character, have high dependency on initial conditions, and other characteristics that make typical simplified models perform poorly. The work is still progressing, though perhaps not as fast as Waldrop might have hoped when he was observing it around 1990. For a more current summary, see The Complex World (Krakauer 2024).
Notes and Quotes of Interest
Complexity Overall
What we are really looking for in the science of complexity is the general law of pattern formation in non-equilibrium systems throughout the universe. (J. Doyne Farmer, p. 299)
The “complexity revolution” is:
“…in a sense, it’s the opposite of reductionism. The complexity revolution began the first time someone said, ‘Hey, I can start with this amazingly simple system, and look – it gives rise to these immensely complicated and unpredictable consequences.’” Instead of relying on the Newtonian metaphor of clockwork predictability, complexity seems to be based on metaphors more closely akin to the growth of a plant from a tiny seed, or the unfolding of a computer program from a few lines of code, or perhaps even the organic, self-organized flocking of simpleminded birds. (p. 329)
Many modern analyses, especially in economics, emphasize equilibrium, possibly subject to occasional shocks. But reality spends most of its time out of equilibrium – and sometimes, at some very important times, far from equilibrium.
It is also path dependent, with small initial causes able to create significant long-term effects (analogous to how chaotic systems behave).
Positive feedback is often present and can produce massive consequences. In fact, real life is a sea of interacting positive and negative feedback loops. Nonlinearities increase the difficulty of predicting and modeling systems.
Properties of many real systems are emergent. Simple, but interconnected subsystems can produce complex system behavior that is hard to predict from the characteristics of the subsystems. And it seems the complexity is often more due to the pattern of interconnections than the characteristics of the nodes themselves.
One version of emergence is self-organization. Cellular automata are a simple but rich example of self-organization.
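As a concrete illustration (not from the book), here is a minimal one-dimensional cellular automaton – Wolfram’s Rule 110, a standard example of complicated structure emerging from a trivially simple local rule:

```python
# Minimal 1-D cellular automaton (Wolfram's Rule 110). Each cell's next
# state depends only on itself and its two neighbors, yet the global
# pattern that unfolds is famously intricate. Illustrative sketch only.

RULE = 110  # the rule number encodes the next state for each 3-cell neighborhood

def step(cells, rule=RULE):
    """Apply one synchronous update with periodic boundaries."""
    n = len(cells)
    out = []
    for i in range(n):
        # read the neighborhood (left, self, right) as a 3-bit number
        idx = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> idx) & 1)
    return out

# Start from a single live cell and watch structure unfold.
cells = [0] * 31
cells[15] = 1
for _ in range(15):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```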
Gene regulation (how a cell controls which genes are “turned on” or expressed) can be thought of as a form of computation. Stuart Kauffman explored random Boolean networks to investigate generic self-organizing properties of gene regulatory networks. He noted that sparse connection was important to supporting emergent behavior.
Well, he thought, one obvious prediction of his model was that real genetic networks ought to be sparsely connected; otherwise they seemed incapable of settling down into stable cycles. He didn’t expect them to have precisely two inputs per gene, like his model networks. Nature is never quite that regular. But, from his computer simulations and his reams of calculations, he realized that the connections only had to be sparse in a certain statistical sense. And when you looked at the data, by golly, real networks seemed to be sparse in exactly that way. (p. 112)
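A toy version of Kauffman’s experiment is easy to sketch (the parameters below are illustrative, not his): wire up N genes, each reading K = 2 randomly chosen others through a random Boolean function, then iterate until the network falls into an attractor cycle:

```python
import random

# Toy Kauffman random Boolean network: N genes, each reading K=2 others
# through a random Boolean rule. Sparse wiring (small K) is what the text
# credits with producing short, stable attractor cycles. Parameters are
# arbitrary illustrations.

random.seed(0)
N, K = 12, 2
inputs = [random.sample(range(N), K) for _ in range(N)]   # which genes each gene reads
tables = [[random.randint(0, 1) for _ in range(2 ** K)]   # a random Boolean rule per gene
          for _ in range(N)]

def update(state):
    """Synchronously update every gene from its K inputs."""
    return tuple(tables[g][(state[inputs[g][0]] << 1) | state[inputs[g][1]]]
                 for g in range(N))

def cycle_length(state, max_steps=10000):
    """Iterate until a state repeats; return the attractor's period."""
    seen = {}
    for t in range(max_steps):
        if state in seen:
            return t - seen[state]
        seen[state] = t
        state = update(state)
    return None

start = tuple(random.randint(0, 1) for _ in range(N))
print("attractor cycle length:", cycle_length(start))
```

Because the state space is finite (2^N states), every trajectory must eventually repeat; the interesting empirical question, as Kauffman found, is how short the cycles tend to be when the wiring is sparse.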
Sparse connection also seems to be true of neural networks – both organic and artificial.
Another interesting “network” is one of catalytic chemicals. Think of them like logic and amplifiers (nonlinear elements) involved in the production of other chemicals. This could have produced self-organizing feedback-loop-driven “autocatalytic” chemical networks at the origin of life.
Economics
And autocatalytic chemical networks are analogous to an economy.
Most obviously, they agreed, an autocatalytic set was a web of transformations among molecules in precisely the same way that an economy is a web of transformations among goods and services. In a very real sense, in fact, an autocatalytic set was an economy – a submicroscopic economy that extracted raw materials (the primordial “food” molecules) and converted them into useful products (more molecules in the set). (p. 126)
And such a set can become much more complex:
Moreover, an autocatalytic set can bootstrap its own evolution in precisely the same way an economy can, by growing more and more complex over time….If innovations result from new combinations of old technologies, then the number of possible innovations would go up very rapidly as more and more technologies became available. In fact…once you get beyond a certain threshold of complexity you can expect a kind of phase transition analogous to the ones he had found in his autocatalytic sets. Below that level of complexity, you would find countries dependent upon just a few major industries, and their economies would tend to be fragile and stagnant. In that case, it wouldn’t matter how much investment got poured into the country. “If all you do is produce bananas, nothing will happen except that you produce more bananas.” But if a country ever managed to diversify and increase its complexity above the critical point, then you would expect it to undergo an explosive increase in growth and innovation – what some economists have called an “economic takeoff.” (p. 126)
It also supports the importance of trade:
The existence of that phase transition would also help explain why trade is so important to prosperity… Suppose you have two countries, each one of which is subcritical by itself. Their economies are going nowhere. But now suppose that they start trading, so that their economies become interlinked into one large economy with a higher complexity. “I expect that trade between such systems will allow the joint system to be supercritical and explode outward.” (p. 126)
And relates to booms and crashes:
Finally, an autocatalytic set can undergo exactly the same kind of evolutionary booms and crashes that the economy does. Injecting one new molecule into the soup could often transform the set utterly, in much the same way that the economy was transformed when the horse was replaced by the automobile. [This part of autocatalysis demonstrated] upheaval and change and enormous consequences flowing from trivial-seeming events – and yet with deep law hidden beneath. (p. 126)
On homo economicus and its implications:
Unfortunately, the economists’ standard solution to the problem of expectations – perfect rationality – drove the physicists nuts. Perfectly rational agents do have the virtue of being perfectly predictable. That is, they know everything that can be known about the choices they will face infinitely far into the future, and they use flawless reasoning to foresee all the possible implications of their actions. So you can safely say that they will always take the most advantageous action in any given situation, based on the available information. Of course, they may sometimes be caught short by oil shocks, technological revolutions, political decisions about interest rates, and other noneconomic surprises. But they are so smart and so fast in their adjustments that they will always keep the economy in a kind of rolling equilibrium, with supply precisely equal to demand.
The only problem of course, is that real human beings are neither perfectly rational nor perfectly predictable – as the physicists pointed out at great length. Furthermore, as several of them also pointed out, there are real theoretical pitfalls in assuming perfect predictions, even if you assume that people are perfectly rational. In nonlinear systems – and the economy is most certainly nonlinear – chaos theory tells you that the slightest uncertainty in your knowledge of the initial conditional will often grow inexorably. After a while, your predictions are nonsense.
…
The economists, backed into a corner, would reply, “Yeah, but this allows us to solve these problems. If you don’t make these assumptions, then you can’t do anything.”
And the physicists would come right back, “Yeah, but where does that get you – you’re solving the wrong problem if that’s not reality.” (p. 142)
In other words, economics sometimes assumes omniscience and unbounded computational resources. Another way to look at it is – in what situation would it be likely that rational expectations would work?
“The question was,” says [Brian] Arthur, “does realistic adaptive behavior lead you to the rational expectations outcome? To my mind the answer was yes – but only if the problem is simple enough, or if the conditions are repeated again and again. Basically, rational expectation is saying that people aren’t stupid. Then it’s like tic-tac-toe: after a few times I learn to anticipate my opponent, and we both play perfect games. But if it’s a one-off situation that’s never going to happen again, or if the situation is very complicated, so that your agents have to do an awful lot of computing, then you’re asking for a hell of a lot. Because you’re asking them to have knowledge of their own expectations, of the dynamics of the market, of other people’s expectations, of other people’s expectations about other people’s expectations, et cetera. And pretty soon, economics is loading almost impossible conditions onto these hapless agents.” Under those circumstances, Arthur and Holland argued, the agents are so far from equilibrium that the “gravitational pull” of the rational expectations outcome becomes very weak. Dynamics and surprise are everything. (p. 271)
Some contrasts between significant parts of 20th century neoclassical economic theory and that suggested by complexity theory:
| Neoclassical economics | Complexity economics |
| --- | --- |
| Decreasing returns (negative feedback) | Increasing returns (positive feedback) |
| Static equilibrium (punctuated by external shocks) | Often far from equilibrium |
| Perfect rationality | Bounded rationality |
| Perfect information through all time; unbounded computation | Imperfect information; degraded prediction |
| Optimization | Imperfect (satisficing) search via adaptation and learning |
Santa Fe Institute’s economics program experimented with simulations using agents built on Holland’s classifier system:
…such an economy-under-glass would be profoundly different from conventional economic simulations, in which the computer just integrated a bunch of differential equations. His economic agents wouldn’t be mathematical variables, but agents – entities caught up in a web of interaction and happenstance. They would no more be governed by mathematical formulas than human beings are. (p. 243)
Chaos
Interesting models for complex systems can somehow combine both path-dependency and chaos into useful models:
“I think these kinds of models are the place for contingency and law at the same time….The point is that the phase transitions may be lawful, but the specific details are not.” (Quoting Stuart Kauffman, p. 127)
But chaos (dynamical systems theory) has limitations:
…chaos theory by itself didn’t go far enough. It told you a lot about how certain simple rules of behavior could give rise to astonishingly complicated dynamics. But despite all the beautiful pictures of fractals and such, chaos theory actually had very little to say about the fundamental principles of living systems or of evolution. It didn’t explain how systems starting out in a state of random nothingness could then organize themselves into complex wholes. Most important, it didn’t answer his old question about the inexorable growth of order and structure in the universe. (p. 288)
Path dependence can be an analogy to chaos theory (simple dynamical systems which nonetheless show high sensitivity to initial conditions) but also to the idea of multiple equilibria (there are many choices of where you could end up).
Prediction
John Holland builds on the importance of prediction in his analyses:
More generally, said Holland, every complex adaptive system is constantly making predictions based on its various internal models of the world – its implicit or explicit assumptions about the way things are out there. Furthermore, these models are much more than passive blueprints. They are active. Like subroutines in a computer program, they can come to life in a given situation and “execute,” producing behavior in the system. In fact, you can think of the internal models as the building blocks of behavior. And like the other building blocks, they can be tested, refined, and rearranged as the system gains experience. (p. 146)
But prediction is not the be all and end all – or even the critical component – of science:
Look at meteorology, he told them. The weather never settles down. It never repeats itself exactly. It’s essentially unpredictable more than a week or so in advance. And yet we can comprehend and explain almost everything that we see up there. We can identify important features such as weather fronts, jet streams, and high-pressure systems. We can understand their dynamics. We can understand how they interact to produce weather on a local and regional scale. In short, we have a real science of weather – without full prediction. And we can do it because prediction isn’t the essence of science. The essence is comprehension and explanation. (p. 255)
On Adaptation and Evolution
A collection of phenomena becomes important: emergence, collective behavior, spontaneous organization, and – importantly – adaptation. This is getting at a key question: how do complex systems adjust to their environment? Evolution – a common form of adaptation – can be thought of as exploration of a very large state space.
It works at more than one scale:
The cut and try of evolution isn’t just to build a good animal, but to find good building blocks that can be put together to make many good animals. (p. 170)
An important part of Holland’s work on adaptation in genetic algorithms is his schema theory, which states that “short, low-order schemata with above-average fitness increase exponentially in frequency in successive generations.”
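The machinery the schema theorem describes – selection, crossover, mutation – can be sketched with a minimal genetic algorithm on the toy “OneMax” problem (maximize the number of 1-bits in a string); this shows the mechanics only, not the theorem itself, and all parameters are illustrative:

```python
import random

# Minimal genetic algorithm on OneMax (fitness = number of 1-bits).
# Selection amplifies fit individuals, crossover recombines their
# building blocks, and mutation injects variation.

random.seed(1)
LENGTH, POP, GENS, MUT = 20, 30, 40, 0.01

def fitness(bits):
    return sum(bits)

def crossover(a, b):
    """Single-point crossover: splice building blocks from two parents."""
    cut = random.randrange(1, LENGTH)
    return a[:cut] + b[cut:]

def mutate(bits):
    """Flip each bit independently with probability MUT."""
    return [b ^ (random.random() < MUT) for b in bits]

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENS):
    # fitness-proportionate (roulette-wheel) selection of parents
    weights = [fitness(ind) + 1 for ind in pop]
    parents = random.choices(pop, weights=weights, k=2 * POP)
    pop = [mutate(crossover(parents[2 * i], parents[2 * i + 1]))
           for i in range(POP)]

print("best fitness:", max(fitness(ind) for ind in pop))
```

Short, fit bit patterns (schemata) survive crossover more often than long ones because the cut point is less likely to split them – the intuition behind “short, low-order schemata with above-average fitness increase exponentially in frequency.”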
A different take on adaptation from evolution is different types of learning. But evolution itself is robust in the sense that:
“Evolution doesn’t care whether problems are well-defined or not.” (p. 254)
and
…human programmers [use] well-tested algorithms to solve precisely specified problems in clearly defined environments. But in the poorly defined, constantly changing environments faced by living systems…there seems to be only one way to proceed: trial and error, also known as Darwinian natural selection. The process may seem terribly cruel and wasteful, he pointed out. In effect, nature does its programming by building a lot of different machines…and then smashing the ones that don’t work very well. But, in fact, that messy, wasteful process may be the best that nature can do. (p. 282)
Holland on competition:
“In contrast to mainstream artificial intelligence, I see competition as much more essential than consistency,” he says. Consistency is a chimera, because in a complicated world there is no guarantee that experience will be consistent. But for agents playing a game against their environment, competition is forever. “Besides,” says Holland, “despite all the work in economics and biology, we still haven’t extracted what’s central in competition.” There’s a richness there that we’ve only just begun to fathom. Consider the magical fact that competition can produce a very strong incentive for cooperation, as certain players spontaneously forge alliances and symbiotic relationships with each other for mutual support. It happens at every level and in every kind of complex, adaptive system, from biology to economics to politics. “Competition and cooperation may seem antithetical,” he says, “but at some very deep level, they are two sides of the same coin.” (p. 185)
Indeed, the combination of competition/cooperation and evolution leads to coevolution:
“…organisms in an ecosystem don’t just evolve, they coevolve. Organisms don’t change by climbing uphill to the highest peak of some abstract fitness landscape, the way biologists of R.A. Fisher’s generation had it. (The fitness-maximizing organisms of classical population genetics actually look a lot like the utility-maximizing agents of neoclassical economics.) Real organisms constantly circle and chase each other in an infinitely complex dance of coevolution.” (p. 259)
Game Theory
Game theory, of course, has much to say about competition and cooperation. The prisoner’s dilemma is simple but interesting. The winning algorithm for repeated plays in the famous 1980 computer tournament organized by Michigan political scientist Robert Axelrod was TIT FOR TAT.
Submitted by psychologist Anatol Rapoport of the University of Toronto, TIT FOR TAT would start out by cooperating on the first move, and from there on out would do exactly what the other program had done the move before. That is, the TIT FOR TAT strategy incorporated the essence of the carrot and the stick. It was “nice” in the sense that it would never defect first. It was “forgiving” in the sense that it would reward good behavior by cooperating the next time. And yet it was “tough” in the sense that it would punish uncooperative behavior by defecting the next time. Moreover, it was “clear” in the sense that its strategy was so simple that the opposing program could easily figure out what it was dealing with. (p. 264)
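The strategy is simple enough to state in a few lines. A sketch of an iterated prisoner’s dilemma using Axelrod’s standard payoffs (T=5, R=3, P=1, S=0):

```python
# Iterated prisoner's dilemma with Axelrod's standard payoff matrix.
# TIT FOR TAT opens with cooperation, then mirrors the opponent's
# previous move. Illustrative sketch, not Axelrod's tournament code.

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    return 'C' if not their_hist else their_hist[-1]

def always_defect(my_hist, their_hist):
    return 'D'

def play(strat_a, strat_b, rounds=10):
    """Play repeated rounds; return the two cumulative scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # exploited once, then mutual defection: (9, 14)
```

Against itself, TIT FOR TAT locks into mutual cooperation; against a pure defector it loses only the first round, then refuses to be exploited further – which is why pockets of TIT FOR TAT players outscore the “sleazoids” around them.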
And TIT FOR TAT style cooperation might do well even in a mixed society:
More generally, Axelrod said, the process of coevolution should allow TIT FOR TAT style cooperation to thrive even in a world full of treacherous sleazoids. Suppose a few TIT FOR TAT individuals arise in such a world by mutation, he argued. Then so long as those individuals meet one another often enough to have a stake in future encounters, they will start to form little pockets of cooperation. And once that happens, they will perform far better than the knife-in-the-back types around them. Their members will therefore increase. Rapidly. Indeed, said Axelrod, TIT FOR TAT-style cooperation will eventually take over. And once established, the cooperative individuals will be there to stay; if less-cooperative types try to invade and exploit their niceness, TIT FOR TAT’s policy of toughness will punish them so severely that they cannot spread. “Thus,” wrote Axelrod, “the gear wheels of social evolution have a ratchet.” (p. 265)
Axelrod develops this further in “The Further Evolution of Cooperation.” (Axelrod and Dion 1988)
This also aligns well with Sauer’s description of the development of morality in The Invention of Good and Evil. (Sauer 2024)
Holland’s Classifier System
Three principles that underlay Holland’s Classifier System:
…that knowledge can be expressed in terms of mental structures that behave very much like rules; that these rules are in competition, so that experience causes useful rules to grow stronger and unhelpful rules to grow weaker; and that plausible new rules are generated from combinations of old rules. (p. 193)
This is like evolution but with the replacement of random mutation by (re)combination.
Domains in Dynamic Systems and “the edge of chaos”
Dynamic systems vary from ordered to “complex” to chaotic. The complex zone is where life and many other interesting phenomena emerge. “Life originated at the edge of chaos.”
Chris Langton analogized several different system types using the general idea of phase transitions (around p. 232):

| Domain | Ordered regime | “Edge of chaos” | Chaotic regime |
| --- | --- | --- | --- |
| Cellular automata classes | I & II | “IV” | III |
| Dynamical systems | Order | “Complexity” | Chaos |
| Matter | Solid | Phase transition | Fluid |
| Computation | Halting | “Undecidable” | Nonhalting |
| Life | Too static | “Life/Intelligence” | Too noisy |
The edge of chaos framing has a metaphor in a shifting pile of sand:
For the best and most vivid metaphor, he says, imagine a pile of sand on a tabletop, with a steady drizzle of new sand grains raining down from above…The pile grows higher and higher until it can’t grow any more: old sand is cascading down the sides and off the edge of the table as fast as the new sand dribbles down. Conversely…you could reach exactly the same state by starting with a huge pile of sand: the sides would collapse until all the excess sand had fallen off.
Either way, he says, the resulting sand-pile is self-organized, in the sense that it reaches the steady state all by itself without anyone explicitly shaping it. And it’s in a state of criticality, in the sense that sand grains on the surface are just barely stable. In fact, the critical sand pile is very much like a critical mass of plutonium, in which the chain reaction is just barely on the verge of running away into a nuclear explosion – but doesn’t. The microscopic surfaces and edges of the grains are interlocked in every conceivable combination, and are just ready to give way. When a falling grain hits, there’s no telling what might happen. Maybe nothing. Maybe just a tiny shift in a few grains. Or maybe, if one collision led to another in just the right chain reaction, a catastrophic landslide will carry off one whole face of the sand pile. In fact, all these things do happen at one time or another. Big avalanches are rare, and smaller ones are frequent. But the steadily drizzling sand triggers cascades of all sizes – a fact that manifests itself mathematically as the avalanches’ “power-law” behavior: the average frequency of a given size of avalanche is inversely proportional to some power of its size.
…
Just as a steady trickle of sand drives a sand pile to organize itself into a critical state, a steady input of energy or water or electrons drives a great many systems in nature to organize themselves in the same way. They become a mass of intricately interlocking subsystems just barely on the edge of criticality – with breakdowns of all sizes ripping through and rearranging things just often enough to keep them poised on the edge. (p. 305)
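The metaphor has a standard formalization: the Bak–Tang–Wiesenfeld sandpile model, sketched here with arbitrary grid size and drop count. Drop grains on a grid; any cell holding four or more grains topples, shedding one grain to each neighbor and possibly triggering a cascade:

```python
import random

# Bak-Tang-Wiesenfeld sandpile sketch. A cell with 4+ grains topples,
# giving one grain to each of its 4 neighbors; grains falling off the
# edge are lost. Avalanche sizes (topplings per drop) empirically follow
# a power law: freq(s) ~ s^(-alpha).

random.seed(0)
SIZE = 20
grid = [[0] * SIZE for _ in range(SIZE)]

def drop(r, c):
    """Add one grain at (r, c); topple until stable; return avalanche size."""
    grid[r][c] += 1
    topplings = 0
    unstable = [(r, c)]
    while unstable:
        i, j = unstable.pop()
        while grid[i][j] >= 4:
            grid[i][j] -= 4
            topplings += 1
            for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if 0 <= ni < SIZE and 0 <= nj < SIZE:
                    grid[ni][nj] += 1
                    if grid[ni][nj] >= 4:
                        unstable.append((ni, nj))
    return topplings

sizes = [drop(random.randrange(SIZE), random.randrange(SIZE))
         for _ in range(20000)]
big = [s for s in sizes if s > 0]
print("avalanches:", len(big), "largest:", max(big))
```

After enough drops the pile self-organizes to the critical state: most drops cause nothing, some cause small slides, and a rare few rearrange large regions, with no characteristic avalanche size.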
There is a claim here, as in Ormerod (Ormerod 2005), that power-law behavior implies an underlying “complex system” – in Waldrop’s case, one that is “on the edge of chaos.”
Artificial Life
Are computer viruses life?
…many of the participants felt that [computer] viruses had come uncomfortably close to crossing the line [to be defined as alive] already. The pesky things met almost every criterion for life that anyone could think of. Computer viruses could reproduce and propagate by copying themselves into another computer or to a floppy disk. They could store a representation of themselves in computer code, analogous to DNA. They could commandeer the metabolism of their host (a computer) to carry out their own functions, much as real viruses commandeer the molecular metabolism of infected cells. They could respond to stimuli in their environment (the computer again). And – courtesy of certain hackers with a warped sense of humor – they could even mutate and evolve. True, computer viruses lived their lives entirely within the cyberspace of computers and computer networks. They didn’t have any independent existence out in the material world. But that didn’t necessarily rule them out as living things. If life was really just a question of organization, as Langton claimed, then a properly organized entity would literally be alive, no matter what it was made of. (p. 283)
And what about other types of artificial life?
…suppose that you could create life. Then suddenly you would be involved in something a lot bigger than some technical definition of living versus nonliving. Very quickly, in fact, you would find yourself engaged in a kind of empirical theology. Having created a living creature, for example, would you then have the right to demand that it worship you and make sacrifices to you? Would you have the right to act as its god? Would you have the right to destroy it if it didn’t behave the way you wanted it to?
…“whether we have correct answers or not, they must be addressed, honestly and openly. Artificial life is more than just a scientific or technical challenge; it is a challenge to our most fundamental social, moral, philosophical, and religious beliefs. Like the Copernican model of the solar system, it will force us to reexamine our place in the universe and our role in nature.” (p. 284)
Implications
On politics:
Witness the collapse of communism in the former Soviet Union and the Eastern European satellites [Chris Langton] says: the whole situation seems all too reminiscent of the power-law distribution of stability and upheaval at the edge of chaos. “When you think about it,” he says, “the Cold War was one of these long periods where not much changed. And although we can find fault with the U.S. and Soviet governments for holding a gun to the world’s head – the only thing that kept it from blowing up was Mutual Assured Destruction – there was a lot of stability. But now that period of stability is ending. We’ve seen upheaval in the Balkans and all over the place. I’m more scared about what’s coming in the immediate future. Because in the models, once you get out of one of these metastable periods, you get into one of these chaotic periods where a lot of change happens. The possibilities for war are much higher – including the kind that could lead to a world war. It’s much more sensitive now to initial conditions.
“So, what’s the right course of action?” he asks. “I don’t know, except that this is like the punctuated equilibrium in evolutionary history. It doesn’t happen without a great deal of extinction. And it’s not necessarily a step for the better. There are models where the species that dominates in the stable period after the upheaval may be less fit than the species that dominated beforehand. So, these periods of evolutionary change can be pretty nasty times. This is the kind of era when the United States could disappear as a world power. Who knows what’s going to come out the other end?” (p. 320)
In general
“Matter has managed to evolve as best it can. And we’re at home in the universe. It’s not Panglossian, because there’s a lot of pain. You can go extinct, or broke. But here we are on the edge of chaos because that’s where, on average, we all do the best.” (Stuart Kauffman, p. 322)
Other Quotes
Brian Arthur on the death of his 13-year-old daughter, Merit, by a hit-and-run driver:
“They say that time heals,” he adds. “But that’s not quite true. It’s simply that the grief erupts less often.”
Bibliography
Axelrod, Robert, and Douglas Dion. 1988. “The Further Evolution of Cooperation.” Science 242 (4884): 1385–90. https://doi.org/10.1126/science.242.4884.1385.
Krakauer, David. 2024. The Complex World. Santa Fe, NM: SFI Press.
Ormerod, Paul. 2005. Why Most Things Fail: Evolution, Extinction and Economics. London: Faber and Faber.
Sauer, Hanno. 2024. The Invention of Good and Evil: A World History of Morality. Translated by Jo Heinrich. New York, NY: Oxford University Press.
Waldrop, M. Mitchell. 1992. Complexity: The Emerging Science at the Edge of Order and Chaos. New York: Simon & Schuster.