Ormerod (2005)
Summary and Comments
Ormerod uses analogies between biological systems and economies (especially extinction events of species and firms) to argue that both are so complex – and a particularly difficult type of complexity – as to be essentially unpredictable. Policies of governments or firms that assume one can develop global models, optimize them, and then achieve desired outcomes are impotent. Better to rely on distributed innovation and competition for useful progress.
Qualitatively, I found the argument compelling. But the chain of evidence is not strong – reliance on apparent similarities, extrapolation from simple models, and induction carry a lot of weight.
That might explain why there doesn’t seem to be much evidence of impact from the book – it’s hard to find a lot of subsequent references or others building on it. Perhaps his heroes – Schumpeter and Hayek – have achieved momentum on their own that covers much of what he recommends.
But another possibility is that while Ormerod’s conclusions are reasonable, both they and the analysis supporting them are subsumed by a large body of work in the field of complex systems science. An excellent review by Siegenfeld and Bar-Yam (Siegenfeld and Bar-Yam 2020) supports this, and Ormerod is an active contributor in this community with the expected focus on economics (Farmer et al. 2012).
Quotes of Interest
On the management literature
A nice critique of popular management literature, similar to The Halo Effect (Rosenzweig 2007), is:
But as Hannah notes laconically, ‘The tendency to overemphasize successes, and to rationalize them ex post is chronically endemic amongst business historians and management consultants.’ The latter group are particularly prone to the temptation of claiming to have found the unique formula for business success. Books proliferate, and occasionally sell in very large numbers, which claim to have found the rule, or small set of rules, which will guarantee business success. But business is far too complicated, far too difficult an activity to distil into a few simple commands, be it the ‘set price equal to marginal cost’ of economic theory, or some of the more exotic exhortations of the business gurus. It is failure rather than success which is the distinguishing feature of corporate life. We see the survivors, and their triumphs are lionized. But the failures remain virtually forgotten. (p. 21)
A more complete and accurate quote, taken directly from Hannah (Hannah 1999), is:
The tendency to overemphasize successes (and to rationalize them ex-post) – what has been criticized as the “Whig” misinterpretation in the context of political history – is chronically endemic among them, as it is also among businessmen and management consultants (see, e.g., Hamel and Pralahad, 1994). It commonly coexists with the conviction that they have found the unique recipe for rectifying the failure of firms or countries (a trait particularly well-developed in the Anglo-Saxon world of one-time leaders who have allegedly failed). I believe that some of the insights this process has generated have been valuable: it has, for example, helped us to understand the role of corporate learning and organizational capabilities in generating asymmetries between firms that provide a key to understanding competitive performance. Like Molière’s Monsieur Jourdain and his prose, “new” industrial economists and businessmen are now beginning to formulate explicitly what thoughtful businessmen have long implicitly understood about the limits of the simpler neoclassical models of “old” industrial economists. The following comments are not intended to undermine that endeavor, but to reinforce it by disciplining some of its more adventurous generalizations.
Not so laconic after all. And also, a little more generous to other economists than Ormerod – they aren’t completely mired in their old neoclassical models.
On Risk and Uncertainty
Risk refers to situations in which the outcome cannot be known with certainty, but the probability of any given outcome is understood perfectly. A simple example would be a toss of a fair coin. There is a fifty–fifty chance of it being either heads or tails. If we are gambling on the next toss being heads, there is a risk that we will lose our money if it turns out to be tails. But we know precisely what the chances are. Uncertainty, in its strict sense, refers to situations in which the probability of the various outcomes is itself unknown. (p. 34)
But it doesn’t seem like the distinction can be binary as he notes later:
In practice, of course, the two concepts almost always blur into one another. In practice we rarely face situations which are as clear-cut as the outcome of the toss of a coin. Equally, it is unusual for us to have no idea whatsoever about the likely distribution of possible outcomes. The precise mix will depend upon circumstances. (p. 35)
On Cost-Plus Pricing versus Marginal Cost Equals Marginal Revenue
The profit maximizing price depends on the market (e.g., elasticity) as well as production costs, but cost plus pricing is hard to escape. He notes:
The classic study was carried out by two Oxford economists, Hall and Hitch, as long ago as 1939. They questioned the owners of thirty-eight firms and found that, rather than profit maximizing by producing where marginal cost is equal to marginal revenue, the majority in fact used cost-plus pricing. The entrepreneurs added up their costs of production and then added what they thought was a fair profit margin. A few took into account what the market price was, but none was able to calculate marginal costs and revenues. (p. 46)
And then notes evidence that this practice continues; this informs his main thesis:
The use of fairly simple rules to guide behaviour is in fact a rather sensible way to behave when confronted with a situation which is both enormously complicated and massively uncertain. (p. 47)
One imagines that Sull and Eisenhardt would agree wholeheartedly. (Sull and Eisenhardt 2016)
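To make the contrast concrete, here is a toy comparison (my own illustration; the cost, margin, and elasticity figures are arbitrary) of cost-plus pricing against the MR = MC rule under constant-elasticity demand, where the profit-maximizing price works out to c·e/(e−1):

```python
# Compare cost-plus pricing with the textbook profit-maximizing rule
# under constant-elasticity demand q = A * p**(-e). All numbers are
# illustrative assumptions, not figures from the book.

unit_cost = 10.0      # marginal (and average variable) cost per unit
markup = 0.30         # the "fair" margin a cost-plus firm might add
elasticity = 2.5      # price elasticity of demand (e > 1)

# Cost-plus rule: just add the margin to cost.
cost_plus_price = unit_cost * (1 + markup)

# MR = MC with constant elasticity e implies p = c * e / (e - 1)
# (the Lerner condition (p - c)/p = 1/e, rearranged).
optimal_price = unit_cost * elasticity / (elasticity - 1)

print(f"cost-plus price: {cost_plus_price:.2f}")   # 13.00
print(f"MR = MC price:   {optimal_price:.2f}")     # 16.67
```

The textbook rule requires knowing the elasticity, which the Hall and Hitch entrepreneurs could not calculate; the cost-plus rule needs only information the firm already has, which is exactly Ormerod's point about simple rules under uncertainty.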
The Challenge of System Optimization
Ormerod’s main claim is that complexity undermines predictability which in turn undermines control.
In order to control a system – any system, whether an economy, a biological system or a machine – we need to be able to do two things: first, make forecasts which are reasonably accurate in a systematic way over time; and second, understand with reasonable accuracy the effect of changes in policy on the system one is trying to control. Unless policy-makers know with reasonable confidence what state the system is likely to be in at some point in the future, it is not possible to say what action is required now in order to bring about a more desirable outcome. And unless the authorities understand the impact of their actions, it is not possible to know what should be done in order to bring about any desired outcome. The scope for failure abounds. (p. 71)
He builds on this later:
A key paradox begins to emerge from all this. Humans, whether acting as individuals or making collective decisions in companies or governments, behave with purpose. They take decisions with the aim of achieving specific, desired outcomes. Yet our view of the world which is emerging is one in which it is either very difficult or even impossible to predict the consequences of decisions in any meaningful sense. We may intend to achieve a particular outcome, but the complexity of the world, even in apparently simple situations, appears to be so great that it is not within our power to ordain the future. (p. 123)
and emphasizes the challenge for governments
It is in government rather than the world of commerce where goals are particularly multifarious and not easy to define. (p. 147)
Failure to Model and Control for Equal Income per Head
Ormerod notes some major policy goals that seem hard to satisfy. For equality of income per head:
A theory which predicts that in the long run all areas will converge on the same level of income per head is clearly in serious difficulty if this is not actually observed even for areas within the same country. The same sort of gap that we see between states in America exists between the individual countries of western Europe and, for the most part, between the regions within each of these countries. In short, the mainstream social sciences have not succeeded in establishing any firm theoretical guidelines on the evolution of inequality over time, whether between the individuals within a country or between countries themselves. Further, the empirical evidence seems to indicate that inequality […] (p. 26)
Capitalist societies don’t necessarily embrace equality of income per head as a goal, though calls for “equity” might. Meritocracy-based capitalist systems mostly (all?) assume income inequality is a necessary and motivating attribute. Ormerod, though, sees the difficulty of modeling inequality as itself emblematic of the challenge of complex systems.
Like many phenomena in the social sciences, inequality appears to have the characteristics of what is known as a complex system. There is an inherent lack of predictability. Whatever the method used, it is not possible to make consistently accurate forecasts. The distribution of outcomes over time may be reasonably stable, so that we can make meaningful statements about the range which the system will explore, but we cannot say with confidence where it will be at any given moment. (p. 70)
Subsequently, he elaborates and summarizes:
In the previous chapter, we saw the key reasons why policy-makers cannot control the degree of inequality that emerges in a society. They operate in a very complicated world, in which decisions are enmeshed in inescapable uncertainty regarding their consequences. We saw how inequality evolves both at a global and national level, seemingly indifferent to the well-intentioned efforts of policy-makers to promote greater equality. Sometimes it improves, sometimes it deteriorates. Policies fail. Again, it is the sheer dimension of the problem, the fact that so many influences impinge upon it, which makes it difficult to comprehend and control. (p. 95)
Failure of Economic Forecasts
The persistent failure of economic forecasts is perhaps a more familiar example of this point. Regardless of the data which is gathered, the statistical technique which is applied or the particular economic theory which is used, economic forecasts over time are bound to contain substantial errors. The economy at the aggregate level behaves much more like a purely random system than one which can be predicted and controlled.
It may seem implausible that economic systems behave as if they were almost random. However, this near-random quality does not mean in any way that the individual components of an economy – people, firms, governments – take decisions at random. On the contrary, they act with purpose and intent. But the consequences of these millions upon millions of individual decisions, interacting with each other all the time, lead to an overall outcome, for total output (GDP), say, that appears as if it were close to being random. The sheer dimensions of the problem are simply too great for the system to be understood properly. There are simply too many factors that determine the outcome, and whose relative importance alters over time, for the complete picture ever to be grasped. (p. 72)
This is an observation and a plausible one, but how rigorously has it been evaluated? For support, he cites his own paper with Craig Mounfield which looks at economic growth over time, models it using random matrix theory, and concludes that “the correlations matrices are similar, though not identical, to those implied by random matrix theory” (Ormerod and Mounfield 2000). This is certainly an example of evidence for near randomness, but it is hard to prove a negative (to assert a system is entirely random by describing and testing every possible deviation from randomness). The approach is perhaps more potent in highlighting the areas where randomness fails – smoking guns for correlation – and that is part of this paper, as well as one using similar techniques to look at randomness in stock prices (the efficient market hypothesis). (Plerou et al. 2002)
Failure of Desegregation
Ormerod describes Schelling’s models, which demonstrate how slight individual preferences can lead to self-segregation. He concludes:
Attempts to promote social integration, whether along class or racial lines, have largely failed, despite the fact that most individuals do not seem to feel strongly about the issues. They are happy to be integrated but, by an ‘unconscious tacit agreement’, this does not happen.
The Schelling model may seem simple, but it is a hugely complex system, whose outcomes depend upon the interactions between its component agents. No single agent in the model appears able to anticipate what might happen to the composition of its local neighbourhood, because ultimately the outcome depends […]
Yet we can go much further than just invoking the dimension of the problem in order to appreciate the inherent unpredictability of the system. We have perfect and complete information about how individual agents behave. We know all there is to know in this particular context, yet we are still quite unable to predict the precise distribution of locations that will emerge in any particular solution of the model. Uncertainty reigns. (p. 96)
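A minimal version of Schelling’s model (my own sketch; the grid size, vacancy rate, and 30% contentment threshold are illustrative choices, not parameters from the book) shows mild preferences producing marked segregation:

```python
import random

random.seed(0)

N = 20                 # grid is N x N, wrapped at the edges (a torus)
EMPTY_FRAC = 0.1       # fraction of cells left vacant so agents can move
THRESHOLD = 0.3        # content if >= 30% of occupied neighbours match

# Populate the grid with two agent types (0 and 1) and empty cells (None).
cells = [0, 1] * int(N * N * (1 - EMPTY_FRAC) / 2)
cells += [None] * (N * N - len(cells))
random.shuffle(cells)
grid = [cells[i * N:(i + 1) * N] for i in range(N)]

def neighbours(r, c):
    """The 8 surrounding cells, wrapping around the grid edges."""
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr or dc:
                yield grid[(r + dr) % N][(c + dc) % N]

def unhappy(r, c):
    me = grid[r][c]
    if me is None:
        return False
    same = sum(1 for n in neighbours(r, c) if n == me)
    occupied = sum(1 for n in neighbours(r, c) if n is not None)
    return occupied > 0 and same / occupied < THRESHOLD

def similarity():
    """Average share of like-type neighbours across all occupied cells."""
    scores = []
    for r in range(N):
        for c in range(N):
            if grid[r][c] is not None:
                nb = [n for n in neighbours(r, c) if n is not None]
                if nb:
                    scores.append(
                        sum(1 for n in nb if n == grid[r][c]) / len(nb))
    return sum(scores) / len(scores)

before = similarity()
for _ in range(50):  # sweeps: unhappy agents jump to random empty cells
    movers = [(r, c) for r in range(N) for c in range(N) if unhappy(r, c)]
    empties = [(r, c) for r in range(N) for c in range(N) if grid[r][c] is None]
    if not movers:
        break
    random.shuffle(movers)
    for r, c in movers:
        er, ec = empties.pop(random.randrange(len(empties)))
        grid[er][ec], grid[r][c] = grid[r][c], None
        empties.append((r, c))
after = similarity()
print(f"like-neighbour share: {before:.2f} -> {after:.2f}")
```

The like-neighbour share climbs well above the 30% any individual agent demands, which is the unsettling emergent result: no agent wants segregation, yet the system delivers it.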
Haldane quote on progress in science
He [J.B.S. Haldane] was responsible for many very quotable aphorisms, of which perhaps the best is his description of the various stages through which a new scientific theory passes: ‘There are four stages of acceptance: i) this is worthless nonsense; ii) this is an interesting, but perverse, point of view; iii) this is true, but quite unimportant; iv) I always said so.’ (p. 189)
See Goodreads quote and Quote Investigator report.
Evolution, Power Laws, and Complexity
A key idea of Ormerod’s is that complex systems are difficult to model and optimize; consequently, attempts to improve performance are better served by other approaches, in particular natural selection from biology[1].
He starts noting that evolution in biology doesn’t require system models or even a sense of what a global optimum means:
Yet it is not within the capacity of sunflowers, or indeed of any creature, to plan to grow feet, to act with the conscious intention of producing a mutated variant of the species possessing feet. The changes which gave the plants the ability to turn their heads and follow the sun took place at random. (p. 191)
Ormerod is particularly interested in the frequency over time of extinctions in biology – the result, he contends, of the behavior of a complex, adaptive system. He goes to some length to build a case that extinctions follow a power law (fat-tailed) distribution over time.
Everyone agrees that a power law gives a good description of the pattern in the data, and the debate is about whether or not a slightly different type of law gives a slightly better description. (p. 199)
And he finds power law distributions particularly telling:
Now, the discovery of so-called power law behaviour in widely different areas challenges perceptions of causality at the system-wide, macro level. (p. 205)
…
It is clear that dramatic, large-scale events are far less frequent than small ones. In systems characterized by power-law behaviour, however, they can occur at any time and for no particular reason. The world is not turned completely upside down by these discoveries. Most of the time, small events, small shocks to the system, will only have small impacts, and large shocks will usually have big consequences.
But the fact that we observe power-law behaviour in a system tells us that the system operates in ways that mean that these relationships do not always hold. Sometimes, a very small event can have profound consequences, and occasionally a big shock can be contained and be of little import.
It is not the power law itself which gives rise to these unexpected features of causality; rather, it is the fact that we observe a power law in a system which tells us that causality within it behaves in this way. The conventional way of thinking, which postulates a link between the size of an event and its consequences, is broken. (p. 206)
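The weight a power-law tail gives to extreme events can be made concrete with a numeric comparison (my own illustration, not from the book; the Pareto exponent of 3 is an arbitrary choice):

```python
import math

def gaussian_tail(x):
    """P(|Z| > x) for a standard normal variable."""
    return math.erfc(x / math.sqrt(2))

def power_tail(x, alpha=3.0):
    """P(X > x) for a Pareto-style tail with exponent alpha, x >= 1."""
    return x ** (-alpha)

# How likely is an event x standard deviations out under each model?
for sigma in (2, 5, 10):
    print(f"{sigma:>2}-sigma event: normal {gaussian_tail(sigma):.1e}, "
          f"power law {power_tail(sigma):.1e}")
```

Under the Gaussian a 10-sigma event is effectively impossible (probability around 10⁻²³), while the power-law tail still assigns it about one chance in a thousand, which is why, as Ormerod says, large events "can occur at any time and for no particular reason."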
And he draws further evidence from random networks, which can exhibit a sensitivity to small shocks that produces power law distributions (large events at surprisingly high frequency) in various measurements or outputs.
Power-law behaviour at the level of the system as a whole arises from the ways in which the component parts of the system are connected. In Chapter 8, we introduced the idea that we can think of the economy as being characterized by its network of connections; of how, for example, firms are linked to each other, and whether the flow across any particular connection is positive or negative. Sometimes, it is the distribution of the connections themselves which follows a power law, as is the case, say, with sexual contacts. Sometimes, it is the connections which give rise to power-law behaviour of another feature of the system, such as the relationship between the size and frequency of extinction events in the fossil record.
Systems which are connected in these ways are extremely difficult to insulate against shocks spreading across most or the whole of the system. (p. 207)
Surprisingly, he doesn’t reference chaos theory – nonlinear systems in which a small change in inputs can produce a large change in outputs – perhaps because such systems need not be complex in their description.
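As an illustration of that point (my own, not from the book), the logistic map is a one-line nonlinear rule that nonetheless defeats prediction: two starting points differing by one part in ten billion soon disagree completely.

```python
# Sensitivity to initial conditions in the logistic map x -> r*x*(1-x).
# r = 4 is the fully chaotic regime; trajectories stay in [0, 1].
r = 4.0
x, y = 0.2, 0.2 + 1e-10   # two starting points differing by 1e-10

max_gap = 0.0
for step in range(60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    max_gap = max(max_gap, abs(x - y))

print(f"largest divergence over 60 steps: {max_gap:.3f}")
```

The divergence grows roughly exponentially until it saturates at order one, so after a few dozen steps the two trajectories carry no usable information about each other, despite the rule being trivially simple and perfectly known.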
He then ties power laws back not only to the outputs of a system but to the network connections themselves in a random network (such as the connections between actors in an economy):
In networks in which the number of connections for the individual components follows a power-law distribution, the critical proportion of the population that needs to be infected for the entire system to become infected does not exist. In other words, in principle, if just one single component becomes a carrier of a virus or a shock or an idea, there is a risk that this will percolate across the entire population. Of course, the probability that this will happen is very small indeed, but it is greater than zero. If suddenly a substantial number of people become infected, the chances are higher that it will percolate across the system, but the possibility exists that the shock will be contained and fade away. In a purely random network of connections, neither of these properties are true. Either enough agents are infected for their number to be above the critical value for the shock to spread, or they are not, in which case the shock will simply die out. (p. 208-209)
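The vanishing threshold he describes can be sketched numerically. A standard mean-field result from network epidemiology (my assumption here, not a formula from the book) puts the critical transmission rate at roughly ⟨k⟩/⟨k²⟩, where k is a node’s number of connections; for a power-law degree distribution, ⟨k²⟩ grows without bound as the largest allowed degree grows.

```python
# Mean-field epidemic threshold for a network: roughly <k> / <k^2>,
# where k is a node's degree. For p(k) ~ k^-2.5, <k^2> diverges as the
# maximum degree grows, so the threshold sinks toward zero: any single
# carrier can, in principle, infect the whole system.

def threshold(pk_pairs):
    """Critical transmission rate ~ <k> / <k^2> for degree dist pk_pairs."""
    mean_k = sum(k * p for k, p in pk_pairs)
    mean_k2 = sum(k * k * p for k, p in pk_pairs)
    return mean_k / mean_k2

def power_law(kmax, gamma=2.5):
    """Normalized power-law degree distribution on degrees 1..kmax."""
    weights = [(k, k ** -gamma) for k in range(1, kmax + 1)]
    z = sum(w for _, w in weights)
    return [(k, w / z) for k, w in weights]

for kmax in (10, 100, 1000, 10000):
    print(f"max degree {kmax:>6}: threshold ~ {threshold(power_law(kmax)):.4f}")
```

As the cutoff degree rises the threshold keeps falling, which is the network-level mechanism behind Ormerod’s claim that such systems are extremely difficult to insulate against spreading shocks.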
He sees examples in finance, in particular with the implication of the collapse of Long Term Capital Management (LTCM) in 1998 and the response of the global financial system. He notes that LTCM was informed by Nobel Prize-winning economists (e.g., Merton) and their models like Black-Scholes, but that these were vulnerable to what Taleb would call black swans (Taleb 2016):
The fund failed essentially because it embodied a view of the world in which big changes, big events, have big causes, and since we see very few large identifiable shocks to economies, certainly in the western world, it is then easy to believe that major changes in financial markets will happen with only exceptional rarity. (p. 214)
But a world like that would have tails that decay faster than a power law. Observation, though, says they don’t – for stock prices, for instance:
Here is what real statistical physicists Gene Stanley and some of his colleagues wrote about stock-market fluctuations in 2000: ‘The probability distribution of stock-price fluctuations: stock-price fluctuations occur in all magnitudes, from tiny fluctuations to drastic events such as market crashes. The distribution of price fluctuations decays with a power-law tail and describes fluctuations that differ in size by as much as eight orders of magnitude.’ (p. 214)
Extinction Events
Ormerod draws an interesting if tenuous chain of logic. Biological extinction events follow a power law distribution over time, and species (humans aside) cannot plan. Firm extinctions also follow a power law distribution. So even though firms apparently can plan, it is as if they couldn’t: their planning is impotent.
No biological species, with the exception of humanity, is able to anticipate the future and to plan its strategy accordingly. In reality, extinction is a pervasive feature for biological species, as it is for firms. Yet the people in companies are able to think about strategy, they are able to make decisions which will affect the ability of the firm to survive, and still extinction is a pervasive feature.
…
Creatures cannot plan their evolution. The outcome of this process over biological time across species as a whole gives rise to a distinct pattern in the connection between the frequency and size of extinction events. Firms can plan their strategies, the way they intend to change and evolve, but we nevertheless observe the very same distinctive signature in their extinctions over time.
The implication is that it is as if – at last a useful and meaningful way in which to use the economists’ favourite phrase! – firms acted at random, as if they were unable to act with intent and try to plan their futures. The massive uncertainty which often exists even in apparently simple situations means quite simply that intent is not the same as outcome. Firms try all the time to achieve favourable outcomes, but often they fail. And often they become extinct. (pp. 225-226).
And he ties this back to a model in which firms are connected by the kind of power law distribution of connections that he earlier noted result in fat tail shocks.
We will eventually see in Chapters 12 and 13 that it is the structure of the connections between firms, the network across which the impact of firms’ strategies percolates, which is the feature ultimately responsible for the patterns of extinction which we observe. (p. 226)
He elaborates on the implication for business strategy:
These facts support the view we have formed about the ability of firms to bring about a desired strategic outcome. After a short period of existence, once a set of very elementary errors have been avoided, there is almost nothing to be gained by further experience in terms of enhancing the prospects for survival. In addition, the benefits of being large extend only very weakly to the ability to avoid failure and extinction. (p. 245).
He insists that his reasoning is not tenuous:
The model of both biological and corporate extinction patterns illustrates these points clearly. From a scientific perspective, the model is very successful. It is based upon a small number of apparently simple rules, but these rules give rise to properties of the model which are hard, if not impossible, to deduce from inspection of the rules themselves. Crucially, these properties are consistent with key observed features of the phenomena we are trying to explain. (p. 260)
and draws it together:
The clear implication of this abstract theoretical model is that agents, firms, individuals, governments have very limited capacities to acquire knowledge about the true impact either of their strategies on others or of others on them. The model passes stringent scientific tests of validation. The properties that emerge from it, which are not at all obvious from a description of its component rules, accord very well with the subtle but clearly defined patterns of extinction which we observe. And we have seen that even the world’s largest firms are capable of making huge mistakes about the possible effects of their strategies.
To repeat a key phrase which needs to be hard-wired into the brain of every decision-maker, whether in the public or private sector, intent is not the same as outcome. Humans, whether acting as individuals or in a collective fashion in a firm or government, face massive inherent uncertainty about the effect of their actions. Whether it is the great characters of tragedy or giant corporations such as Microsoft, the future remains covered in a deep veil to all. Species, people, firms, governments are all complex entities that must survive in dynamic environments which evolve over time. Their ability to understand such environments is inherently limited.
These limits are a fundamental feature of the systems we have discussed, whether biological or whether in the realm of human social and economic organization, in which the individual agents are connected through networks which evolve over time. These limits can no more be overcome by smarter analysis than we are able to break binding physical constraints, such as our inability to travel faster than the speed of light. This is why things fail. (pp. 266-267).
Remedies and Mitigations
Ormerod draws on Schumpeter’s creative destruction and Hayek’s emphasis on the ability of the market to organically optimize. But he starts to draw strong analogies to evolution and adaptation, saying of Hayek, for instance:
Hayek’s view is much more rooted in biology. Individual behaviour is not fixed, like a screw or cog in a machine is, but evolves in response to the behaviour of others. Control and prediction of the system as a whole is simply not possible. (p. 272).
…
Much more generally, it is innovation, evolution and competition which are the hallmarks of a successful system. (p. 274)
The purpose of government policy, then, is not to plan economies but to promote these attributes:
So a demanding and competitive external environment is essential for the health of the system as a whole. But economic regulators, with the possible exception of the US authorities, are obsessed with something entirely different. Their mission in life is to promote competition within the system, to increase the level of competition between the individual agents inside the system. (pp. 278-279).
…
This is not to say that all public policy necessarily ends in failure. Almost at random, some will succeed. But the way of thinking, the Weltanschauung, which relies upon, which believes in detailed planning to achieve precise, carefully monitored aims is inherently doomed to fail. (pp. 287-288).
Bibliography
Farmer, J. Doyne, M. Gallegati, C. Hommes, A. Kirman, P. Ormerod, S. Cincotti, A. Sanchez, and D. Helbing. 2012. “A Complex Systems Approach to Constructing Better Models for Managing Financial Markets and the Economy.” The European Physical Journal Special Topics 214 (1): 295–324. https://doi.org/10.1140/epjst/e2012-01696-9.
Hannah, Leslie. 1999. “Marshall’s ‘Trees’ and the Global ‘Forest’: Were ‘Giant Redwoods’ Different?” In Learning by Doing in Markets, Firms, and Countries, 253–94. University of Chicago Press.
Ormerod, Paul. 2005. Why Most Things Fail: Evolution, Extinction and Economics. London: Faber and Faber.
Ormerod, Paul, and Craig Mounfield. 2000. “Random Matrix Theory and the Failure of Macro-Economic Forecasts.” Physica A: Statistical Mechanics and Its Applications 280 (3–4): 497–504. https://doi.org/10.1016/S0378-4371(00)00075-3.
Plerou, Vasiliki, Parameswaran Gopikrishnan, Bernd Rosenow, Luís A. Nunes Amaral, Thomas Guhr, and H. Eugene Stanley. 2002. “Random Matrix Approach to Cross Correlations in Financial Data.” Physical Review E 65 (6): 066126. https://doi.org/10.1103/PhysRevE.65.066126.
Rosenzweig, Philip M. 2007. The Halo Effect: And the Eight Other Business Delusions That Deceive Managers. London: Free Press.
Siegenfeld, Alexander F., and Yaneer Bar-Yam. 2020. “An Introduction to Complex Systems Science and Its Applications.” Complexity 2020 (July):1–16. https://doi.org/10.1155/2020/6105872.
Sull, Donald N., and Kathleen M. Eisenhardt. 2016. Simple Rules: How to Thrive in a Complex World. First Mariner Books edition. Boston New York: Mariner Books, Houghton Mifflin Harcourt.
Taleb, Nassim Nicholas. 2016. The Black Swan: The Impact of the Highly Improbable. Random House trade paperback edition. Incerto. New York: Random House.
[1] One might also consider other optimization strategies, such as gradient descent, which do optimize using models, but only local, approximate ones. In fact, these might be considered a simple augmentation of random mutation and natural selection (the mutation is not entirely random but is informed by some sense of what improvement might result).
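The contrast in the footnote can be sketched on a toy landscape (my own illustration; the function, step sizes, and iteration counts are arbitrary choices): gradient descent consults a local model of the slope, while mutation-plus-selection proposes random steps and keeps the improvements.

```python
import random

random.seed(1)

def f(x):
    return (x - 3.0) ** 2        # toy fitness landscape; minimum at x = 3

def grad(x):
    return 2 * (x - 3.0)         # the local model gradient descent uses

# Gradient descent: step downhill along the locally measured slope.
x = 0.0
for _ in range(100):
    x -= 0.1 * grad(x)
gd_result = x

# Random mutation + selection: propose a random step, keep it if better.
y = 0.0
for _ in range(1000):
    candidate = y + random.gauss(0, 0.5)
    if f(candidate) < f(y):
        y = candidate
es_result = y

print(f"gradient descent: {gd_result:.3f}, mutation+selection: {es_result:.3f}")
```

Both reach the neighbourhood of the optimum; the gradient version gets there in far fewer evaluations because its steps are informed rather than blind, which is exactly the "not entirely random" augmentation the footnote describes.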