Gaussianitis: a compulsive disorder characterised by a subject’s reflexive use of ‘Normal’ statistics in order to sidestep the complexity and ambiguity of life.
How does Gaussianitis work? Let me give you a couple of examples.
The interview with Nick Clegg (the LibDem leader) in GQ Magazine has stimulated a flurry of articles on sexual partner numbers. Is 30 normal for a 40-year-old man? Should I worry if my Casanova index is stuck at 5? Is my Don Giovanni parameter abnormal if I am at 100? Well, what does it mean to be normal in sexual life anyway? Now, this is an interesting question!
How do we assess normality in sexual behaviour? Well, one way is to collect a sample of the population, ask them about their sexual preferences, add up the answers and divide by the total number of respondents. We get the mean and then polish it up by getting rid of the outliers. Then we compare it with our own number and decide where we stand. Majority rules. Right? Wrong. It is a symptom of Gaussianitis at work, and, I’m sorry, it affects nearly everyone.
What’s wrong? Well, the error is in the assumption again: that the majority of individuals will be similar to each other. In a famous paper published in Nature in 2001, the Swedish sociologist Liljeros discovered that the distribution of sexual contacts follows a power law, that is, it has a long, fat tail: most individuals have only a few contacts and a few have very many. A power law distribution has no meaningful mean or variance because the extreme events in the tail (the Casanovas, the Don Giovannis, or Patient Zero Gaëtan Dugas) keep shifting both. And since those extreme events are far more common than one would expect, the mean and variance never converge. Or, more simply, for sufficiently heavy tails the mean and variance do not exist. What does this mean? Simply that the variability of the phenomenon is so large that there is no convergence toward any representative value. The representative value does not exist. As my friend Jack Cohen points out: “it’s like taking an average between a man and a woman. What do you get? A person with one breast, one testicle, one ovary, half a penis”. What is that representative of?
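To make this concrete, here is a minimal Python sketch comparing the running mean of a Gaussian sample with that of a heavy-tailed Pareto sample. The tail exponent is a made-up value chosen purely for illustration, not the one Liljeros reported: the point is only that the Gaussian mean settles almost immediately, while the heavy-tailed mean keeps being dragged around by each new ‘Casanova’ in the tail.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Gaussian sample: the running mean settles down almost immediately.
gauss = rng.normal(loc=10, scale=3, size=n)

# Heavy-tailed Pareto sample with tail exponent alpha = 0.8 (an illustrative,
# made-up value): with alpha <= 1 the theoretical mean does not exist, so the
# sample mean is dominated by the few largest observations and never settles.
alpha = 0.8
pareto = 1 + rng.pareto(alpha, size=n)

for m in (10**2, 10**3, 10**4, 10**5, 10**6):
    print(f"n={m:>9,}  Gaussian mean={gauss[:m].mean():7.3f}  "
          f"Pareto mean={pareto[:m].mean():12.1f}")
```

Run it with different seeds and the Gaussian column barely moves, while the Pareto column can jump by orders of magnitude between checkpoints. That instability is what “the mean and variance don’t exist” looks like in practice.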
Let me give you another example. Robert Axtell is an economist at George Mason University’s Center for Social Complexity. He gave a fascinating talk this year at the Organization Science Winter Conference. He asked the ‘simple’ question: what is a representative US firm in terms of employees? How do you find out? Well, you get a good database, you add up all the employees of US firms and then divide by the number of firms. You get something like, say, 20 (I don’t remember the exact number). Now this number matters if you are in charge of setting the regulatory frameworks for businesses and you want to make life better for the majority of firms. You assume that 20 is the best number to start from. Well, it turns out that the distribution of firm sizes in the US (and in most countries around the world) falls off as a power law. No mean, no variance, no representative firm. The most common firm, the mode in fact, has zero employees. Yes, zero: just the owner-manager, no employees. So where do you start when setting your regulatory frameworks for, say, SMEs?
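The same gap between the mean and the typical firm shows up in a purely synthetic simulation. The exponent and scale below are assumptions chosen for illustration, not Axtell’s data, but they reproduce the qualitative picture: the mean is pulled far above the typical firm, while the mode sits at zero employees.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)

# Synthetic firm sizes drawn from a heavy-tailed (Lomax / Pareto II)
# distribution and rounded down to whole employees.  The exponent 1.2
# is an illustrative choice, not an estimate from real data.
sizes = np.floor(rng.pareto(1.2, size=100_000)).astype(int)

print("mean  :", round(sizes.mean(), 1))                        # pulled up by a few giants
print("median:", int(np.median(sizes)))                         # typically 0 or 1
print("mode  :", Counter(sizes.tolist()).most_common(1)[0][0])  # almost surely 0
```

A regulator who designs rules around the mean is designing for a firm that barely exists.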
Jack Cohen, again, says that we humans (and it extends to firms and planets) are like hand-made pieces, made by an artisan, not standardised, mass-manufactured items. If this is true (and if you talk to him, or read his books, he can show you some quite convincing evidence), then the idea of simplifying the true complexity of life by assuming representative averages where none exist is a dangerous illusion. So next time you revert to an average to make a point, ask yourself whether it is Gaussianitis!
In a nice article on the pitfalls of statistics published today on Knowledge@Wharton (The Use — and Misuse — of Statistics: How and Why Numbers Are So Easily Manipulated – http://knowledge.wharton.upenn.edu/article.cfm?articleid=1928) there is an interesting discussion of how tricky statistics actually is. Nice, but it doesn’t go far enough.
How do we usually proceed when studying a field or phenomenon with a lot of apparent or real heterogeneity? Well, we are trained to look for simple explanations, to infer from patterns and regularities the existence of laws (when in doubt, apply Occam’s razor), and to expect our units of analysis (whether they are cities, people, firms, ecologies or economic transactions – let’s call them agents) to conform to those laws with some individual variation. Assuming the existence of a representative agent, we also expect that it is possible to rank our agents according to how far they are from that representative agent.
Now imagine what the world looked like before this type of reasoning was introduced. Unbounded variability, endless forms, capricious behaviours, permanent amazement at the diversity of natural and social phenomena. No surprise that Plato introduced the myth of the cave to try to establish some order in the messiness of reality (for Plato, all earthly forms were flawed reflections of the ideal type, which did not exist on earth).
Then came Quetelet, De Moivre, Gauss, Pearson and the rest, and it must have been intellectual nirvana. By using simple concepts such as averages and variances, they could explain the amazing diversity of reality. It worked everywhere, from atoms to voters, from societies to natural systems. The promise of statistics must have seemed unbounded. Quetelet thought, in a typically Platonic or pre-communist fashion, that the mean was the embodiment of the ideal form. Variance was evil and extreme variance indicated pathological behaviour. Order lay in homogeneity and the mean represented the signature of the ‘right’ value. Perfection rested with the average person, and consequently the role of politics was to create the average society. Many sciences followed. Substitute equilibrium for the mean, throw in the invisible hand and you get today’s market fundamentalism. In today’s FT, George Soros writes:
“For the past 25 years or so the financial authorities and the institutions they regulate have been guided by market fundamentalism: the belief that markets tend toward equilibrium and that deviations from it occur in a random manner. All the innovations – risk management, trading techniques, the alphabet soup of derivatives and synthetic financial instruments – were based on that belief. The innovations remained unregulated because authorities believe markets are self-correcting.”
As so often in the history of ideas, we forget the assumptions on which theories are built. For Gaussian statistics (and linear science at large) they are basically two: independence and randomness. Now, how many instances do you know of in the social sciences in which phenomena, or data points, are truly independent of each other and random? Yet take any sample of articles in the social sciences, especially in economics and management, and you will see that Gaussian statistics rules uncontested. Even worse, alternative methods and the underlying Weltanschauung (worldview) are actively resisted.
As Mandelbrot (the inventor of fractal geometry) puts it:
“The most diverse attempts continue to be made, to discredit in advance all evidence based on the use of doubly logarithmic graphs. But I think this method would have remained uncontroversial, were it not for the nature of the conclusion to which it leads. Unfortunately, a straight doubly logarithmic graph indicates a distribution that flies in the face of the Gaussian dogma, which long ruled uncontested. The failure of applied statisticians and social scientists to heed Zipf helps account for the striking backwardness of their fields.” (Mandelbrot, 1983: 404)
With a colleague from UCLA, Bill McKelvey, I have been studying the misuse of Gaussian statistics and exploring the potential of what is known as Paretian science (from Pareto, the Italian economist and sociologist). We find that almost anywhere you look, you find distributions that carry the unmistakable signature of Pareto distributions (also known as Zipf or power law distributions: long-tailed distributions whose mean and variance are unstable or do not exist), and they scream for a non-Gaussian interpretation.
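For readers who want to see what Mandelbrot’s ‘doubly logarithmic graph’ test looks like in code, here is a rough sketch on synthetic data (the sample and the exponent are invented for illustration). On log-log axes the complementary CDF of a power-law tail is approximately a straight line, and its slope gives a crude estimate of the tail exponent; a least-squares fit like this is only a diagnostic, and maximum-likelihood estimators are preferred for serious work.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic Pareto sample standing in for firm sizes, city sizes,
# sexual contacts, etc.  The exponent 1.5 is an illustrative choice.
x = 1 + rng.pareto(1.5, size=50_000)

# Empirical complementary CDF: the fraction of observations above each value.
xs = np.sort(x)
ccdf = 1.0 - np.arange(1, len(xs) + 1) / len(xs)

# Fit a straight line to the upper tail on doubly logarithmic axes;
# the last point is dropped because its CCDF is exactly zero.
tail = slice(len(xs) // 2, len(xs) - 1)
slope, intercept = np.polyfit(np.log(xs[tail]), np.log(ccdf[tail]), 1)
print("estimated tail exponent ~", round(-slope, 2))
```

If the fitted line stays straight over several orders of magnitude, Gaussian summaries of that data are at best uninformative and at worst misleading.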
More on this in my next blog.
In the New York Times (March 30) there is an interesting article on micro-projectors:
“The (micro)projectors may be particularly useful for business presentations — for example, when road warriors need to show a product video to small groups. No coordination would be needed to arrange for a screen. Instead, a patch of wall within a cubicle or restaurant could serve for an impromptu presentation. Carolina Milanesi, a research director in London for Gartner, the research firm, says she thinks the microprojectors are most likely to appeal to business travellers who, for example, could use them to beam PowerPoint shows from their smartphones”
And: “Insight Media forecasts a substantial and fast-growing market. ‘We anticipate total sales of more than $2.5 billion by 2012 for the companion models,’ Mr. Brennesholtz said, and $1 billion in revenue for projector modules that are integrated into cellphones and other devices.”
What is the problem with this prediction? Simple: it ignores exaptation and, more generally, how new applications emerge.
Behind any forecast there is a hidden set of assumptions. In this case the assumption is: a micro-projector is just a tiny projector! Ergo, it will be used in the same way! The newly acquired portability will extend the current applications, not change them. Linear thinking!
The disruptive innovation model is a good example of exaptation. Imagine a new technology that underperforms the incumbent technology (in Christensen’s language, ‘is not good enough’) on the set of attributes that current customers value. The new technology cannot compete with the incumbent and will survive only if it can find or create a new emerging niche where its different ‘package of attributes’ turns out to be advantageous. Usually this happens by trial and error. In this way, the disruptive innovation escapes competition by becoming something else. It may eventually become ‘good enough’ to attack the incumbent’s market position from below, that is, from the least demanding customer segment. The incumbent suddenly sees a new player coming out of nowhere. In fact, what was a moment before a product living in a different, non-competitive world enters the incumbent’s competitive space. The problem for the incumbent is that the pre-adaptation that makes the disruption possible is usually discovered when the disruption starts, not before! An exaptation!
The take-home message is that disruptions are often preceded by a process of application discovery: underperforming technologies survive by creating new niches based on new, non-competitive applications. How are the new applications discovered? Well, once prototypes are set free in the market, they link up with the nearly infinite universe of idiosyncratic needs, contexts, wants and combinatorial imagination of the users out there. The co-evolutionary process between prototypes and users creates the new application space; it makes up the rules of the game as it goes along.
So what is likely to happen with micro-projectors? That depends on another set of questions. Is the micro-projector an underperforming technology? Is it disruptive? Is it going to be exapted into something else? If scaling down changes the way projectors are used, then a micro-projector might become something very different from a conventional projector. If this happens, then the last to know (what may happen) will be the experts!
If I were in charge of the development and commercialisation of micro-projectors, I would give a number of them to a highly diverse (cognitively diverse, à la Scott Page) group of people and invite them to play with the micro-projectors, link them with existing technologies, invent new behaviours (that the micro-projector enables), form communities around the new behaviours, and so on. In other words, ‘exapt’ the micro-projector!
Nearly all biological traits, and many products for particular markets and functions, began life as something different. Feathers were selected for thermal insulation, microwave ovens started life as radar magnetrons, and gin and tonic was a concoction to mask the unpalatable taste of quinine for British troops in India. The analysis of the history of technology and of biological evolution shows that at the root of an adaptive trajectory it is usual for a structure to have been subverted – perverted – from a different function (Gould and Vrba called this “exaptation”). I did a quick review of 19th-century innovations and found that about 30% of them (the real number is likely to be higher) have an exaptational origin.
Generally, exaptation has been regarded as contingent and serendipitous. But if, as we think, there are regularities, if not rules, then the question becomes: can we exploit these regularities to improve innovation?
At the heart of the innovative potential of exaptation is the indefinite – rarely made explicit – range of potential functions of existing objects: “At the end of its production a US master sergeant wrote nostalgically of its uses: as seat, pillow, washbasin, cooking pot, nutcracker, tent-peg pounder, wheel chock, and even – with the explosive from an unserviceable Claymore mine – popcorn popper.” (Tenner, 2004: 253). What was this? The American army helmet.
Innovation-by-‘progressive’-adaptation, or the market-pull model of innovation, is what traditional business schools, and ‘adaptive radiation’ biologists, teach. This is bland, uninteresting and very incremental. Innovation-by-exaptation is different because the unforeseen connection between an existing tool and a new function (for which the tool was not designed) creates a new phase space (the microwave oven industry was created when peanut butter melted serendipitously in Dr. Spencer’s pocket while he was working on a magnetron in 1946).
For a jump to a new function, there must be permissive contexts that create effective bridges between old tools and new functions. Perhaps there are driving contexts, contexts that encourage contiguity of form and function, so that exaptation is promoted. Here the biological literature helps: ecosystem engineering (Jones et al., 1994), niche construction (Odling-Smee et al., 2003) and external physiology (Turner, 2002) all show how new feedbacks between organismal traits and meta-environmental factors appear. The earthworm I cited in my previous blog is a classic example. Such niche-constructing ‘engineer’ species build their own environment by ‘perverting’ environmental factors: they hijack external fluxes of ‘energy’, which they use to build new niches, thereby changing their selective trajectory – and everybody else’s! Niche construction (like the creation of new industrial niches/sectors) starts with an exaptation, then evolves/adapts into a locked dance between two pairs: exaptation/adaptation and organism/environment (or goods/market: think automobiles/gas stations or mobile phones/towers).
What properties of the context are permissive for exaptations? Clearly, a sparse, almost uninhabited region of phase space has few opportunities for the contiguities necessary to establish functional bridges among tools, technologies or indeed species. Exposure to new contexts (projects) favours translation of tools into new functions (‘horizontal’ exaptation), whereas cooptation of tool modules into new architectures (‘vertical’ exaptation) generates new technological families.
We claim that network-based recombinant environments, like the Silicon Valleys, the great cities and the industrial clusters of the world, are so innovative because they exploit the power of exaptation.
To conclude: how many academic articles do you know that discuss innovation as exaptation?
A few weeks ago Nature published an interesting article on the memory of slime mould, a simple aggregate of single-celled organisms. The cells form aggregates with emergent properties, one of which is memory. This triggers some interesting considerations, some of which should not surprise complexity sympathisers. The individual cells are close to zero-intelligence agents, like certain financial traders in modern agent-based modelling simulations. However, by interacting with each other they form a kind of super-organism and develop memory.
Now, one of the common ideas about evolution is that biological evolution and technological evolution are qualitatively different. Even smart people like Stephen Jay Gould subscribed to the view that cultural evolution is Lamarckian and genetic evolution is Darwinian. People have intentions, biological species don’t. The transmission of acquired characteristics – Lamarckism – is what we call education. End of story! For those of us with a sincere passion for evolution (and the suspicion that there is a lot to learn from biological evolution), the most we can do is appeal to the ‘biological metaphor’ and risk the paternalistic comments about lack of rigour, and so on.
End of story?
Well, I think there is hope for at least three reasons.
First, in a delightful short book (The Music of Life) the Oxford biologist Denis Noble shows that of the roughly 200 types of cells we have in our body, 199 ‘misbehave’ and seem to ‘follow’ Lamarck rather than Darwin. Only one type ‘obeys’ Darwin (by the way, if you are irritated by the countless conservative economists, management scholars and consulting gurus who think that evolution is all and only about mutation, selection and retention, this is your book!). The divide between cultural and biological evolution is narrower than it seems.
Second, our slime mould community shows behaviours that go beyond the genetic database (not program) of the single cell. Memory is emergent; it is not in the genes. Likewise, the argument that cultural evolution is different (from biological evolution) because humans have intentions and free will may be equally moot, as the evolution of artefacts and technologies largely takes place at the aggregate level of societies, where individuals’ intentions play only an indirect role.
Third, the whole Darwinian evolutionary castle is based on an asymmetry: species adapt to the environment, not vice versa. Humans, instead, control their environment through cultural evolution. In a splendid book called The Extended Organism, Turner shows how the humble earthworm (which is really a freshwater species) created an environment that works as its external kidneys. This environment is called the soil, and agriculture is a fortunate consequence of the activity of the earthworm. But this is exactly what we humans do. We build the environment that suits us. The environmental asymmetry applies neither to us nor to many other species.
Conclusion: since we cannot rule out that technological and biological evolution follow common dynamical patterns, we can at least take the risk of applying lessons from biological evolution to technological evolution and see what we learn. In my next blog I will show one of the interesting results of this approach.