As part of the preparation for CalmAlpha this week I've been reviewing some of the general principles of complexity in human systems. A few days ago I expressed some concerns about the adoption of language without really thinking through the implications, and picked up one "ouch" today as a result. I also ran through seven intervention principles which expanded on the safe-to-fail idea that is so critical.
Today I want to think about some of the more common mistakes; fortunately fewer than seven, as I am keeping it to three.
1. Assuming that self-organisation is the same thing as anarchy
In nature, self-organisation happens through connectivity and constraints. It's not just a matter of allowing things to self-organise spontaneously; that generally results in spontaneous combustion! The degree of constraint here is key: too little and you just get a feeling of well-being without much sustainable product (this happens with a lot of open space sessions, good at the time but...). Too much and you just get what worked the last time. SNS was one of the techniques I put together to manage this balance in team formation. We added The Dispossessed to the CalmAlpha reading list to cover this issue.
2. The natural systems fallacy - thinking of deer in sylvan glades forgetting the cockroaches
You get a fair amount of this on the conference circuit: it's the belief that complex systems are natural systems, and that natural systems are a priori ethical/valuable/desirable. In practice complexity is morally neutral; it's a physical and social phenomenon that allows us to understand agent interaction within constraints. As Arthur has pointed out, technology development is also complex in nature, and we are defined as a species by our use of tools, which includes methods and processes.
3. Confusing humans with ants - the new Calvinists
One of the most common mistakes, especially among modellers. It's very easy to be seduced by the Boids algorithm and to try to discover deterministic rules that govern human behaviour, which then allow models and the consequent confusion of simulation with prediction. In practice human systems represent a separate field, or sub-field, of complexity (or at least some of us think so). I sometimes summarise the difference as the 3Is, namely intention, intelligence and identity. Animals operate by genetically determined or partially learned responses to stimuli. Humans are capable of controlling their environment; we change identity with context (so we don't have single agency) and we can act intentionally (which includes concepts of altruism and sacrifice). There are a few academics who rather dislike free will and challenge it, hence the secondary title.
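To make the seduction concrete: the Boids model generates convincing flocking from just three local rules (separation, alignment, cohesion). A minimal sketch is below; the parameter values and class names are illustrative choices of mine, not part of any canonical implementation. The point of the example is how little machinery is needed, which is precisely why it tempts people to assume equally simple deterministic rules must govern human systems.

```python
import math
import random

class Boid:
    """One agent with a position and a velocity (illustrative, hypothetical model)."""
    def __init__(self, x, y, vx, vy):
        self.x, self.y = x, y
        self.vx, self.vy = vx, vy

def step(boids, radius=5.0, sep_w=0.05, align_w=0.05, coh_w=0.01, max_speed=2.0):
    """Apply the three classic Boids rules once to every agent."""
    updates = []
    for b in boids:
        neighbours = [o for o in boids
                      if o is not b and math.hypot(o.x - b.x, o.y - b.y) < radius]
        ax = ay = 0.0
        if neighbours:
            n = len(neighbours)
            # Cohesion: steer toward the neighbours' centre of mass.
            cx = sum(o.x for o in neighbours) / n
            cy = sum(o.y for o in neighbours) / n
            ax += (cx - b.x) * coh_w
            ay += (cy - b.y) * coh_w
            # Alignment: match the neighbours' average velocity.
            avx = sum(o.vx for o in neighbours) / n
            avy = sum(o.vy for o in neighbours) / n
            ax += (avx - b.vx) * align_w
            ay += (avy - b.vy) * align_w
            # Separation: push away from each nearby neighbour.
            for o in neighbours:
                ax += (b.x - o.x) * sep_w
                ay += (b.y - o.y) * sep_w
        updates.append((b, ax, ay))
    for b, ax, ay in updates:
        b.vx += ax
        b.vy += ay
        speed = math.hypot(b.vx, b.vy)
        if speed > max_speed:  # cap speed so the flock stays stable
            b.vx *= max_speed / speed
            b.vy *= max_speed / speed
        b.x += b.vx
        b.y += b.vy

random.seed(1)
flock = [Boid(random.uniform(0, 20), random.uniform(0, 20),
              random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(30)]
for _ in range(50):
    step(flock)
```

Note that nothing in the code has intention, intelligence or identity: every agent obeys the same fixed stimulus-response rules, which is exactly the gap between this kind of simulation and human systems.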
It's not an exhaustive list, but it covers some of the most common mistakes. More source material tomorrow, and of course the title of this post (from Joyce) makes the important point: it's not that errors or mistakes are somehow avoidable, although some can be. The issue is whether we learn and move forwards.
Cognitive Edge Ltd. & Cognitive Edge Pte. trading as The Cynefin Company and The Cynefin Centre.
© COPYRIGHT 2021.