I enjoy the responsibility of summary-type presentations. You have to sit through a whole day of speeches that vary from the mundane to the brilliant and rapidly evolve a set of material that allows you to go beyond a simple "he said this, she said that" approach. After two days of slides on methods and practice it was time to try and create some structure to allow me, and ideally the audience as well, to make some sense of what we had seen and heard. I wanted to show both how the different approaches fitted together and what had been missed, and more critically how we should define the boundaries of applicability for each.
What I needed was a conceptual framework within which I could position all that we had talked about, and all that perhaps we should have discussed. The resulting framework was very well received, and was still being recommended by luminaries such as Peter Schwartz the next day. Given that Peter more or less invented a lot of this material, his approval is a fairly high bar to clear, so that was good news. Brian Arthur liked it too, so I was in a very good mood in consequence. I'll also be picking some of this up in the new Cynefin Seminar.
The dimensions of the typology (which is very different from a taxonomy) were:
Once I had that in place on paper, I moved across to the iPad (which is becoming more and more a part of me) and developed the idea. As you can see, one of the first things I did was to match and extend the idea of causality. On the far left, cause and effect is self-evident or discoverable (the ordered domains of Cynefin), but as we move from the probable to the possible, understanding cause and effect becomes more problematic (common language in the foresight community). As we move further to the right again, systems are not causal but dispositional (something for a future post here). This is a key aspect of any complex adaptive system: at a system level there is no repeating relationship between cause and effect, but the system can be disposed to evolve in certain ways and not in others.
Critically, while on the left of the framework we attempt to reduce uncertainty, as we move to the right increasingly we have to absorb uncertainty.
The green diagonal line of types then followed naturally. Systems models work well in the complicated domain of Cynefin and have high utility. They range from fairly simple examples to more sophisticated ones such as the Afghanistan Stability model, which got a lot of publicity at the time, including the somewhat ironic comment from General McChrystal: "When we understand that slide, we'll have won the war." A lot of people now use that picture as an example of how not to do something, which is a major mistake. These types of models are designed for experts by experts and have considerable sense-making utility; outside the community of those who created them, however, they have little value.
From there we move to games, including the multi-participant games that are being used in military strategy and pharmaceutical development, along with many other fields. These involve more people and thus increase scanning capability. I made the point that human-mediated games can be more flexible and are cheaper to set up; we don't need to be obsessed with doing everything in a simulation! From there, and fairly close, are the agent-based models that typify what I call computational complexity. Here I can create vast numbers of agents and get some real value. It's worth remembering, though, that the confusion of simulation with prediction can afflict these models as much as the confusion of correlation with causation affects more traditional social science. It's also worth remembering that swarm robots work in this area too, and their physical interaction can produce more variety.
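To make the idea of computational complexity concrete, here is a minimal sketch of an agent-based model in Python; this is a generic bounded-confidence opinion model, not any specific model discussed at the event. Agents influence each other only when their views are already close, and which clusters emerge depends on the history of interactions, not on any single identifiable cause — a small illustration of dispositional rather than causal behaviour.

```python
import random

def run_abm(n_agents=200, steps=2000, threshold=0.3, seed=42):
    """Bounded-confidence opinion model: each step, two random agents
    compare opinions (values in [0, 1]) and move to their midpoint,
    but only if they are already within `threshold` of each other."""
    rng = random.Random(seed)
    opinions = [rng.random() for _ in range(n_agents)]
    for _ in range(steps):
        i, j = rng.sample(range(n_agents), 2)
        if abs(opinions[i] - opinions[j]) < threshold:
            mid = (opinions[i] + opinions[j]) / 2
            opinions[i] = opinions[j] = mid  # both agents converge
    return opinions

if __name__ == "__main__":
    final = run_abm()
    print(f"opinion range after interaction: "
          f"{min(final):.2f} to {max(final):.2f}")
```

Run it with different seeds and thresholds and the clustering pattern changes even though the rules are identical — the point about simulation versus prediction in a nutshell.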
To go beyond this we then need whole-of-system modelling: mass narrative capture and representations such as fitness landscapes, which allow us to see the wider patterns or dispositions of a system, approaching Gell-Mann's ideal, namely that the only valid model of a CAS is the system itself.
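By way of illustration only — in real narrative work the landscape is derived from the captured data, not written by hand — the basic mechanics of a fitness landscape can be sketched by sampling a surface and finding its local peaks, the attractors towards which a system is disposed to move:

```python
import math

def fitness(x, y):
    # Hand-written toy landscape: ripples plus one broad hill near (1, 1).
    return (math.sin(3 * x) * math.cos(3 * y)
            + 0.5 * math.exp(-((x - 1) ** 2 + (y - 1) ** 2)))

def local_peaks(lo=-2.0, hi=2.0, n=41):
    """Sample an n-by-n grid and keep points higher than their four
    neighbours -- a crude picture of the attractor structure."""
    step = (hi - lo) / (n - 1)
    grid = [[fitness(lo + i * step, lo + j * step) for j in range(n)]
            for i in range(n)]
    peaks = []
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            v = grid[i][j]
            if (v > grid[i - 1][j] and v > grid[i + 1][j]
                    and v > grid[i][j - 1] and v > grid[i][j + 1]):
                peaks.append((lo + i * step, lo + j * step, v))
    return peaks

if __name__ == "__main__":
    for x, y, v in local_peaks():
        print(f"peak at ({x:.2f}, {y:.2f}), fitness {v:.2f}")
```

The sketch shows the multiple basins of a rugged surface; the dispositional reading is that the system is more likely to end up in some of them than in others, without any of them being predictable as a point outcome.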
Filling in the gaps
That was the main thrust, but the advantage of typologies such as this is that you start to look at the white spaces (or blue in this case) and things suddenly drop into place. Scenario planning is the method of choice for the problematic uncertainties of the possible. Add in wild cards and the use of science fiction (another Peter Schwartz original) and we can considerably extend the range of what people take account of, but we are still restricted to the limits of imagination, which is itself bounded in the present. This is also where things like Causal Layered Analysis (a post-structural approach) and other techniques work. They are not suitable for complexity (in the sense of CAS) but they have utility. It still surprises me, by the way, that otherwise intelligent people can pick these things up and apply them to CAS.
Mass sourcing without constraints is all about trawling open-source data using various algorithmic approaches. Again, a lot of value, but the absence of constraints limits meaning, as we need a human element. Pushing ideas out to wider audiences (crowd sourcing), virally or otherwise, then allows some constraint and greater direction, but it's not Wisdom of Crowds per se, as you can't be confident that the pre-conditions are present, in particular the need for deep tacit knowledge or understanding. Just having any large group of people is more likely to create the stupidity (and cruelty) of a mob.
Top and right, which is always the best place to be
This all leads to the right-hand side of the model and a key principle: if the system is complex you distribute the cognition, but you centralise the decision making. You need a large human and machine sense-making network in place so that you scan from multiple perspectives, but then you need to see the patterns of the whole and make decisions accordingly. Of course those decisions are not micro-decisions; they are about safe-to-fail experiments and the amplification or dampening of the results. They are about relaxing or strengthening constraints. But critically it is about sensing the whole, not the parts. Complex adaptive systems cannot be reduced to their parts; it therefore follows that decisions cannot be delegated and aggregated per se. Autonomy of decision making within constraints is another matter, and of course visibility is only needed at the boundaries, so this is not about full transparency (which can damage decision making by making people over-cautious) or Six Sigma-type accountability.
The key point here is disintermediation. While the leader is informed by the processes to the left of the model (and therefore communication is key, and we had a lot of good stuff on visualisation), on the right the leader has to be intimately involved in the data itself. This aspect of self-discovery is key to the translation from attending to acting. Seeing, attending and acting are separate processes. Good communication, especially communication that challenges perspective (which visualisation does well), can create attention, but action is a whole different ball game.
There is more to fill out here on how we make decisions, what type of decision support is necessary, and the nature of the See-Attend-Act model of sense-making in practice, but that is for a future post. I'd also make it clear that this framework is a work in progress; ideas and questions welcome.
Cognitive Edge Ltd. & Cognitive Edge Pte. trading as The Cynefin Company and The Cynefin Centre.
© COPYRIGHT 2022.