One of the main (if not the main) strategies for dealing with a complex system is to create a range of safe-fail experiments or probes that will allow the nature of emergent possibilities to become more visible. In an ordered system fail-safe design is not only possible but probably mandatory; the only issue is who does the design, and with what validation processes. A complex system has no repeating relationships between cause and effect, is highly sensitive to small interventions and cannot be managed by outcome-based targets, hence the need for experimentation. Note: if none of this makes sense then read the HBR article, or at no cost this summary from Tom Stewart, or a more elaborate version here from myself.
Now the issue which arises is how to construct such interventions, and whether there are any rules or principles that would help. I have a reasonably well established approach here, and recently Raymond over at the narrative lab came up with 9 principles with which I have some, but not complete, agreement.
Firstly, to my normal mode of operation. Taking the Cynefin framework in a workshop environment (ideally around half a day, but an hour is enough if that is all you have), I get the group to break a strategic issue into the four main domains of the Cynefin framework, in effect treating it as a categorisation framework. More elaborate processes involve full sense-making with the framework emerging from the data, but let's keep it simple for now.
One of the rules for this is to break the issue or problem up into discrete issues which are clearly in one domain or another. This helps the process a bit, as ordered systems are capable of being reduced to parts, so if there is a problem in reduction it's a good indicator that the problem is complex. It is rare here to find things in the chaotic domain, but it has been known. Either way, that task complete, I break the group up and let them work in parallel (if time) or on separate areas (if little time) with the following decision rules.
If the groups are working in parallel then we compare and synthesise results. If the problem has been broken up we do a quick validation and then proceed to the interesting bit: how to handle the residual complex issues. This is often the largest space, especially if you are dealing with conflict resolution, for which the process is well suited.
Now at this point there will be lots of views, often conflicting, as to how to deal with the issue. This is where conflict is most frequent, because the nature of the domain means there is contradictory evidence, and disputes are likely to escalate as participants become locked into a particular perspective while defending it. So here are the stages:
All neat and tidy, and it works well. So let's have a look at Raymond's 9 rules and provide some commentary. I am not doing this to be critical, by the way; this is good stuff and we need more exchange in the network to create more robust methods. Raymond's words in italics, my commentary following.
Don't be afraid to experiment – some will fail.
I would go further than this and say that experiments should be designed with failure in mind. We often learn more from failure than success anyway, and this is research as well as intervention. We want to create an environment in which success is not privileged over failure in the early stages of dealing with complex issues.
Every experiment will be different – don't use the cookie-cutter approach when designing interventions.
Yes and no. You might want the same experiment run by different people or in different areas. Cookie-cutter approaches tend to be larger scale than true safe-fail probes, so this may or may not be appropriate.
Don't learn the same lesson twice – or maybe I should say, don't make the same mistake twice.
Disagree, you can never be sure of the influence of context. Often an experiment which failed in the past may now succeed. Your competitor may well learn how to make your failures work. Obviously you don't want to be stupid here, but many a good initiative has been destroyed by the "we have tried that" argument.
Start with a low-risk area when you begin to experiment with a system.
Again yes and no. If you are talking about the whole system, yes, but normally complex issues are immediate and failure is high risk. The experiment is low risk (the nature of safe-fail is such that you can afford to fail) but the problem area may well be high risk. In my experience complexity-based strategies work much better in these situations.
Design an experiment that can be measured. That is, know what the success and failure indicators of each experiment are.
Change "measure" to "monitor" and I can agree with it. The second sentence I would delete.
Don't be afraid – did I mention that already?
Cool, fully endorse.
Try doing multiple experiments on the same system – even at the same time. Some will work, some will fail – good. Shut down the ones that fail and create variations on the ones that work.
Introduce dissent. Maximize diversity in the experiment design process by getting as many inputs as possible.
In the main I agree, but see the above process. I generally don't like the failure and success words, as they seem inappropriate to probes.
Learn from the results of other people's experiments.
Yep, but remember your context is different.
Teach other people the results of your experiments.
Yep, but remember your context is different.