The Aggregative error

December 31, 2011

Everything is simpler than you think and at the same time more complex than you can imagine

Goethe

I’m not the first person working in the area of complexity to use the Goethe quote and I am sure I will not be the last. It captures so much of what this subject is about, but also how hard it is to make some of the mental shifts required. As complexity becomes more popular those shifts are getting easier, but misperception and misrepresentation are now both significant dangers. So as the year ends, and as I prepare for a journey down to Pembrokeshire for a sentimental New Year’s Day walk (readers will suffer that tomorrow), I thought it would be useful to reflect on one of the dangers of using the word ‘simple’ simplistically, namely the aggregative error.

We are all too familiar with the idea that you tackle big problems by breaking them into small ones, solving the small problems we deem manageable and then reassembling them into a solution for the whole. That works brilliantly if the system is complicated, as in any ordered system the whole is the sum of the parts; unfortunately in a complex system this is not the case. The whole is something other than the sum of its parts. Also, how we choose to break a complex system down implies that we already have knowledge of the system itself. That is hard enough at the current moment in time, but if we look at the emergent possibilities of the future it is impossible, and actually dangerous, to take this approach.

A variant on this error, for which to be honest I have little but contempt, is to argue that it’s all too hard, or that we don’t have enough time, so we just do something simple that we can understand. Proponents of this approach, who are thankfully few, argue that it may not be the correct solution, but we don’t have enough time to understand what is going on so there is no alternative. I think the problem here is that their minds are stuck in an analytical mindset in which any attempt at understanding is long, resource intensive and (sic) complicated.

If it’s going to take you time to work out what sort of system it is, then almost by definition it’s complex. Yes, we need simple interventions, but we need to realise that we are dealing with the system as a whole, and that what we do will change the system itself. Now in Cynefin the decision model is Probe-Sense-Respond and the key principle is that we can only gain understanding of the system by interacting with it.

When I move to the operational implementation of Cynefin as a strategic tool, I call the probes safe-to-fail interventions. The idea is that anyone with a coherent theory as to what might work, or which by not working would teach us something of value (failure is often a better short-term target than success), gets given a level of resource sufficient to allow them to run the experiment. A lot of people get this wrong: we end up with one or two experiments at most, and they are generally safe, not safe-to-fail. So here are seven basic principles for these interventions, with an illustrative sketch after the list:

  1. You need multiple safe-to-fail experiments, not just one or two. Five to seven is a good minimum and there is no reason it should not be more
  2. If at least half of them don’t fail you are not pushing the edge enough (you can target managers on failure rates here, by the way)
  3. In general the majority should follow the principle of obliquity, namely they should be attempting to solve issues tangential to the main issue. More on this in a future post; as I have experimented with it, it has become more important
  4. An experiment has to demonstrate coherence. You don’t just throw resource at anything. In tight timescales techniques such as ritual dissent provide this, but with more time some of the analytical approaches used in the complicated domain can be used on aspects of the proposed probe. The point is that it doesn’t have to be right, it has to be coherent
  5. The probes should have different assumptions and may even contradict each other in terms of theory. Indeed, in an ideal world they will. The more scanning range you get in place here the better
  6. You have to put in sensor mechanisms that can detect success, failure, and failure that can become success, at the earliest opportunity. Ideally you build human sensor networks for this in advance of need. Again, something I will be talking a lot about this year.
  7. Any response to success or failure is generally another iteration of a safe-to-fail experiment or experiments. Solutions in a complex system are part of a constantly evolving series of changes. An inability to cope with uncertainty is death.
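Since the principles above amount to a checklist, here is a minimal sketch of how a portfolio of probes might be represented and reviewed against them. It is purely illustrative Python: Probe, Outcome and review are hypothetical names invented for this post, not part of any Cognitive Edge method or software, and the thresholds simply restate principles 1 to 4.

```python
from dataclasses import dataclass
from enum import Enum


class Outcome(Enum):
    PENDING = "pending"   # sensors have not yet reported (principle 6)
    SUCCESS = "success"
    FAILURE = "failure"


@dataclass
class Probe:
    """One safe-to-fail experiment in the portfolio."""
    theory: str             # the coherent theory being tested
    oblique: bool = False   # tangential to the main issue (principle 3)
    coherent: bool = False  # e.g. survived ritual dissent (principle 4)
    outcome: Outcome = Outcome.PENDING


def review(portfolio: list[Probe]) -> list[str]:
    """Check a portfolio of probes against the principles; return warnings."""
    warnings = []
    if len(portfolio) < 5:                                       # principle 1
        warnings.append("Fewer than five probes: run more in parallel.")
    if any(not p.coherent for p in portfolio):                   # principle 4
        warnings.append("Incoherent probe present: only resource coherent ones.")
    if sum(p.oblique for p in portfolio) * 2 <= len(portfolio):  # principle 3
        warnings.append("The majority of probes should be oblique.")
    settled = [p for p in portfolio if p.outcome is not Outcome.PENDING]
    failures = [p for p in settled if p.outcome is Outcome.FAILURE]
    if settled and len(failures) * 2 < len(settled):             # principle 2
        warnings.append("Under half are failing: not pushing the edge enough.")
    return warnings


# Example: a small (deliberately too small) portfolio, reviewed once a
# sensor network starts reporting.
portfolio = [
    Probe("Change shift overlap to reduce queues", oblique=True, coherent=True),
    Probe("Tangential community-story probe", oblique=True, coherent=True),
    Probe("Direct process re-engineering trial", coherent=True),
]
portfolio[0].outcome = Outcome.FAILURE  # reported by a sensor (principle 6)
for warning in review(portfolio):
    print(warning)
```

Principle 7 then closes the loop: responding to any settled outcome means adding or iterating probes, after which the review runs again on the next cycle of the portfolio.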

That is a starting point; more in future posts about implementing a complex system strategy. In the meantime note the key difference here from aggregative approaches: I am not breaking the situation down into parts, I am taking multiple takes at the situation as a whole. It’s a big difference, but it’s too simple for some.
