Differences that make a difference

February 27, 2024

One of the critical approaches we developed with narrative, initially in workshops and then at scale with SenseMaker®, was to take people through parallel processes of interpretation and then get the two groups to compare the results.  That work is three decades old in some of its forms, and the underlying principles have stood the test of time.  Humans respond to anomalies. Indeed, the eye has a tiny region that shifts into highly focused attention, against a blurred background, when something different happens.  So, creating anomalies is critical to any sense-making, a point I have made several times over the last few weeks in varying forms.

But it is not enough to create an anomaly; the process by which it is triggered is also essential, especially when you are dealing with power.  And the nature of the anomaly must be explicit and, ideally, not 'explainable away'.

Some of our early work included generating contextual archetypes (for example, managers' and employees' archetypes created in parallel processes), showing the results to both groups and asking each why they saw things differently. This article from my days at IBM provides some more background to this approach, and a more recent post-IBM article can be found here.  Archetypes are a form of symbolism, and if we set aside the abuse by Jung and Campbell, who tried to create universal and limiting categories, they provide resilient and oblique ways to address complex issues.  They allow people to have conversations through the medium of abstraction, reducing stress and increasing the effectiveness of what follows.  They also create symbolic languages that enable difficult conversations without getting too personal.  It is an aspect of our work that has been in the background for a while, and I want to bring it more into the foreground and extend it into broader semiotic approaches.

As you can see if you read the articles, the process of archetype creation is workshop-based. As we developed SenseMaker®, we could do similar things at scale, and with less risk of facilitator bias, using people's experiences reported in narrative or observational forms.  We get people to interpret their experiences using high-abstraction metadata (and we can use archetypes for this, by the way). Then we only look at stories that provide an explanans for the statistical patterns that emerge from people-empowered self-interpretation.  This avoids people being triggered or primed by the first stories they see that match what they want to hear.  If we take a cluster of the narratives/observations signified (our word for interpreted or indexed) in a similar way, present it to other groups, and ask them to interpret the material, then they have gone through the same process; there is no expert who can be dismissed, no algorithm that can be blamed: they did it, and now they have to think about the consequences.  We call this descriptive self-awareness, a principle that applies to a much wider body of our work.  Create tensions and differences that make people aware of difference; don't drive the point home with the proverbial sledgehammer.

I made the point forcibly in my last post that allowing people to interpret their own experiences is vital on both practical and ethical grounds.  This was driven home to me in a conversation with a network member that started yesterday and carried over into this morning.  They argued that AI, specifically generative AI, could identify and cluster stories to create a 'living map' that would allow the 'more like this, fewer like that' question to be asked.  This is problematic on several levels:

  1. It assumes that the meaning is encapsulated in the story, when all our work has demonstrated that different people interpret the same story differently, depending on their history and their perception of the current context.  Those different histories form the interpreter's 'training data set' (which differs significantly from the term's IT meaning), but they are an entrainment pattern nonetheless.  So, any AI clustering will reflect its own training and probabilistic models.
  2. When executives are presented with unpalatable narratives, one issue is that they explain them away; AI allows this to happen at scale, and with pseudo-objectivity.
  3. You don’t get the parallel process of interpretation, which is vital to changing behaviour. Instead, both parties are delegating interpretation to the magical black box.
  4. The collection of narratives will not have the diversity needed for sense-making.  If you cluster based on self-interpretation, people often look at the cluster and wonder why these very different things have been interpreted in the same way.  With SenseMaker®, you can then drill down into more thorough analysis to see if there are patterns in how different groups interpret things.  Either way, you stimulate curiosity and make people think about different things.  AI clustering, a priori, will cluster things based on their content, and content is not meaning.
  5. You lose the capacity to iterate interpretations meaningfully.  One of our options for the coming QuickSenses is to present the results to your employees in a MassSense and gather scenarios and assessment stories at scale to check meaning.  You may find (to return to the banner picture) that the real value is in the black, not the red.
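The contrast in point 4 can be sketched with a toy example. Everything here is hypothetical (invented stories and signifiers), and a simple bag-of-words similarity stands in for whatever embedding an AI clusterer would actually use; it is a sketch of the principle, not of any real system:

```python
# Toy illustration: content-based clustering groups stories by their wording,
# while the tellers' own signification (self-interpretation metadata) can
# disagree completely. Data and field names are invented for the example.
from collections import Counter
import math


def cosine(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two texts (0.0 to 1.0)."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = (math.sqrt(sum(v * v for v in ca.values()))
            * math.sqrt(sum(v * v for v in cb.values())))
    return dot / norm if norm else 0.0


# Two identically worded stories, signified very differently by their tellers
stories = [
    {"text": "My manager changed the deadline again without asking",
     "signified_as": "routine flexibility"},
    {"text": "My manager changed the deadline again without asking",
     "signified_as": "loss of control"},
]

# Content similarity is maximal, so a content-based clusterer merges them...
print(round(cosine(stories[0]["text"], stories[1]["text"]), 2))  # prints 1.0

# ...even though the human interpretations diverge entirely
print(stories[0]["signified_as"] == stories[1]["signified_as"])  # prints False
```

The same divergence runs the other way: two stories with no words in common can be signified identically by their tellers, and a content-based clusterer would keep them apart.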

Don't get me wrong, AI has value for anomaly spotting and synthesis, but its use for primary meaning is problematic.   The above processes work with abductive reasoning; AI is inductive by nature.   This is a point I have been making strongly in my pre- and post-Christmas posts.

I should make a related point here – we focus on clusters of stories based on self-interpretation, not on a single structured story.  This is an essential difference from techniques such as Most Significant Change, or some approaches to Positive Deviance (not all, by any means), which create a competitive environment to tell the best story.  Many story consultants also like to discover and retell powerful stories, which I find problematic, although I can see it satisfies ego needs.  The dangers of expert interpretation are well-known and can affect even anthropologists.  It is worth reading this paper to see the danger, but be aware that it is controversial, and I am not endorsing Freeman's claims, just pointing to an issue.

Humans don't make sense through single stories or narrative forms (or when they do, it leads to tyranny); it is from the fragmented patterns of day-to-day experience that meaning-making emerges, and change comes from spotting, or attempting to trigger, anomalies in those patterns.  AI can assist us in that, but it cannot replace the essentially human qualities involved.

A ‘living’ map created by AI is more likely to represent the undead, with no further evolution, just degeneration.


Another danger with algorithmic interpretation (generative or otherwise) of narrative is that it plays fast and loose with the truth.  Bad actors exploit new technology faster than good ones, and pleas for 'responsible' use are naïve in the extreme, and dangerously so.   We are entering a period in which no one can trust any information unless they know its source.   That is one area of particular focus for CynefinCo in society-level work, and it builds on the work of the Ponte project we ran for the EU.  Others were trying to algorithmically determine what was true in a world of disinformation; we focused on getting the input right.

It is a constant source of amazement that the IT community which promulgated the phrase 'garbage in, garbage out' still doesn't realise its implications.

The banner picture is cropped from an original used under the Unsplash+ license, and the street art of the opening picture is by Nick Fewings, obtained from Unsplash.   Credit also to Gregory Bateson for the phrase that acts as the title of this post.   There is a whole literature on whether he actually said it and, assuming he did, what he meant by it; I may write on that at some stage in the future.  For the moment, I am using it in the sense that the different ways we use the same information provide the necessary gradient from which meaning can emerge, something that the homogenisation of common interpretation too quickly destroys.



