One of the critical approaches we developed with narrative, initially in workshops and then at scale with SenseMaker®, was to take people through parallel processes of interpretation and then get the two groups to compare the results. That work is three decades old in some forms, and the underlying principles have stood the test of time. Humans respond to anomalies. Indeed, the eye has a tiny section that shifts into highly focused attention against a blurred background when something different happens. So creating anomalies is critical to any sense-making, a point I have made several times, in varying forms, over the last few weeks.
But it is not enough to create an anomaly; the process by which it is triggered is also essential, especially when you are dealing with power. And the nature of the anomaly must be explicit and, ideally, not ‘explainable away’.
Some of our early work included generating contextual archetypes (for example, managers’ and employees’ archetypes created in parallel processes), showing the results to both groups, and asking each to explain to the other why they saw things differently. This article from my days at IBM provides some more background on the approach, and a more recent post-IBM article can be found here. Archetypes are a form of symbolism, and if we set aside the abuse by Jung and Campbell in trying to create universal and limiting categories, they provide resilient and oblique ways to address complex issues. They allow people to have conversations through the medium of abstraction, reducing stress and increasing the effectiveness of what follows. They also create symbolic languages that enable difficult conversations without getting too personal. It is an aspect of our work that has been in the background for a while, and I want to bring it more into the foreground and extend it into broader semiotic approaches.
As you can see if you read the articles, the process of archetype creation is workshop-based. As we developed SenseMaker®, we could do similar things at scale, and with less risk of facilitator bias, using people’s experiences reported in narrative or observational forms. We get people to interpret their experiences using high-abstraction metadata (and we can use archetypes for this, by the way). Then we look only at stories that provide an explanans for the statistical patterns that come from people-empowered self-interpretation. This avoids people being triggered or primed by the first stories they see that match what they want to hear. If we take a cluster of narratives/observations that have been signified (our word for interpreted or indexed) in a similar way, present it to other groups, and ask them to interpret the material, then they have gone through the same process: there is no expert who can be dismissed, no algorithm that can be blamed; they did it, and now they have to think about the consequences. We call this descriptive self-awareness, a principle that applies to a much wider body of our work. Create tensions and differences that make people aware of difference; don’t drive the point home with the proverbial sledgehammer.
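To make the mechanics concrete, here is a minimal, hypothetical sketch in Python of the clustering step: each story fragment carries the coordinates its teller assigned against a few high-abstraction signifiers, the clustering runs on those self-interpretations alone, and the raw narratives in each cluster are then handed to another group to read and interpret. The data, the signifiers, and the use of k-means are illustrative assumptions for this sketch, not a description of how SenseMaker® actually works.

```python
# Hypothetical sketch: cluster stories on their tellers' own signification,
# then surface the narratives in each cluster for group interpretation.
import numpy as np
from sklearn.cluster import KMeans

# Illustrative data: (story text, teller's signifier coordinates in [0, 1]).
stories = [
    ("Nobody asked us before the reorganisation was announced.", [0.9, 0.1, 0.2]),
    ("My manager backed me when the project slipped.",           [0.2, 0.8, 0.7]),
    ("We only hear about decisions after they are made.",        [0.8, 0.2, 0.1]),
    ("The team worked the weekend and no one noticed.",          [0.7, 0.3, 0.3]),
    ("I was trusted to fix the customer issue my own way.",      [0.1, 0.9, 0.8]),
]

# Cluster on the self-assigned coordinates only, never on expert coding.
X = np.array([coords for _, coords in stories])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Present each cluster's narratives back to a different group, without the
# statistics, and ask them to interpret the material for themselves.
for cluster_id in range(km.n_clusters):
    print(f"\nCluster {cluster_id} — stories for group interpretation:")
    for (text, _), label in zip(stories, km.labels_):
        if label == cluster_id:
            print(" -", text)
```

The point of the sketch is the division of labour: the statistics only group material that people have already interpreted in similar ways; the meaning-making itself stays with the groups who read the stories.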
I made the point forcibly in my last post that allowing people to interpret their own experiences is vital on both practical and ethical grounds. This was driven home to me in a conversation with a network member that started yesterday and ran over into this morning. They argued that AI, specifically generative AI, could identify and cluster stories to create a ‘living map’ that would allow the ‘more like this, fewer like that’ question to be asked. This is problematic on several levels.
Don’t get me wrong, AI has value for anomaly spotting and synthesis, but its use for primary meaning is problematic. The processes above work with abductive reasoning; AI is inductive by nature. This is a point I have been making strongly in my pre- and post-Christmas posts.
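As a deliberately modest illustration of that assistive role, the hypothetical Python sketch below flags stories whose self-assigned signifier coordinates sit unusually far from the rest of the data, and then routes them to people rather than interpreting them itself. The data and the mean-plus-one-standard-deviation threshold are assumptions made purely for the example.

```python
# Hypothetical sketch of AI as assistant: flag candidate anomalies in the
# self-signified data and hand them to humans for interpretation.
import numpy as np

# Illustrative self-assigned signifier coordinates for four stories.
coords = np.array([
    [0.9, 0.1, 0.2],
    [0.8, 0.2, 0.1],
    [0.7, 0.3, 0.3],
    [0.1, 0.9, 0.8],   # sits far from the others
])

centre = coords.mean(axis=0)
distances = np.linalg.norm(coords - centre, axis=1)
threshold = distances.mean() + distances.std()

for i, d in enumerate(distances):
    if d > threshold:
        print(f"Story {i} is a candidate anomaly (distance {d:.2f}); "
              "route it to a group for human interpretation.")
```

The algorithm only draws attention to a difference; deciding what the difference means remains a human act.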
I should make a related point here: we focus on clusters of stories based on self-interpretation, not on a single structured story. This is an essential difference from techniques such as Most Significant Change or some approaches to Positive Deviance (not all, by any means), which create a competitive environment to tell the best story. Many story consultants also like to discover and retell powerful stories, which I find problematic, although I can see that it satisfies ego needs. The dangers of expert interpretation are well known and can affect even anthropologists. It is worth reading this paper to see the danger, but be aware that it is controversial; I am not endorsing Freeman’s claims, just pointing to an issue.
Humans don’t make sense of the world through single stories or narrative forms (or when they do, it leads to tyranny); it is from the fragmented patterns of day-to-day experience that meaning-making emerges, and change comes from spotting, or attempting to trigger, anomalies in those patterns. AI can assist us in that, but it cannot replace the essentially human qualities involved.
A ‘living’ map created by AI is more likely to represent the undead, with no further evolution, just degeneration.
Postscript
Another danger with algorithmic interpretation (generative or otherwise) of narrative is that it plays fast and loose with the truth. Bad actors exploit new technology faster than good ones, and pleas for ‘responsible’ use are naïve in the extreme, and dangerously so. We are entering a period in which no one can trust any information unless they know the source. That is one area of particular focus for CynefinCo in society-level work, and it builds on the Ponte project we ran for the EU. Others were trying to determine algorithmically what was true in a world of disinformation; we focused on getting the input right.
It is a constant source of amazement that the IT community that promulgated the phrase ‘garbage in, garbage out’ still doesn’t realise its implications.
The banner picture is cropped from an original used under the Unsplash+ license, and the opening picture of street art is by Nick Fewings and was obtained from Unsplash. Credit also to Gregory Bateson for the phrase that acts as the title of this post. There is a whole literature on whether he said it and, assuming he did, what he meant by it; I may write on that at some stage in the future. For the moment, I am using it in the sense that the different ways we use the same information provide the necessary gradient from which meaning can emerge, something that the homogenisation of common interpretation too quickly destroys.