This posting continues the theme of the past two days, so you might want to catch up first! I’ve been talking about research methods and, by implication, decision approaches. Yesterday I extended that to new thinking about scenario planning and its applicability. Today I want to return to the research theme and also provide a general summary of the different components. I want to start off with a question which comes up fairly frequently, then move on to a wider summary and conclude on meaning.
One of the key aspects of research in the domains of the possible and the plausible is the use of narrative, and (in my view) the self-signification of that narrative by those who create it. The person who told you the story knows what it means far better than an “expert”, and certainly better than some semantic tool, even one with the sophistication of sentiment analysis. It’s not unusual for someone, when they get the results back, to phone up or email and say something along the lines of “I’ve just looked at the data and people are not indexing their stories properly”. A variation of this (which I generally regard as indicating more intelligence) is “I can’t see why they have indexed/signified their story in this way”. I often talk about self-signification as adding layers of meaning for good reason. The content of the narrative is only a part of the meaning that the contributor can supply; the way they interpret it is also key, not only to provide quantitative data and objectivity in an abductive world, but also to empower the storyteller over the expert.
If you think they have indexed their story wrongly, then that simply means you are seeing the story through different eyes; you are getting valuable data to the effect that there are contradictions between your view of the world as a researcher and those of your subjects. Some people have the maturity to handle this; many don’t. For the storyteller, meaning between the layers is a priori coherent; for the third-party observer/researcher this may not be the case, and it’s a weak signal, or an indicator of the need to go further. The other thing that a lot of people don’t get first time is that for meaning you are looking at the patterns across many stories. You don’t dance on the head of a pin, i.e. you don’t get hung up on a single story, as that tends towards reductionist or aggregative thinking.
I put this table together to help bring together the various threads. I confess to some laziness here in that I took a screenshot of the Keynote slide rather than going through the tedium of an HTML table!
I’ve brought in three of the Cynefin domains here to help provide some linkage with other aspects of Cognitive Edge work and matched them to the Probable-Possible-Plausible spectrum of the earlier Boisot derivative model. Now I should make it clear that I am doing this to fit within the confines of a table. I am going to work on a better picture which shows these as overlapping and gradated, so please accept the table for the moment as an approximation.
The first four rows were described in my first posting on this subject. I am now dealing with how we get meaning in research. In the Simple domain, traditional questionnaires using Likert scales are fine. It’s also where we see blog-type story databases which depend on starring the stories you like, adding keywords, etc. Nothing, to be honest, that you can’t do in Facebook; such approaches have utility for storage, but not really for research, and only partially for knowledge management. The point here is that we can understand the range of meaning within the material, as the subjects it addresses work within stable constraint structures. We also see some use of narrative here, either through the provision of free-text boxes or through pure narrative capture with an academic working through the material to tag it (the assumption being that the meaning is in the content, which is generally the case in Simple, to some extent in Complicated, but not in Complex).
As we move into the Complicated domain we can still use hypotheses, but there is an advantage to using the disguised hypothesis technique that we developed (although I think it had been used before) of opposing negatives. To take an example: instead of asking “Does your manager consult you on a regular basis?” and gathering the response on a numerical or other scale, you ask the respondent to place their story on a scale between the hypothesis not present and the hypothesis taken to excess (this is Aristotle’s Golden Mean, as Peter Standbridge pointed out several years ago). That scale (and this is a real case) has the labels “Mechanical indifference” and “Loving everyone in a big group hug”. That means that if the hypothesis is correct, or the value present, the distribution of stories will be a bell curve around the centre of the scale.
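The idea of a bell curve around the Golden Mean can be sketched in a few lines of code. This is not SenseMaker® itself, just a minimal illustration: respondents place each story on a 0–100 scale between the two negative extremes from the example above, and we check how closely the placements cluster around the midpoint. The placement values are invented for illustration.

```python
from statistics import mean, stdev

# The two ends of the polarity, both undesirable (opposing negatives):
LEFT = "Mechanical indifference"               # hypothesis absent
RIGHT = "Loving everyone in a big group hug"   # hypothesis taken to excess

# Invented story placements on a 0-100 scale (0 = LEFT, 100 = RIGHT)
placements = [42, 55, 48, 61, 39, 52, 47, 58, 44, 50]

def centre_tendency(values, low=0, high=100):
    """Crude check: a mean near the midpoint with modest spread is
    consistent with a bell curve around the Golden Mean."""
    midpoint = (low + high) / 2
    return abs(mean(values) - midpoint), stdev(values)

offset, spread = centre_tendency(placements)
print(f"mean offset from centre: {offset:.1f}, spread: {spread:.1f}")
```

In a real project one would of course look at the full distribution shape across hundreds of stories, not just a mean; this sketch only shows the structural point that neither end of the scale is “good”.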
Disguised hypothesis signification, or polarities as we call them, are best used where the supporting narrative is available for interpretation. This allows their use in research, but also in monitoring and impact measurement. However, they are still hypothesis based. As we move to the right of the model we move from the inductive to the abductive, from a range of possibilities that can be bounded by our hypotheses to the domain of plausibilities, where hypotheses are contra-indicated as they reduce the range of our vision. For this domain we still need self-signification, but the signification structure needs to work at a high level of ambiguity. I have illustrated one of these in the triad at the top of the page. This comes from anthropology and is part of a set of signifiers we developed, based on anthropology, which have been used in many projects since. The labels relate to theory in the field, but they are balanced: no preference is indicated, and people seem to spend more time deciding where to place something. We do, by the way, sometimes add a top label to a polarity to make it a bad-good-bad variation, but that is still hypothesis based and should not be confused with the true and proper use of a triad.
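One common way to record a triad placement in data terms is as barycentric coordinates: three non-negative weights, one per label, summing to one, with no pole privileged as “good” or “bad”. The sketch below is a hedged illustration of that representation only; the labels are placeholders, not the actual anthropology-derived set the post refers to.

```python
from typing import NamedTuple

class TriadPlacement(NamedTuple):
    """Barycentric position inside a triad: three balanced weights."""
    a: float  # weight towards first label
    b: float  # weight towards second label
    c: float  # weight towards third label

def normalise(a: float, b: float, c: float) -> TriadPlacement:
    """Scale raw weights so they sum to 1. Unlike a polarity, none of
    the three poles is an extreme to avoid; the placement is a balance."""
    total = a + b + c
    if total <= 0:
        raise ValueError("at least one weight must be positive")
    return TriadPlacement(a / total, b / total, c / total)

p = normalise(2, 1, 1)
print(p)  # TriadPlacement(a=0.5, b=0.25, c=0.25)
```

The design point is that all three dimensions are captured in one gesture, which is part of why respondents spend more time deciding where to place a story than they do on a two-ended scale.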
This allows us to use visualisation based on the quantitative data provided. We still get correlations, but a correlation does not imply simple causation; it indicates a possible linkage (remember, abduction is the logic of unexpected connections). The quantitative data allows us to handle much larger volumes of increasingly fragmented data, which is necessary in complex domains with high levels of uncertainty. We can’t have a partial scan; we need the broadest possible scan. That also has the advantage that the material is context independent, so it has more utility over longer time periods.
That is a very brief summary, and it will make more sense to members of the network who have been through the training. There is more material in this paper, although I need to update it with the material from this series of posts.
The key thing is to realise that meaning is not in the content; meaning is an emergent property of content, signifiers, and interpretation over time. We are talking about systems which are dispositional but do not have causality. The problem we have is that a generation of researchers and IT professionals have grown up with the content heresy, the belief that meaning is contained by the text. Sorry guys, it’s a little more complex than that.
Just to make it clear: while most of our methods are open source, the methods of self-signification outlined here are patent pending, so they are not open source and are a part of SenseMaker®. They can be used as a part of a SenseMaker® project by any member of the network.
Cognitive Edge Ltd. & Cognitive Edge Pte. trading as The Cynefin Company and The Cynefin Centre.
© COPYRIGHT 2023