Imperium cogito

August 14, 2008

I didn’t like the title of my last post, so I changed it to The Emperor’s Shadow, as the various methods and approaches I advocated there work with the natural self-organising capacity of people, albeit with top-down stimulation. In effect I was advocating (as I have for a decade plus) that the shadow organisation is far more powerful and effective in knowledge management than the formal organisation. Indeed this is how I opened this sequence: by pointing out that centralisation and formalisation of the KM function in Government was the wrong thing to do. In my final post I will talk about the realpolitik of the subject. For the moment I want to complete my list of approaches that I think show more promise than traditional ones. The general theme of this final set is the recognition of cognitive reality, and it has taken several days to put together.

Most other management disciplines take an information-processing model of the human brain, arguing that what is required is for the right knowledge/information to be in the right place at the right time; in effect an assumption that seeing the data will result in attention and action. The assumption is that cognition is, or should be, a structured process leading to “correct” decisions. Thankfully the evolution of humanity has avoided the autism of information processing, and we need to reflect those evolutionary realities in our systems and practices, rather than trying to constrain humans into the ordered and linear approaches of information processing. This post will address some of the ways in which we can take a naturalising approach to a range of problems, and will in the main look at technology-based approaches: I reference my declaration of commercial interest when I started this series. The thesis is to use computers to process large volumes of information, but let humans be responsible for the creation of meaning, both when data is captured and when the wider patterns are interpreted.

Now this is a large and developing area, so I want to start off with some statements that will form the basis for the rest of the post. I have blogged on most of these before with supporting evidence, so for the sake of brevity I will just list them:

  • Humans make decisions based on a partial datascan (typically 5% or less) which we then match against thousands of fragmented patterns, performing a first-fit pattern match; the most frequently occurring patterns are nearer the surface, so they get used first. In effect we satisfice, we don’t optimise (see the sketch after this list).
  • The source of those patterns is personal experience, plus vicarious patterns transferred through narrative; both fragmented in nature. Negative patterns, in particular tolerated failure, imprint faster; avoidance of failure has an evolutionary advantage over imitation of success.
  • Cognitive bias is inevitable if you start with the original data. Read a few documents, conduct more than a couple of interviews, and you activate a limited set of patterns through which you filter all future material; unless you suffer some type of cognitive shock (otherwise known as reality catching up).
  • The deeply rooted narrative patterns of the culture in which we grew up form a profound filtering mechanism which can lead to stereotyping and at times demonisation; to the point where we literally don’t see what is in front of our eyes, as we have no mechanism to comprehend it.
  • Expert knowledge cannot be structured into simple decision rules or be adequately represented in document form.
  • The various forms of semantic analysis and interpretation, while having high utility, are necessarily limited by the fact that there are no universal deep structures in language, which is fluid, changing in meaning over time and in context.
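To make the first of those statements concrete, here is a minimal sketch in Python of first-fit matching against a partial datascan. Everything in it (the feature-set representation of patterns, the 5% sample, the overlap threshold) is my own illustrative assumption, not a claim about how cognition is actually implemented:

```python
import random

def first_fit_match(observed, patterns, scan_fraction=0.05, threshold=0.3):
    """Satisfice, don't optimise: scan a small sample of the available
    data and return the first previously-used pattern that fits well
    enough, rather than scoring every pattern and taking the best."""
    k = max(1, int(len(observed) * scan_fraction))
    sample = set(random.sample(sorted(observed), k))    # partial datascan
    for pattern in patterns:         # ordered: most frequently used first
        overlap = len(sample & pattern) / len(sample)   # crude fit measure
        if overlap >= threshold:     # good enough, so act on it
            return pattern
    return None                      # nothing fits: reality catching up

# Toy usage: patterns as feature sets, most habitual first.
patterns = [{"deadline", "blame", "email"}, {"budget", "cut", "review"}]
observed = {"email", "deadline", "meeting", "slides", "blame", "coffee"}
print(first_fit_match(observed, patterns))
```

Note that the loop stops at the first adequate match: an optimiser would score every pattern and take the maximum, which is precisely what the statement says we do not do.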

There are some profound challenges to traditional knowledge management in those statements; the main implication is for the way we build knowledge databases of whatever nature. When we built SenseMaker™ we wanted to reflect the reality of human decision making, so we focused on fragments, signified.

Now that needs a bit of explanation (but you can skip this paragraph). By fragments I mean anything that helps me make sense of a situation. It can be a document, an anecdote, a URL reference, a video, a digital recording or whatever. Signification means, as far as is possible, the addition of layers of meaning to the original fragment. A key concept here is that the meaning is not necessarily contained in the content of the fragment, but in the context created by the signifier. We decided to call the process signification to distinguish it from indexing or tagging. A simple and crude way of looking at this is to see indexing (allocating something to a category) as assuming an ordered condition which can be constrained to the indexing categories without loss of meaning. In contrast, tagging assumes a chaotic condition in which the use of tags is unconstrained, and then relies on volume to allow statistical techniques to come into play (think of tag clouds as an example). Signifiers represent a semi-structured approach to tagging; i.e. a complex system which requires light constraints to be in play, with degrees of freedom on signification, with the result that meaning can emerge. That’s a very quick summary, but in essence with SenseMaker™ we create a limited structure (often geometrical in nature) which is ambiguous by design, but has enough structure to create a common grammar of meaning between signifier and interpreter.

I have spent a bit of time explaining that, but if you don’t get it, don’t panic. Just imagine that I now have a very large database of fragmented material, in which each fragment has been signified into an explicit structure at the point of its creation. The signifiers form metadata which can be interpreted and searched for patterns without examining the original content of the fragment, until that is required to provide an explanation of a metadata pattern. This significantly overcomes cognitive bias and allows large volumes of material to be processed. Critically it also allows us to hold the fragments in their original form, possibly in different languages. Aside from this being a valuable source of knowledge, it is also a means of detecting weak signals and determining action. With that in place I can look at three areas of application (complementing the list in The Emperor’s Shadow).
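To make the metadata-first idea concrete, here is a minimal sketch of a signified fragment and a query that never touches the content. The triad-style signifier (a position on a triangle, expressed as three weights) and all the field names are my illustrative assumptions, not SenseMaker™’s actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Fragment:
    """A fragment plus the layers of meaning added at capture time.
    The content is stored untouched (any language, any medium);
    the signifiers carry the context the signifier chose to add."""
    content_ref: str                 # pointer to the raw item (text, video, URL...)
    language: str
    triads: dict = field(default_factory=dict)   # e.g. {"tone": (0.7, 0.2, 0.1)}
    free_tags: list = field(default_factory=list)

def query(fragments, triad, vertex, min_weight=0.6):
    """Search the signifier metadata alone; the original content is
    only fetched later, to explain a pattern the metadata reveals."""
    return [f for f in fragments
            if f.triads.get(triad, (0, 0, 0))[vertex] >= min_weight]

# Toy usage: find fragments strongly signified towards the first
# vertex of a hypothetical "tone" triad (say: hope / fear / apathy).
db = [Fragment("story-001.txt", "fa", {"tone": (0.8, 0.1, 0.1)}),
      Fragment("clip-002.mp4", "en", {"tone": (0.2, 0.7, 0.1)})]
print([f.content_ref for f in query(db, "tone", 0)])
```

The point of the design is that the raw content stays in its original form and language, while all the searching happens over the layers of meaning added at the point of capture.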


Sensing patterns, weak signal detection, determining action, overcoming bias

[Image: fitness landscape visualisation]

I want to discuss this in the context of a visual representation, hence the pretty picture! It’s one I use in a lot of presentations, and you can see it and several others in the Durham Presentation if you want a live performance. Now the dimensions of this three-dimensional model are derived from the signifiers, which allows us to display a fitness landscape. We are using it here (and it is only one of a range of visualisations) to handle a significant volume of fragments. The hollows represent strong patterns where change is unlikely; the more volatile areas represent weaker patterns where change can happen quickly. In this case we are looking at Iranian attitudes to the Middle East, and the hollows represent belief systems that are unlikely to change. This allows us to do several different things:

  • Instead of returning a list of items based on some search criteria, we can represent a significant volume of data in a single visualisation using the metaphor of a landscape. A decision maker can then fly over the model looking for patterns and, having seen one, click on the model to reveal the original content which is creating that part of the model. If they are looking at a stable and negative pattern, they can seek out a more volatile and potentially favourable area and then seek to make that stronger, sucking away energy from the negative pattern.
  • The goal here is the disintermediation of any filters between the decision maker and raw intelligence (customer stories, employees’ pictures or whatever). The decision maker does not receive a report, or conflicting reports from different agencies or departments; instead they see a cluster of original fragments, often narrative in nature.
  • In this particular case we froze the actual shape and then stepped it forward in three-month periods, showing the stories that did not fit the model as yellow dots. Here you will see a new pattern forming in the bottom right: eight fragmented stories from over 20K which represent a new attractor in the space. That’s called weak signal detection; spotting things early means you can exploit the good patterns and disrupt the bad ones at a much lower energy cost than if you wait for the pattern to become strong enough for conventional sensing devices to pick it up.
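A minimal sketch of both mechanics, under my own assumptions: two signifier dimensions become a density surface whose dense clusters are the hollows, and new fragments landing where little sat before are flagged as candidate weak signals. This is generic kernel density estimation, not the actual SenseMaker™ rendering:

```python
import numpy as np
from scipy.stats import gaussian_kde

def landscape(points, grid=100):
    """Turn 2-D signifier coordinates into a fitness landscape:
    dense clusters of fragments become deep hollows (stable patterns)."""
    kde = gaussian_kde(points.T)
    xs, ys = np.meshgrid(np.linspace(0, 1, grid), np.linspace(0, 1, grid))
    height = -kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(grid, grid)
    return xs, ys, height            # high density -> deep hollow

def weak_signals(old_points, new_points, quantile=0.05):
    """Flag new fragments falling where almost nothing sat before:
    a handful of such outliers may be a forming attractor."""
    kde = gaussian_kde(old_points.T)
    cutoff = np.quantile(kde(old_points.T), quantile)
    return new_points[kde(new_points.T) < cutoff]

# Toy usage: a body of signified fragments (scaled down from the 20K
# in the example), then a new quarter's stories stepped forward.
old = np.random.rand(2000, 2)
new = np.random.rand(300, 2)
print(len(weak_signals(old, new)))   # the "yellow dots"
```

Freezing the surface and stepping the data forward, as described above, then amounts to holding the old density model fixed while scoring each new batch of fragments against it.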

Imagine (as in one project in Canada at the moment) that each of those fragments is a scenario about what could happen, gathered from a broad range of staff, and potentially the civilian population of a province. The model then allows us to see patterns in people’s expectations about the future; sense the dominant patterns and see where change could be achieved. In issues like climate change, where we need to find the small weaknesses in a consumption culture that would allow for change, this can be critical. Imagine you are the marketing director of a company: that representation is on your screen every morning, with the raw material being stories told by customers as they engage with your product or service; the hollows represent the real brand values, the volatile areas the opportunities and threats of change. Imagine you are the HR director of a multi-national company, and now the stories are from a distributed workforce, captured in their native language; you can go directly to those stories and read the two or three which you need to pay attention to today. I think I’ve made the point. By using the power of technology to process large data volumes, but using humans to signify and interpret, I combine the best features of the radically different information-processing intelligence of silicon with the pattern-based intelligence of carbon.


Deployment of expert knowledge & distributed cognition

You know the way it normally works. You get together a group of experts to look at a situation. They have a few workshops, gather a bunch of material and write a report. Everyone reads it and is impressed; then gradually, over time, memory fades as the report moulders in a drawer somewhere. Maybe you form them into a community of practice/interest/whatever and they have some interesting conversations and create some more documents; that’s a bit more dynamic, although the consistent sustainability of CoPs is an issue. Either way, a year or so later something bad happens; in the investigation that follows you discover that the report in the drawer contains material that could have been used. Maybe an individual or two in that CoP wrote a warning email and is now feted as a neglected hero who spoke unto power but was ignored.

Get the picture? Now what we really need is a deployable model. Let’s look at an alternative, using what we know now about signified fragments and using complexity theory. Now you get those experts together again, and if they want to write a paper they can, but it’s not what you pay them for. What you want is the raw material from which they abstracted the paper. So every time they find something, they treat it as a fragment and signify it. You end up with a database of several thousand fragments signified by deep experts which you can search and interrogate in different ways. Such a database has value over a longer period of time than the report, as we can assemble the material in different ways, rediscover it in different contexts, and recombine it with new incoming data to sense or see connecting patterns.

However we don’t stop there: if I have a few thousand fragments signified by said experts, then I can create a Classifier which uses the original data as training material. Number/licence plate recognition from roadside cameras is possible because the software takes a training data set of different representations of the alphabet, and uses that to recognise the letters when they are presented again. That’s how a Classifier works: it takes the way that group of experts signified the training data set and then uses that to signify new incoming material the way the experts would have signified it if they had been present. Now you can deploy experts in real time. Not only that, I can deploy different groups of experts on the same data in real time to get contradictory perspectives on the current state; a critical need if the system is complex.
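Here is a minimal sketch of that training idea, using an off-the-shelf text classifier as a stand-in; the actual Classifier is SenseMaker™’s own, and the fragments, labels and single categorical signifier below are invented for illustration:

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Training material: fragments as ONE expert group signified them.
# (All texts and labels here are invented placeholders.)
texts = ["supply route disrupted near the border",
         "new market entrant undercutting prices",
         "staff morale falling in the regional office",
         "competitor recalls an entire product line"]
labels = ["logistics", "competition", "people", "competition"]

# The classifier learns how that group signified the training set,
# then signifies new incoming fragments as they would have done.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["rival cuts prices again"]))  # experts, in real time

# Training a second classifier on a different group's signification
# of the SAME data gives deliberately contradictory perspectives.
```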

Let’s go on a step from that. We have a difficult or intractable problem; we need to bring lots of knowledge into play, but we can’t necessarily reveal the problem. Right: we make it into fragments, different fragments representing different aspects of the problem. Those fragments can be stories, text, a video or whatever. We then distribute each of those fragments to different panels (an old marketing concept of proven effectiveness) who signify the fragment from their perspective, and we pull the different results back in a summary graphic with drill-down capability. This is the wisdom of crowds, or as I prefer to call it, distributed cognition. Now I am just touching the surface here (and it took us a non-trivial investment over three years to get here) of what is possible; I haven’t gone into model deployment and monitoring, but I will leave that for another day. What I am saying, repeating the theme of the last section, is that using technology with fine-granularity information objects (fragments) allows me to make better use of human intelligence.
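A minimal sketch of the pull-back step, with invented panel names, fragment IDs and a 0–1 signifier scale; the summary graphic and drill-down are left out, and the spread across panels is my own crude stand-in for surfacing where perspectives contradict:

```python
from collections import defaultdict
from statistics import mean, stdev

def aggregate(panel_results):
    """Pull per-panel significations of the same fragments back into
    a summary: mean position per signifier, with spread as a crude
    measure of disagreement between the panels' perspectives."""
    by_key = defaultdict(list)
    for panel, fragment_id, signifier, value in panel_results:
        by_key[(fragment_id, signifier)].append(value)
    return {k: (mean(v), stdev(v) if len(v) > 1 else 0.0)
            for k, v in by_key.items()}

# Toy usage: three panels signify the same fragment of the problem
# on a 0-1 "feasibility" scale, each from its own perspective.
results = [("engineers", "frag-7", "feasibility", 0.8),
           ("lawyers",   "frag-7", "feasibility", 0.3),
           ("customers", "frag-7", "feasibility", 0.6)]
print(aggregate(results))   # high spread = panels see it differently
```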

Conceptual awareness

Remember when I started this series (and it has taken over the last two weeks) I quoted Lincoln: “As our case is new, so we must think anew, and act anew. We must disenthrall ourselves, and then we shall save our country.” Now this quote, not taken on board at the time, with massive consequences for the world, remains true and problematic to this day. Thinking and acting anew is not something which comes naturally to humans, and especially not to those in power in both government and industry. In practice they do the same things. I remember in the early days of this work sitting with a group of analysts in Washington. They had a way of working which involved reading incoming data, deciding what it meant, filing it and writing reports (OK, it’s a bit more than that, but not much). They wanted tools which made that process more automatic; they didn’t want to consider doing things differently. This was post-9/11, by the way, so there was no excuse for complacency.

Now that is a major pattern: even in the face of proven failure, thinking differently, other than in the immediate moment of the crisis, is difficult, nigh on impossible. Aside from the natural forces of conservatism and the lack of a catholic attitude (both with a small c), one of the obstacles here is the anti-intellectualism that is all too prevalent, especially in the US and UK. To say that something is conceptual in nature is to dismiss it, when in practice (and I use the word deliberately) new things always start as concepts. If you have the intellect to grasp concepts, and critically transitionary paradox, then you can get ahead of the curve; if you lack intellect, or have had it trained out of you by the mind-numbing platitudes of fail-fail “initiatives” that determine management success, then you will carry on doing the same things again and again, arguing each time (as people are arguing in respect of KM in government in the US at the moment) that past failure was due to lack of resources, the right people etc. Anyone who says “Don’t worry, this time it will be different” has probably had the equivalent of a lobotomy, and we just can’t afford that in the modern age.
