Beyond Nostradamus

May 24, 2008

John Bordeaux leads the KM working group for the Project on National Security Reform, and he recently broadcast an appeal for advice to the ActKM listserv. Now I need to declare two interests here. Firstly, I have some involvement in the project; secondly, the ideas I have on the subject are either incorporated into SenseMaker™ in its forthcoming 3.0 release or planned for future releases. Significantly, I am writing this while watching a Discovery Channel programme on Nostradamus. It makes the very valid point that it has been easy for people to fit the prophecies to facts after the event, but that to date no one has been able to use them to predict anything. In the context of prophecy we can easily nod along with this; however, I see little difference between the use of such prophecies and the various post-failure Commission reports that similarly enjoy the benefits of hindsight.

Anyway, I came up with seven (those magic numbers again) semi-cryptic bullet points for John, which I share below. I think they apply to any knowledge management application. If you want to give John some advice, well, he reads this blog, so you can add a comment or email him directly.

  • We urgently need to shift from working with chunked documents that seek to summarise material to increasing direct access to fine-granularity raw data in the form of anecdotes, sound files, pictures and so on. The process of chunking, or abstraction, involves a loss of content which may well contain weak signals or subtle clues, and more importantly it makes the material specific to the context of its creation, both in time and socio-culturally. (The first sketch after this list shows what such fine-granularity fragments might look like.)
  • There is a chronic and overwhelming need to shift the emphasis from experts interpreting documents to distributed intelligence, in which fragmented material is self-signified at the point of origin or interpreted from many perspectives. To overcome cognitive bias (which includes cultural bias), patterns and anomalies in the metadata are used to determine which content gets attention; again, see the first sketch after this list. At this level experts provide expertise; at the level of primary interpretation of content they simply see what they expect to see and confirm hypotheses too early (something known in complexity as premature convergence).
  • In any intelligence application, from homeland security to marketing, content capture needs to move from the passive gathering of reflections (after action reviews, open-source trawling) to proactively seeking data in the general field of investigation. Open-source intelligence has value because there is a lot of it and it is generated unintentionally, but there is never enough and it is not focused. There is a key need to shift from monitoring for signs of impending change or events to the continuous stimulation of large populations to provide data in a form which is easier to interpret.
  • Expert knowledge needs to be deployable in search systems independently of those experts being present. Getting a bunch of people to write a report is useful, but then the report is filed. Given that in practice those experts would see the significance of something that the non-expert would ignore, systems that deploy expertise also need to work at a fragmented level. In SenseMaker™ we do that, but I’m not allowed to disclose how for a few months while we get the patents sorted out.
  • We need to manage for serendipity; that’s how you discover new and novel things, by suddenly seeing an unexpected connection. Systems based on pre-given hierarchical taxonomies constrain the way we see the world, and they prevent serendipity. This is again an argument for fine-granularity semi-structured databases and a high use of visualisations, our use of fitness landscapes being one example (the second sketch after this list gives a flavour of such a view).
  • We need to realise that all forms of semantic analysis are necessarily limited, albeit useful. Like Newtonian physics, they work within boundaries where one can rely on consistent use of language. Aside from the fact that much valuable intelligence is not in the form of text, even there the subtleties of language use cannot be fully represented by algorithms in a computer. Don’t get me wrong, there is some great work in the field, but you need humans to provide signification and interpretation at the primary data level.
  • Forget root cause analysis and linear causality (the assumption that one thing leads to another consistently). The world is complex; the same thing only happens the same way twice by accident. In a complex world you are dealing with emergent possibilities, coalescences of meaning rather than categories of certainty, and theories of constraint and coherence rather than theories of control and prediction.
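
To make the first two bullets a little more concrete, here is a minimal sketch in Python of a fine-granularity, self-signified fragment store, with a trivial anomaly score over the signifier metadata deciding which fragments earn human attention first. All the names here (Fragment, attention_scores, the perceived_threat signifier) are my own illustrative assumptions, not the SenseMaker™ implementation.

```python
# A minimal, illustrative sketch; not the SenseMaker(TM) implementation.
from dataclasses import dataclass, field
from statistics import mean, stdev

@dataclass
class Fragment:
    """A raw, unchunked item: an anecdote, a sound file, a picture."""
    fragment_id: str
    payload_uri: str  # points at the raw material itself, never a summary
    signifiers: dict[str, float] = field(default_factory=dict)  # 0..1 scales set by the originator

def attention_scores(fragments: list[Fragment], signifier: str) -> dict[str, float]:
    """Score fragments by how anomalous their self-signification is on one
    signifier, so patterns in the metadata (not expert reading) decide
    which raw material gets looked at first."""
    scored = [f for f in fragments if signifier in f.signifiers]
    values = [f.signifiers[signifier] for f in scored]
    if len(values) < 2 or stdev(values) == 0:
        return {f.fragment_id: 0.0 for f in scored}
    mu, sigma = mean(values), stdev(values)
    return {f.fragment_id: abs(f.signifiers[signifier] - mu) / sigma for f in scored}

# Three anecdotes self-signified on a hypothetical "perceived_threat" scale;
# the outlier (a3) surfaces first without anyone having read the content.
corpus = [
    Fragment("a1", "s3://raw/a1.wav", {"perceived_threat": 0.2}),
    Fragment("a2", "s3://raw/a2.txt", {"perceived_threat": 0.3}),
    Fragment("a3", "s3://raw/a3.jpg", {"perceived_threat": 0.9}),
]
for fid, score in sorted(attention_scores(corpus, "perceived_threat").items(),
                         key=lambda kv: -kv[1]):
    print(fid, round(score, 2))
```

The point of the design is that the raw material is never summarised away; the computation runs only over the originator’s own metadata, and it simply chooses what a human looks at next.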
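On the visualisation bullet, here is an equally hedged sketch of a landscape-style view: fragments plotted on two signifier axes with a kernel density surface underneath, so that clusters (attractors) and isolated outliers are visible at a glance. The axes and sample data are invented for illustration; this gives a flavour of the technique, not our fitness-landscape method.

```python
# Illustrative landscape-style visualisation; numpy, scipy and matplotlib only.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)
# Two invented clusters of fragments on a pair of 0..1 signifier scales.
x = np.clip(np.concatenate([rng.normal(0.3, 0.05, 80), rng.normal(0.7, 0.08, 40)]), 0, 1)
y = np.clip(np.concatenate([rng.normal(0.6, 0.05, 80), rng.normal(0.2, 0.08, 40)]), 0, 1)

# A kernel density estimate gives the "height" of the landscape at each point.
kde = gaussian_kde(np.vstack([x, y]))
gx, gy = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
z = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)

fig, ax = plt.subplots()
ax.contourf(gx, gy, z, levels=12)            # the landscape surface
ax.scatter(x, y, s=6, c="white", alpha=0.6)  # the individual fragments
ax.set_xlabel("signifier: familiar to novel")
ax.set_ylabel("signifier: individual to collective")
ax.set_title("Fragment density landscape (illustrative)")
plt.show()
```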
