My original blog post The Emperor’s Chess Board was republished on the ActKM forum and it’s been interesting to observe some of the exchanges, although I am staying out of them for the moment as part of a self-imposed break from listservs which may become permanent. In general the responses don’t seem to have grasped that I was challenging the concept of centralisation of a government knowledge function (I will qualify this a bit in a future post), arguing that it would manifestly fail to achieve the key goal of making a nation more secure. So far the following themes seem to have emerged:
Now I don’t intend to devote any time to the second theme; I think it defeats itself by its nature. However the first, the “this time we will get it right” approach, is interesting, if only for the illustrations it gives of the linear thinking which is so prevalent in this space. So in this third post in the series I want to address that, in effect an extension of my earlier two posts, before moving on to some hopefully pragmatic suggestions on the way forward.
Now it is undoubtedly true that people who need to be heard frequently are not heard. With the benefit of hindsight those who got it right suddenly come to be seen as prescient martyrs whose advice should have been taken. On the face of it this seems to make sense, and if their advice was rejected then it’s easy to find a source to blame, politicians or whatever. We like to believe that there is a reason for things, it’s called the fundamental attribution error, but following it is rather like the Emperor who was conned out of his rice crop: following what may seem to be common sense can cost you a lot, in this case probably the ability to detect the pre-conditions of the next terrorist attack.
One of the many problems is the sheer issue of numbers again. In a very large and distributed workforce the number of people making educated guesses or hunches is very high, and we are poor at estimating the probabilities involved. Once the event has happened we pay attention to those whose predictions proved correct; we don’t pay attention to the number that were proved incorrect. One famous con trick involved someone sending out a thousand emails, predicting to half the recipients that a stock would go up and to the other half that it would go down. When it went up, a second email went out to the 500 who had received the correct tip to say “I got it right”, followed by a prediction on another stock: up to 250 of them, down to the other 250. The process continued until a small group of people thought the con-man was infallible, at which point they were stung big time. In any event, after the event some will have been proved right, statistics tells us so. It does not mean that they were right for the right reasons, or that they will be right again, or for that matter that they should have been paid attention to.
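To make the arithmetic concrete, here is a minimal sketch of both effects. It is purely illustrative and not part of the original argument: the numbers of email recipients, guessers and predictions are assumptions chosen for the example.

```python
import random

# --- The con trick: 1,000 recipients, opposite tips sent to each half ---
recipients = 1000
rounds = 0
while recipients > 1:
    # Half are told "up", half "down"; whichever way the stock moves,
    # one half has now seen another correct prediction in a row.
    recipients //= 2
    rounds += 1
    print(f"Round {rounds}: {recipients} people have seen {rounds} correct tip(s) in a row")

# --- Chance "prescience": many independent guessers, binary outcomes ---
random.seed(1)        # fixed seed, for a reproducible illustration only
guessers = 10_000     # assumed size of a large, distributed workforce
calls = 10            # assumed number of yes/no predictions each person makes
perfect = sum(
    all(random.random() < 0.5 for _ in range(calls))  # pure coin-flip guessing
    for _ in range(guessers)
)
print(f"{perfect} of {guessers} coin-flip guessers got all {calls} calls right by luck alone")
```

After nine halvings the con-man is left with a handful of people who have only ever seen him be right; and with ten thousand people each making ten coin-flip calls, roughly ten of them will get every call correct by chance, which is the point about hindsight selecting the “prescient” few.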
We did an experiment some years ago, working with the 9/11 report to get people to understand the complexity concept of attractors. The morning was a complete failure. With the benefit of hindsight they could all see what should have been done, but they could not describe it in terms of uncertainty. Over lunch I was depressed. But then a conversation started about a current situation where the outcome was uncertain and it was unclear which signals should have attention paid to them. All of a sudden, in the context of uncertainty, methods designed for uncertainty worked.
What you have got here, and it is VERY VERY dangerous, is using hindsight as a substitute for foresight. We need methods and tools designed for foresight, and they do not include centralised KM functions for government, but rather something more complex. More on that when I have thought up the next variation of Empire for the heading.