Research methods (cont'd)

February 23, 2008

I woke up to eager anticipation of Wales v Italy later today, two hungry cats and emails on research from Joe, Graham and Stephen on the ActKM listserv. I have replicated my answer below with quotes, so that readers not on the ActKM list will understand the context of the reply. If you want to see the originals then join ActKM!

Stephen Bounds reinterprets my comments within the language of social science research. He references Perception, Interpretative and Communicative bias. He also suggests that bias can arise from factors such as financial sponsorship as well as filtering. I am happy to accept this point: the annual executive surveys of the large service firms (IBM, Bain, etc.) always seem to end up with commercial consequences beneficial to the sponsors.

Stephen then references the standard means by which such bias is overcome (peer review, control groups, measurement tools and the like). He rightly points out that qualitative experiments are harder, and mentions recordings or transcripts as a mechanism, criticising those methods on the grounds that the context at the time of capture may not be known. Here I fully agree with him, but not with the conclusion he then comes to, which I quote in full:

I think there are only two real options:

(1) hire a trusted researcher who has proven himself or herself as comparatively unbiased; or
(2) control for the researchers themselves. If three independent researchers participate in the same experiment, and present independent reports which all come to the same conclusions, then that’s about as certain as you’re going to get.

Since I can’t think of a single commercial entity who would be prepared to do (2), then that’s really why there are any number of experts who purport to fulfill the needs of option (1).

I agree that option 2 has some advantages, although it will not work during a period of radical shift, where existing experts will all have the same bias based on their training and experience. A large body of otherwise intelligent people, for example, continue to defend questionnaires, I assume (charitably) because they have never known anything else and consider it legitimate (rather like the use of torture in the 16th century…).

My disagreement is with the "only two options" statement. I have been arguing here for various forms of pre-processing before expert interpretation in management and social systems. In a previous post Graham gave me the opportunity to comment on various methods for this, so I will not repeat that. Our main attempts to develop new methods here have focused on mass capture of sense-making data (narratives, pictures, blogs, press cuttings), all of which is self-indexed or signified at the point of origin, not by the researcher. This links to one of the key findings from the application of complexity theory to social systems, namely that we need to create systems based on much finer granularity. This finding is supported by the success of the fragmented, fine-granularity outputs of social computing.

Now I make no claims to perfection here, but I do think, based on theory and also on experimental data, that we have made a significant step forward. Self-indexing at the point of origin means that the person creating the material adds context, or layers of meaning, that would be absent in transcriptions. Multiple interpreters in effect pick up on Stephen's option 2, but in larger numbers. Large numbers are also significant in increasing the reliance one can place on any findings. Finally, experts engage with the metadata (the self-indexing) rather than the data itself; they only look at the originating data when they find a metadata pattern (in the context of KM this also allows for cross-silo knowledge sharing: you only need to share data when the need can be explained).
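To make the shape of that workflow concrete, here is a minimal sketch in Python. I should stress that this is an illustration under my own simplifying assumptions, not our actual software: the Fragment structure, the signifier names and the 0 to 1 scales are hypothetical stand-ins for whatever signification scheme a real project uses. What it shows is the order of operations: analysts query the teller-supplied metadata first, and retrieve the original stories only when a pattern emerges.

```python
from dataclasses import dataclass, field

@dataclass
class Fragment:
    """A self-signified narrative fragment: the storyteller, not the
    researcher, attaches the metadata at the point of origin."""
    story: str  # raw narrative, never touched by a researcher
    signifiers: dict = field(default_factory=dict)  # teller-supplied, e.g. {"trust": 0.9}

def metadata_pattern(fragments, signifier, threshold, min_count):
    """Return matching fragments only when enough of them cluster:
    experts drill down to the originating stories on a pattern, not before."""
    hits = [f for f in fragments if f.signifiers.get(signifier, 0.0) >= threshold]
    return hits if len(hits) >= min_count else []

corpus = [
    Fragment("We only shipped on time because Ana called in a favour.",
             {"trust": 0.9, "process": 0.2}),
    Fragment("The form sat on two desks for three weeks.",
             {"trust": 0.1, "process": 0.9}),
]

# Scan the metadata layer first; read the stories themselves only on a hit.
for fragment in metadata_pattern(corpus, "trust", threshold=0.8, min_count=1):
    print(fragment.story)
```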

Stephen moves on in a separate posting to suggest that I am advocating a form of "double blind" system, which I suppose in a way I am, although I am reluctant to accept that phrase given all the baggage which comes with it. He then says that such methods are resource intensive, time consuming and require a non-cynical, co-operative population. Now while this is true for some pre-processing methods, it is not the case for narrative. We have captured 95K self-indexed stories from four continents over a week at a cost of under $35K, which compares very favourably with traditional research methods. In a large project with University College London we are gathering stories of large-scale transport system failures at lower cost than traditional research. In two projects with Liverpool Museums the cost of impact measurement and the creation of a KM system is under 10% of the cost of more traditional methods. I will present those cases in Canberra in a few weeks' time, so people should feel free to turn up and try to take me apart! For the moment I want to make it clear that our experience is that such methods are (i) more objective, (ii) more likely to lead to management action, (iii) acceptable to management and (iv) a move to dynamic research where KM system creation and research go hand in hand.

An advantage in international research (Graham's point) is that material can be gathered in the native language and translated only if the content proves to be significant, based on visual or statistical analysis of the metadata. We are doing work on cultural mapping and (in the context of counter-terrorism) finding ways in which the world can be seen through an alien culture's eyes by sensing metadata patterns, not by using expertise to interpret the original content (something a lot of traditionally trained researchers find very disturbing).
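The same metadata-first logic is what keeps the translation triage cheap, and a short sketch may help. Again this is hedged: the field names, language codes and threshold test below are assumptions for illustration, and in a real project the significance test would be whatever visual or statistical analysis of the metadata the study uses.

```python
# Fragments arrive in their native language; the self-signified metadata
# is numeric, so the significance test never needs to read the text.
fragments = [
    {"lang": "ms", "text": "(story captured in its original language)",
     "signifiers": {"trust": 0.9}},
    {"lang": "ms", "text": "(another untranslated story)",
     "signifiers": {"trust": 0.1}},
]

def is_significant(fragment, signifier="trust", threshold=0.8):
    # Hypothetical stand-in for a real statistical test over the metadata.
    return fragment["signifiers"].get(signifier, 0.0) >= threshold

# Only the (typically small) significant subset incurs translation cost.
translation_queue = [f for f in fragments if is_significant(f)]
```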

The vital point here is a complexity one. Such research methods are economical because they distribute the cognitive process of interpretation to the research subjects. Stephen's statement that "from a practical point of view, this kind of work will inevitably be undertaken by one or two people max, so my interest is in building the best possible discovery methods within these constraints" is, I am afraid, wrong.

My explanation of my use of bias (not the same thing as a definition) related to the way in which the human brain filters incoming stimulation utilising past and hypothecated patterns. Joe Firestone rightly says that this is present in both the physical and natural sciences. The difference, however, is that in large parts of natural science it is possible to create experimental processes that mitigate that effect, and also to create replicable experiments that can be carried out by peers. This is also true in medicine (Graham's example of appendicitis), although there we start to get some of the added complexities of interacting symptoms and non-physical causes, which mean that medical research has different characteristics from, say, physics.

Social and management science, however, certainly in their popular forms, either attempt pseudo-objectivity (the nonsense of the survey) or permit validation of theory through retrospective coherence: validating a theory by its relative coherence with a "biased" filtering of the past. Read any popular management book, and the vast majority of HBR articles, if you want examples of this; a rather good cover article last November is the exception.

I am not using bias in the sense of value judgements, nor am I imputing evil intent. I am not unhappy if Joe wants to use "source of error" rather than "bias", as I think we mean similar things; however, I will continue to use bias as I think it is richer. In general, Joe's comments on my response to Graham's list of methods indicate that we have similar feelings about the validity of methods, but different language (based on different philosophical assumptions) for the reasons behind those opinions. Such differences have significant consequences, but not in this discussion. For example, my position leads me to the conclusion that the originating material in a case study is subject to bias/error and retrospective coherence, and thus has no validity. Joe, taking a stance based on knowledge claims and refutation, has less of a problem.

Now Joe goes on to argue that the recognition that our brain processes filter stimuli, and the further conclusion that our research processes are inevitably biased, are not the central issue in inquiry. Rather, the central issue is whether the knowledge claims emerging from research processes are true (in the case of their descriptive assertions) or legitimate (in the case of their valuational or normative assertions). Our knowledge claims are never certain, but we can say of them that they have or have not survived our criticisms, evaluations and tests relative to their competitors; and if our processes of criticism, evaluation and test embody fair comparisons among alternatives, then I think we can say that, as far as we know, our knowledge claims may be true or legitimate, and our research processes for arriving at them have been unbiased.

Now I think the question of knowledge claims is a useful one. The method by its nature limits the knowledge claims that can be made. I have argued that questionnaires by their nature are either trivial (gathering factual data without ambiguity) or misleading (where there is context dependency). Any knowledge claim along the lines of "85% of managers surveyed regarded trust as a key component in a KM programme" thus has no validity; its production is a waste of money and time, and its contribution to the sum of human knowledge either neutral (if ignored) or a debit (if taken seriously). I do not have to accept the position that all knowledge claims are never certain, or subject only to a refutation test. That represents a particular position in the philosophy of science on which Joe and I have established differences and will doubtless debate, directly and indirectly, in the future; for the moment I do not think there is a substantial difference in respect of this thread.
