Research methods

February 22, 2008

As some of you will know, there has been controversy on the ActKM listserv on the subject of surveys. No one has yet attempted any serious defense of questionnaires, although one academic who provided a weak one is now arguing that the debate should cease, which I will count as an admission of failure. Graham, while not defending such things directly, has been taking me to task for not providing information and arguments that would, in my view, require me to write a book (which I am doing, all too slowly, though not on research methods per se). This morning he focused his question and I provided a more detailed answer (thus denying him the last word, at least for the moment), which I share below.

You will need one bit of context: I have argued that most surveys are so context-free that a random answer is as valid as a considered one. Given that academics need surveys completed to meet nonsensical performance measures, I get my children (as an act of charity) to complete them at random, in part fulfillment of the obligations that gain them pocket money.

Graham has also created some new HTML which I like, but for which there is no current representation

Graham’s email follows with my answers below



Dave, because I haven’t said anything for about 13 hours, and I desperately want to have the last say, and you won’t or can’t name the research methods that reduce bias, I thought I’d make it a bit easier for you and construct a survey. Knowing your dislike of surveys I will understand if you choose to answer randomly or get a family member to respond. I’ve catered for this eventuality by providing Question 3.

Question 1. From the following list please select the method, or methods, that reduce cognitive bias. If there is a method that reduces bias that is not listed, please include it in your answer.

  • focus groups;
  • content analysis;
  • anthro-simulation environments (Cognitive Edge method);
  • anecdote circles;
  • social network analysis;
  • archetypes-value-themes workshops (Cognitive Edge method);
  • concept mapping;
  • oral history;
  • metaphor simulations (Cognitive Edge method);
  • on-line surveys;
  • structured interviews;
  • model creation by social construction (Cognitive Edge method);
  • Delphi;
  • latent semantic analysis;
  • the future backwards (Cognitive Edge method);
  • randomised control trials;
  • case study.


Question 2. Having chosen a method or methods, please explain why the method, or methods, reduces bias in all social research circumstances, regardless of the research context or questions. In other words, please explain why it is a meta-method that can be universally applied.

Question 3. Please indicate whether you:

  • considered your responses as you completed the survey;
  • completed the survey using random selection;
  • got someone else to complete the survey;
  • let your cat, dog, canary, chook, or other domestic animal walk across the keyboard so as to completely randomize the response, and remove subconscious human bias;
  • used some other method – please specify.

Dave thank you for participating.
And on that note I intend to follow Patrick Lambe’s sage advice and bow out of this discussion. Thank you to all who participated – I got a good deal from the discussion, and there are no hard feelings at my end!

My response

You keep saying this, Graham, but I think I have in part answered it, and I offered to complete the answer by making 16K words available and dealing with the question in more depth at a public presentation (where it is easier to handle context). However, as you have given a list, I am happy to respond to the methods as listed, although it has taken an hour (so it will also be today’s blog) and it is really (you are “wicked”, Graham) a request to write what should be a book, so it is necessarily abbreviated.

Focus groups: we have conducted controlled experiments which show that most facilitators start to influence the group’s direction within 15 minutes, and none survive 40 minutes. The judgement call on influence was made by the participants. In addition, without constant sub-group disruption, pattern entrainment starts to set in as people harmonise their responses with the group norm.

Content analysis: this is rather like your earlier mention of Action Research; you are referencing a huge field, so a valid answer should really have more context. However, there are some things we can say. Semantic analysis of content by computer has only as much value as the assumption of a common use of language and linguistic form allows. Given that such an assumption is limited, content analysis is limited by the degree to which a common use of language across the material can be assumed. Human interpretation has the same limitations, with the additional problem of cultural and contextual filtering to which I have previously referred.

Anthro-simulation: this method is mainly intended as a learning mechanism for participants, but it can be used for research without bias in a limited way. For example, Klein and I used it in Singapore to measure the impact of different sense-making methods on weak signal detection. By seeding a set of weak signals into the feeder systems of the different groups as different methods were used, we could see which signals were picked up and which were not. Of course you have to qualify this in part, as it is not possible to ensure that each group is identical, and sequencing issues have to be addressed.

Anecdote circles: far from perfect, and it depends a bit on which (or whose) method you adopt. They are subject to the same issues as focus groups. We have done some work to minimise this with a three-facilitator rule (rotating facilitators as they start to display preference) and also sub-group rotation. The method is useful for a range of functions, but there are better ways of gathering narrative. If this is in use and it is one of our practitioners, then I would check that the three-facilitator rule is in operation; with regret, few use it.

Social network analysis: I use your analysis of ActKM as an example of its legitimate use to provide a visual representation of activity from which conclusions may be drawn. The vast bulk of popular SNA based on questionnaires has neither validity nor compliance with sound ethical principles. I have elaborated this position (and some legitimate approaches) using Cross and Parker’s book as a base in this article:
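The legitimate use I describe here rests on observed activity, not on what people claim about their relationships. A minimal sketch of the idea (the reply log and all names below are invented for illustration): counting who actually replies to whom in a list archive yields the in- and out-degree figures from which a visual representation of activity would be drawn, with no questionnaire involved.

```python
from collections import Counter

# Hypothetical reply log from a mailing-list archive: (sender, recipient)
# pairs. The names and the log itself are invented for illustration.
replies = [
    ("graham", "dave"), ("dave", "graham"), ("dave", "patrick"),
    ("patrick", "dave"), ("dave", "graham"), ("matt", "graham"),
]

# Out-degree: how often each person replies; in-degree: how often each
# person is replied to. These are the raw figures behind an activity map.
out_degree = Counter(sender for sender, _ in replies)
in_degree = Counter(recipient for _, recipient in replies)

most_active = out_degree.most_common(1)[0][0]
print(most_active)  # dave
```

The point of the sketch is that the data is behavioural: it records what happened on the list, not how participants rate their own connections, which is where questionnaire-based SNA goes wrong.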

AVT workshops: these are designed to create cultural signifiers, and we went to considerable effort to reduce bias (the method was developed in Denmark and Singapore, so we could facilitate in English but the conversations would take place in Danish and Mandarin). It is a pre-process to reduce bias before an interpretative process takes place, and that interpretative process of necessity has problems. However, pre-processes like AVT workshops are ways in which we can reduce some of the issues with semi-structured interviews, providing (to a degree) some of the control mechanisms that exist when a doctor is making a diagnosis. Comparative AVTs (between two groups) provide relative indicators, which are more objective than absolute ones.

Concept mapping: another one of those phrases with lots of instantiations. It is a representation technique, and it has the semantic issues mentioned earlier (which limit its usefulness) and the input issues that apply to SNA.

Oral history: the traditional use of this in anthropology is heavily biased, as you can see by going and looking at some of the examples in the Australian National Museum with an indigenous companion. We have developed a computer-mediated KM approach based on oral history, which will also incorporate social computing (more on this in Canberra), and which we think will provide research data without significant bias when you have large volumes of material.

Metaphor simulations: this is a subset of anthro-simulation and is primarily used as a learning mechanism. Like anthro-simulation it could, in part and with careful preparation, provide some objective research data.

On-line surveys: you can gather material in an on-line environment; what matters is how you do it. The vast majority of these fall into the “random answers have the same validity” category.

Structured interviews: the structure determines the answer! Only useful for factual data which can be validated by sample checks. No value other than as a crude pre-process in other areas.

Model creation by social construction: this is not a research method per se; it is designed to create strategic frameworks. However, once created within an operational system, such models can (we think, but have not done the work yet) be used to create research data. They have been used in anti-terrorist contexts, but as part of a full system, not in their own right. Use of parallel processes with identical source data and common instructions would produce valid research data.

Delphi: the hypothesis forms too quickly and determines the result. No validity (MAKE awards included) other than in controlled circumstances.

Latent semantic analysis: useful, but limited by the assumption of a common use of language (like all semantic analysis). Chomsky was wrong: there are no deep structures in language, no grammar gene or structure. All semantic techniques are, like Newtonian physics, very useful within boundaries.
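The common-language limitation is easy to demonstrate even before the dimensionality-reduction step that makes latent semantic analysis “latent”. A minimal sketch (the two sentences are invented for illustration, and this is plain bag-of-words comparison, not LSA itself): two statements with much the same meaning but no shared vocabulary score zero, which is exactly the gap LSA attempts, only partially, to bridge.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Two sentences with much the same meaning but no content words in common.
doc_a = Counter("the physician examined the patient".split())
doc_b = Counter("a doctor checked a sick person".split())

print(cosine(doc_a, doc_b))  # 0.0 -- no lexical overlap despite shared meaning
```

Adding the SVD step lets LSA recover some of that shared meaning from co-occurrence patterns across a large corpus, but only to the degree that the corpus really does use language in a common way, which is the boundary I refer to above.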

The future backwards: This is a strategic tool and not designed or intended as a research method.

Randomised control trials: too open a question. It depends on the context, the subject, the trials, and the construct of the process. Use in medical contexts is established; use in social contexts is more limited.

Case study: too subject to cognitive entrainment, cultural bias and retrospective coherence. No scientific basis.


QUESTION TWO

There are no universal methods or meta-methods in the sense you describe them. Bias can be significantly reduced in context. Only a fool would make a claim for universality in the sense you describe it.

The obligation to reduce bias remains (and to declare context) and it is also possible to exclude some methods in virtually all contexts.

QUESTION THREE

Two cats were involved in the creation of this response, but neither touched the keyboard.
