
Automated Partners

September 14, 2007

On Tuesday, September 11, I was the keynote speaker for a conference on Modeling and Simulation held in Virginia Beach, Virginia. My topic was the cognitive dimension of modeling and simulation. The talk had two purposes: to describe the cognitive functions of the people using the models and simulations, and to provide guidance to people who build avatars and intelligent agents and want them to mirror more closely the way people think.

I started with decision making, explaining that people rarely go through the option generation and comparison matrices taught in business schools, military academies, and schools of engineering. Next I showed that sensemaking isn't simply a matter of passively deriving inferences from data; it also requires us to use our frames and mental models to define what counts as data in the first place. Then I addressed planning and replanning, arguing that most tools assume well-defined and stable goals, whereas most situations are marked by ill-defined goals and wicked problems. Finally, I talked about the difficulty of making automation a team player.

This last topic was a bit contentious. On the surface it seems that a knowledge-based system is entering into some sort of coordination with its users. However, for true teamwork all entities need to make some minimal commitment to (a) making themselves predictable to the others, (b) enabling the others to direct their attention and their actions, and (c) monitoring and repairing common ground.

Automated systems don't have this capability. (See Klein, Woods, Bradshaw, Hoffman, and Feltovich, 2004, "Ten challenges for making automation a 'team player' in joint human-agent activity.") For example, in some commercial aviation incidents the flight management system took control of the airplane and adapted to anomalies and malfunctions without alerting the pilots. Then, when the system ran out of adaptation capacity, it abruptly turned control over to the flight crew, who were completely unprepared. We wouldn't tolerate such behavior from a human partner; we would expect some warning, some indication that things were getting difficult.

By considering why automated systems aren’t true partners we can also learn more about the criteria for successful human teams.

