On Tuesday, September 11, I was the keynote speaker at a conference on Modeling and Simulation held in Virginia Beach, Virginia. My topic was the cognitive dimension of modeling and simulation. The talk had two purposes: to describe the cognitive functions of the people using models and simulations, and to offer guidance to people who build avatars and intelligent agents and want them to more closely mirror the way people think.
I started with decision making, explaining that people rarely go through the option generation and comparison matrices taught in business schools, military academies, and schools of engineering. Next, I showed that sensemaking isn't a matter of passively deriving inferences from data; it also requires us to use our frames and mental models to define what counts as data. Then I addressed planning and replanning, arguing that most tools assume well-defined and stable goals, whereas most situations are marked by ill-defined goals and wicked problems. Finally, I talked about the difficulty of making automation a team player.
This last topic was a bit contentious. On the surface, it seems that a knowledge-based system is entering into some sort of coordination with its users. However, for true teamwork all entities need to make some minimal commitment to (a) making themselves predictable to the others, (b) enabling the others to direct their attention and their actions, and (c) monitoring and repairing common ground.
Automated systems don't have this capability. (See Klein, Woods, Bradshaw, Hoffman, & Feltovich, 2004, "Ten challenges for making automation a 'team player' in joint human-agent activity.") For example, in some commercial aviation incidents the flight management system took control of the airplane and adapted to anomalies and malfunctions without alerting the pilots. Then, when the system ran out of adaptation capacity, it abruptly turned control over to the flight crews, who were completely unprepared. We wouldn't tolerate such behavior from a human partner; we would expect some warning, some indication that things are getting difficult.
By considering why automated systems aren’t true partners we can also learn more about the criteria for successful human teams.
Cognitive Edge Ltd. & Cognitive Edge Pte. trading as The Cynefin Company and The Cynefin Centre.
© COPYRIGHT 2022.