I had a thoroughly enjoyable day yesterday down at the University of the West of England looking at, and talking about, the use of robots as a simulation device with Alan Winfield. Now anyone involved in complexity knows about simulation, although too many confuse simulation with prediction, as their linear predecessors confused, and confuse, correlation with causation. Simple rules give rise to complex phenomena, so if we build a computer model in which each agent operates on a set of rules, then as the agents interact, patterns emerge. The flocking of birds and termite nest building are examples of this, as is most of the special effects industry in Hollywood.
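To make the point concrete, here is a minimal boids-style sketch of my own (nothing to do with Alan's robots): each agent follows three local rules, cohesion, separation and alignment, and the flocking pattern emerges from their interaction rather than being programmed in anywhere. The rule weights are arbitrary illustrative choices.

```python
import math
import random

def simulate_flock(n=20, steps=100, seed=0):
    """Minimal agent simulation: three simple local rules per agent;
    the flock-level pattern emerges from their interaction."""
    rng = random.Random(seed)
    pos = [[rng.uniform(0, 100), rng.uniform(0, 100)] for _ in range(n)]
    vel = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(n)]
    for _ in range(steps):
        new_vel = []
        for i in range(n):
            cx = cy = sx = sy = ax = ay = 0.0
            for j in range(n):
                if i == j:
                    continue
                dx = pos[j][0] - pos[i][0]
                dy = pos[j][1] - pos[i][1]
                dist = math.hypot(dx, dy) or 1e-9
                cx += dx; cy += dy                    # cohesion: drift toward the others
                if dist < 5:                          # separation: avoid crowding
                    sx -= dx / dist; sy -= dy / dist
                ax += vel[j][0]; ay += vel[j][1]      # alignment: match headings
            vx = vel[i][0] + 0.001 * cx + 0.05 * sx + 0.01 * (ax / (n - 1) - vel[i][0])
            vy = vel[i][1] + 0.001 * cy + 0.05 * sy + 0.01 * (ay / (n - 1) - vel[i][1])
            speed = math.hypot(vx, vy) or 1e-9
            new_vel.append([vx / speed, vy / speed])  # keep a constant speed
        vel = new_vel
        for i in range(n):
            pos[i][0] += vel[i][0]
            pos[i][1] += vel[i][1]
    return pos, vel
```

Nowhere in that code is "flocking" specified; it is purely a consequence of the local rules, which is the whole point about emergence.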
Swarm robots in effect act like agents in a computer simulation, but in a physical environment. You can see an example of a robot swarm patch sorting here. Alan also provided me with this, which he thinks is “probably the best demo of robot swarming in the world so far from the EU FP6 Swarm-bots project, led by Marco Dorigo”. Each robot is actually a sophisticated computer, albeit a low cognition device, equipped with various sensors and communication devices. The robots are programmed and placed in a space; they then interact with each other and with the physical environment. Here we see them sorting, and here is a more sophisticated social experiment. Now the really interesting thing about swarm robots is that, unlike computer agents, they operate in the real world, so things go wrong. Alan told me the story of the day the cleaners (who had anthropomorphised the robots) wanted him to know that one of the robots was not well; it had developed a wobbly wheel. The net result of this was a changed behaviour pattern, which had then been imitated by the other robots. Even without the wobbly wheel, each robot has some differences created during the assembly process. You therefore get more variation, including accidental variation, than is possible in a computer simulation.
Another interesting thing for me was that many of the more interesting results were achieved with small numbers of agents. This seems to be of particular importance in understanding issues of threshold limits. For example, one of the experiments introduced a “deviant” agent who tried to thwart the behaviour of the swarm. The simulation showed that the swarm was remarkably resilient to deviancy. Now this is important for a variety of reasons. Understanding the threshold levels for deviance in different cultures is important in counter-terrorism (when will a civilian population provide tacit support to terrorist action?) as well as in consumer marketing (what level of brand transfer is sufficient to break market dominance?) and several other areas.
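The threshold idea can be sketched in a toy model of my own devising (this is not Alan's experiment, just an illustration of the question it raises): conformist agents copy the majority opinion of a small random sample of the population, while a fixed fraction of deviants hold the opposite opinion permanently. The question is at what deviant fraction the swarm's consensus breaks.

```python
import random

def run_consensus(n=100, deviant_fraction=0.1, steps=200, seed=1):
    """Toy threshold model (hypothetical, not the actual robot experiment):
    deviants are locked at opinion 0; conformists start at 1 and copy the
    majority of a random sample of five agents. Returns the final share of
    conformists still holding opinion 1."""
    rng = random.Random(seed)
    n_dev = int(n * deviant_fraction)
    # the first n_dev agents are deviants, fixed at opinion 0
    opinions = [0] * n_dev + [1] * (n - n_dev)
    for _ in range(steps):
        i = rng.randrange(n_dev, n)                        # pick a conformist to update
        sample = [opinions[rng.randrange(n)] for _ in range(5)]
        opinions[i] = 1 if sum(sample) >= 3 else 0         # copy the sample majority
    conformists = opinions[n_dev:]
    return sum(conformists) / len(conformists)
```

With a small deviant minority the consensus holds, which matches the observed resilience; sweeping `deviant_fraction` upward is how you would locate the threshold at which it collapses.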
I met Alan, and the robots, for the first time at an EPSRC sandpit on the subject of emergence. This was an open space event (something that will be the subject of a substantial blog next week) that brought together academics from many backgrounds to create trans-disciplinary research programmes. I had the privilege of being Director of the programme, supported by an outstanding team of assessors: Peter Allen, Pierpaolo Andriani and Bill McKelvey. I also learnt that political games amongst academics are worse than in industry! In effect, over a million pounds of government money was allocated during an intensive week, in what is a wonderful alternative to the lottery of form-filling that normally goes with grant processes. Either way, out of this week we ended up with the opportunity to create parallel simulation environments that could be used to look at issues of civil emergency and culture. The simulation environments were robotic (my swarms) and biological (ants). Now this is leading-edge stuff and it is fascinating.
One of the first experiments will be to see what sort of artificial cultures can evolve in a population of robots. Will they learn to tell stories? Will one community use language or humour to differentiate itself? Now, the way these environments are set up, the robots are left to themselves for a year, having had some initial programming, and during this period the researchers just observe. This will be available on the web, by the way, and I will blog it as soon as I have details; the more observers there are, the more likely it is that something unusual will be seen. Alan calls this meme time. The system is then frozen, and genetic algorithms are used to set the neural controllers of the robots to encode the dominant memes; this is gene time. Now Alan and I fought ourselves to a standstill over memes, a concept I don’t like, but we also talked about some of the ways in which our narrative work could be used to shorten the training time. We also talked about some interesting applications: modelling phase shifts in society, and many others. I am going to try to hunt down a client with an intractable problem or two to experiment with this, as it could represent a breakthrough.
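For readers unfamiliar with the gene-time step, a genetic algorithm in its simplest form looks something like the sketch below. This is a generic illustration, not the project's actual code: here a genome is just a small weight vector for a robot's controller, and fitness is closeness to a target behaviour vector standing in for a "dominant meme". Selection, crossover and mutation gradually encode that behaviour into the population.

```python
import random

def evolve_controller(target=(0.2, -0.5, 0.9), pop_size=30, generations=60, seed=2):
    """Minimal genetic-algorithm sketch (illustrative only): evolve a weight
    vector toward a target behaviour via selection, crossover and mutation."""
    rng = random.Random(seed)

    def fitness(g):  # higher is better: negative squared error to the target
        return -sum((w - t) ** 2 for w, t in zip(g, target))

    pop = [[rng.uniform(-1, 1) for _ in target] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]             # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(target))    # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:                 # occasional mutation
                i = rng.randrange(len(target))
                child[i] += rng.gauss(0, 0.1)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

In the real project the genomes would parameterise neural controllers and the fitness measure would come from the observed memes, but the evolutionary loop is the same shape.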
One of the fascinating conversations took place around an interesting question. While humans are high cognition systems, maybe human society is a low cognition one? Stories pattern the range of options available to us, we copy exemplars (even if they have wobbly wheels) and so on. Now if this is the case, then robot simulations might give us a valuable tool for understanding markets and society. The physicality of the robot environment is closer to a human system than pure silicon.