As promised, I am moving on today to a more dystopian theme, with specific books from two of my favourite authors, Neal Stephenson and Iain M Banks. Both have written novels in which the transfer of a human to a virtual environment is deemed possible. Both also deal with what is wrongly known as Artificial Intelligence. As any reader of Fantasy knows, names have power, and getting them right is key.
I first came across Iain M Banks as Iain Banks, with his first novel The Wasp Factory, and he continued to write some pretty outstanding fiction as Iain Banks; the addition of the M was reserved for his science fiction. He had, shall we say, a dark sense of humour, and when he was given a terminal cancer diagnosis he proposed to his then-girlfriend, famously asking if she would do him the honour of becoming his widow. With the M he created The Culture series, which deserves reading, rereading and listening to on Audible as many times as you can create time for. The Player of Games is, I think, the best, but for today I want to talk about Surface Detail, the penultimate novel in the series, published three years before his death.
The basic plot of Surface Detail starts with a War Game to determine whether societies are to be allowed to create artificial hells in which the mind-states of people are subjected to torture, in full accordance with the cultural tradition of which they are a part. The description of the Hells is pretty terrifying, and while we get a happy ending with the destruction, by The Culture, of the hardware which contains the bulk of the Hells, the journey there is harrowing. There are also strong elements of sacrifice in the characters of Prin and Chay, but my intention here is not to spoil the novel; rather it is to point out that Banks explored much of the dark side of AI, while The Culture itself is more positive, I think.
The other author is Neal Stephenson, whom I first encountered through what I consider his masterpiece, namely the Baroque Cycle, followed closely by Cryptonomicon. All his books are worth reading, although some of the political stereotyping in Seveneves to my mind damages an otherwise wonderfully inventive novel. I also had the opportunity to work with him once as part of a foresight team in Singapore, and he is a fascinating character. The variance in quality (both good and bad) is wider than in Banks, however. Anathem is also brilliant, by the way, especially for anyone with a background in Philosophy. Again I am picking out one specific book in this post, namely Fall; or, Dodge in Hell.
Now, I must admit this is by far and away my least favourite of his novels, although it links to Deutsch’s work (which was one of the inspirations for Estuarine Mapping) and also to Milton, to whom I will return indirectly later in this series. The main plot centres on another artificial hell, taken over by a terminally and mentally ill billionaire, in which said anti-hero creates a religion centred on worshipping him. The final confrontation between creator and usurper is interesting in its implications and underpinnings. But my interest here is in section two of the novel, where social media determines that a small town on the Colorado River, Moab, has been destroyed by a nuclear explosion when in fact it is still there, and life is trying to proceed as normal. In effect, this is a world where only the rich can afford to have their information curated; everyone else is subject to the whim of advertisers. I wish he had expanded on this idea at length. The wider book has many problems, and the Guardian compared it with the Narnia books. To be clear, those will not feature in this series; I despise the writing of C S Lewis and would not consider such a description favourable, although I think it is accurate.
Now all of this gives me a context to talk a little about Advertising Interference, or Anodyne Inference (I am working on alternative names). To give a sense of where I am coming from, let me quote something I wrote in my Introduction to the Flow System playbook:
One of the issues we face here is that the promise of utopia from the creators of these ‘tools’ does not contain any detail of the suggested promised land that would allow an informed choice as to its desirability. In effect, it is an invitation to an undefined future state in which we are invited to simply trust the creator’s promise that ‘we will enjoy it when we get there’. Ironically, we would not allow the unregulated development of new technology with the potential for unpredictable change and unintended consequences in any other field but software; attempting to develop nuclear fusion in your backyard would quickly attract attention from the authorities.
You might also want to take a look at this work by the LIS and read Bender & Gebru’s 2021 paper on Stochastic Parrots. And we can add one more book to this post, namely Aldous Huxley’s Brave New World, where much use of AI seems to reflect the role of soma in that book. A study of 7% of the BCG workforce by a group of academics from Harvard, MIT and Warwick found that consultants using AI “finished 12.2% more tasks on average, completed tasks 25.1% more quickly, and produced 40% higher quality results than those without”. That paper has been used by many an advocate of AI, but to me it says more about the poverty of consulting in the larger consultancy firms. Their business model is largely based on regurgitating existing material, and AI will always be better at that; we need to be better.
People are rushing into this whole area without considering a range of issues. Just to give you some examples: the ability of AI to write a virus, in effect acting as a sleeper behind the firewall; mental health issues, where we have already had one suicide based on ‘advice’; and the energy issues, where training a single model can produce emissions equivalent to multiple transatlantic flights, yet I don’t see that in organisations’ carbon-neutral strategies.
I could go on, and will at some length in the new year, when we are launching something that will measure attitudes to AI, including hopes and fears, and the level of complacency in its adoption. The danger is that we become dependent, that we see the results of AI rather in the way we perceive conspiracy theories and a magician’s illusions. The danger of AI is that we meet it halfway.
But there are also many positive aspects: summarising existing material to free people up from conventional reporting is one of the areas we are working on. But primary sense-making? Remember that humans evolved to think and act abductively, a process that is not dependent on training data sets.
The reason for the swarm in the opening picture is to make the point that an algorithm replicates the way a hive operates, and to a degree automatic processing in humans, but it lacks the capacity for novelty-receptive processing and also for necessary inhibition. Think of what happened with trading algorithms a few years back.
There is a lot more to say here, and some of it I will pick up next week. For the moment I am raising some questions and suggesting that Stephenson is more or less describing where we are now, and Banks where we might go: to Hell, or to The Culture?
The banner picture is cropped from a Rijksmuseum engraving: Laatste Oordeel (The Last Judgement), Willem Isaacsz. van Swanenburg, after Joachim Wtewael, 1606. The opening picture of a swarm is by Steve Sharp, obtained from Unsplash.