This is the opening post in a short series looking at AI which, to lay my cards on the table, while undoubtedly artificial, is not intelligent and never can be. I flagged up this subject in the Twelvetide series, where I raised several issues and one particular concern: an invitation to an undefined future state in which we are asked to trust the creator’s promise that we will enjoy it when we get there. It is no coincidence that a lot of the West Coast hype smacks of a Bible Belt tradition of redemption, and possibly the rapture. But while I have considerable concerns, I don’t want to take a Luddite position, although I think we need to pay attention to Ned Ludd’s situation.
I’ve been involved in AI in various forms since the 1980s, and the promise of better decision-making has always been there. I’ve designed systems to trigger anticipatory alerts to improve weak signal detection, and SenseMaker® was initially intended to create more balanced training datasets to feed the algorithms. AI will be of increasing importance in medicine, in diagnosis and discovery. It contributes significantly to carbon emissions (training one dataset is the equivalent of a transatlantic flight), but it may also generate ways to ameliorate some of the disasters to come; no longer prevent them, I am afraid. It can transform day-to-day drudgery, and in various forms it helps a dyslexic like myself write without too much embarrassment. The issue for me, which links back to our broader work around the theme of rewilding, is one of balance.
The question is when human abductive capacity is best deployed, and how we can ensure it continues to develop and evolve. I remember an awkward moment with Sam Palmisano around the IBM investment in Fountain Park, which he was convinced would remove all middle managers, allowing him to micro-manage the whole company. ‘Micro-manage’ was my addition there, and the question I asked, which was ignored, was: if you get rid of the middle managers, where will the next generation of CEOs learn their craft? Then there are the behavioural issues. I remember working on a project in the North Sea where we had to remove the AI to ensure that geophysicists would trust their abductive insights. What was happening was that they could not be faulted if the AI system said there was oil, but if it didn’t and they did, the risk to them was too high. And then, if anything can be created in digital form, what can we trust?
These and many other issues need attention now, in particular because there seems to be a rejection of anything negative about AI in many organisations: it will work and it will save us. This is partly understandable given many commentators’ doom-laden and dire prognostications, and with good reason. We also face a situation where few people understand either the dangers or the potential, and they get swept up into something they only partly understand. I’ve seen that with a few clients now, and it’s not so much that they don’t understand as that they are too enthusiastic and naïve about what is possible.
In a much-reported study by Boston Consulting Group, with a sample of around 7% of its workforce, they found that “Consultants using AI finished 12.2% more tasks on average, completed tasks 25.1% more quickly, and produced 40% higher quality results than those without”. When I first saw that, my response was not that it was wonderful; rather, that it proved all consultants do these days is cut and paste things they probably don’t understand. If an AI system can pass an MBA exam, then all that demonstrates is the poverty of an education system that rewards synthesising other people’s material. Rather than seeing these as illustrations of AI’s competence, we should see these demonstrations of its ‘capability’ as exposing the increasing poverty of human inventiveness, hopefully before it is too late to do something about it. We need to find new and different ways of engaging employees, customers, academics, citizens and so on in new, more innovative forms of sense-making, to rebalance or rewild the role of humans. The fictional character of Ned Ludd was still the possessor of a craft, one that may have been partly automated but is still needed. That will require us to think differently from the current hype and to baffle the lemming-like dash to the cliffs.
We need to understand what people already know and how they know it, because that will influence what we train the AI systems on and what we don’t. It should affect how we generate the training data; black-box confusion of correlation with causation is not the way. We need to retain the craft skills for future generations, because excess automation may cause us to lose things we will later regret losing. We need to be able to discriminate between AI and human input, and to map the contexts in which each works stand-alone or in hybrid. We need to constantly measure and test the ethical awareness of those using technology of any type, AI in particular. And we must understand how attitudes evolve as AI use extends or is constrained.
There is a lot we can do, and what I want to develop over this week is an exploration of that, linked to some new mapping tools we will announce late this week, maybe early next.
The banner picture was cropped from an original by Dominik Vanyi, and the opening picture is an artist’s illustration of artificial intelligence (AI) representing the boundaries needed to secure safe, accountable biotechnology. It was created by the artist Khyati Trehan as part of the Visualising AI project launched by Google DeepMind; both images are from Unsplash. I am using a theme here of something produced by the AI community, which they see as positive, coupled with a banner picture of some of the consequences of the last ‘revolution.’