Good news for humans: AI doesn't do judgement

Human judgement remains irreplaceable despite advances in AI, reinforcing the value of critical thinking and decision-making

The evidence of artificial intelligence's (AI) presence is all around us.

To illustrate, let’s go shopping. The barcodes being swiped through the tills may mark the end of our weekly visit, but they are raw material for AI.

Stock control programmes using Big Data on weather and demand trends combine with our swipe to get the right replacement products onto the shelves. Meanwhile, increasingly sophisticated cameras track purchases and help to identify potential shoplifters. Behind the scenes, AI programmes in Head Office model strategy, market trends, financial planning and much more.

This is a long way from the friendly local store, where there is a personal response to each visit and each transaction. Ordering, stocking, keeping theft down and income up all fall to the shopkeeper. No wonder the march of AI is seen as dehumanising, even apocalyptic.

But this apparently straightforward replacement of the cheerful shopkeeper by dehumanised AI plays more to our fears of what is being substituted than to the reality. AI needs humans to set goals (or set tasks for programmes without goals), write the programmes, train the models on data, check data quality and interpret the results.


'Garbage in' remains 'garbage out' and, in a fast-moving world, human judgement is required to keep the garbage out and make sense of what the machine tells us. That includes the answers we get from ChatGPT. Without human judgement, Big Data is just Big Numbers.

Machines:

  1. Don’t have consciousness or intentionality;
  2. Cannot think abstractly or form an opinion;
  3. Are not good at identifying relevance through context (the comment that is appropriate in one situation or culture but deeply offensive in another) and they don’t “do” meaning (think metaphor, irony, or sense of humour);
  4. Don’t have belief or conscience grounded in ethics and spirituality, or self-belief through aspiration or ambition;
  5. Don’t have emotion or empathy and can’t create relationships or other social bonds involving feeling;
  6. Can’t anticipate spontaneity, idiosyncrasy, contextual shifts, or fallibility; and
  7. Cannot remedy incompleteness, including the confusion of correlation with causation.

That’s quite a list. Moving on from shop-keeping, all the major judgements we have to make in our working and personal lives include some combination of them. Dealing with colleagues, competitors, climate change or children involves most of them.

To pin 'judgement' down: it is the combination of relevant knowledge and experience with personal qualities, used to make decisions and form opinions.

We exercise it through the awareness we have, in knowing who and what to trust, in understanding our feelings and beliefs, in the way we make our choices and, in the case of decisions, in being able to deliver what we have chosen.

So, whatever a machine does through AI, it does not exercise judgement – machines are not mechanical human beings. Even the disputed possibility of 'Artificial General Intelligence', where what the machine can do equals what a human can, does not fill these gaps.

These reasons do not mean that humans are better than machines in all situations. On the contrary, there may well be relative strengths and weaknesses in using machines. The comparative superiority of the machine comes in some cases from human weakness. AI provides speed and consistency, neutrality and focus, while not being bored, ill, temperamental, carried away by greed and fear, or distracted by love affairs with other algorithms.

The assumption that there is universal substitution is, in any case, simplistic.

"So, whatever a machine does through AI, it does not exercise judgement – machines are not mechanical human beings"

Dr Eric Topol, in his book Deep Medicine, describes the superiority of AI over humans in some medical specialties, such as radiology, where human fallibility is an issue. AI even outperforms humans in some aspects of nursing, thanks to the remote monitoring of patients at home.

But AI cannot provide “the power of detailed, careful observation”, especially where complex care and psychological support are needed, not only in nursing but in all aspects of medicine. Topol believes that the ideal is humans and machines working together, with machines freeing humans to do what they do best – talking to patients.

AI is not a zero-sum game where machines gain and humans lose. As illustrated in Deep Medicine, it will be the combination of the machine and the human being that provides quality medical care.

Those who fail to recognise what AI can and cannot offer will be outflanked by those who do. But far from diminishing the role of judgement, AI will make the central contribution of human beings even clearer in choices that combine some permutation of unprecedented situations, significant complexity, new variables, abstract thinking, unusual trade-offs, insufficient data, convoluted qualitative factors, multidimensional risk, idiosyncratic relationships and the nuances of personality.

In other words, most of what senior managers do and the reason they are paid handsomely to do it.

Sir Andrew Likierman is Professor of Management Practice at London Business School and a former Dean. He has published on judgement in leadership, the professions and on boards.
