Artificial Intelligence and Human Insight: 
The Importance of Understanding What You Are Saying

Marek Kohn 


When people were first beginning to think about artificial intelligence, around the beginning of the 1950s, the theorist Alan Turing proposed a criterion for evaluating the intelligence of a machine. He argued that instead of asking whether the machine could think, it would be more useful to ask whether its responses could be distinguished from those of a human being. The computer was like a black box; you could never know what was going on inside it. Instead, people should look at what they could see coming out of the black box: the responses the computer gave when they ‘talked’ to it. If they couldn’t tell whether they were talking to a computer or a human being, then the computer was the functional equivalent of a human.

Turing called this the ‘imitation game’. His approach to artificial intelligence turned it from a problem of philosophy to one of performance. It didn’t matter whether the computer could think, or had a mind, or understood what it was saying. What mattered was how well it performed in a competitive situation. But back in 1950, the idea of such a contest was closer to science fiction than reality. Computers were a long way from being able to match human performance in anything other than mathematical calculations. 

Seventy years later, though, the imitation game is on. Machines can now produce translations of text that compare respectably with translations produced by humans. Although the contexts in which they can do that are fairly limited so far, they have demonstrated their potential, and there is every reason to expect that their capacities will grow. Real-time voice translation systems are in their early stages, and (as we heard in the first of these webinars), their engineers have some challenging problems to solve, but there’s no fundamental obstacle standing in the way of their further development. It’s reasonable to expect that in the not-too-distant future, organisations looking for interpreting services will have a real choice to make between humans and AI.

So what effect might competition from computers have on human interpreters? Could AI replace them and turn their profession into an automated industry? It might become able to provide many of the services they now deliver – but in the very process of doing so, it would highlight the qualities that will always give human interpreters a competitive advantage. AI may prove to be a valuable marketing tool for interpreters, helping them draw attention to the most sophisticated skills that they bring to their work, and the value that only humans can add to interpreting services.

Interpreters will gain this opportunity because the interpretation game will be as testing a version of the imitation game as Alan Turing could have hoped to find. It’s an ideal system for telling humans and computers apart. And it’s an ideal activity for demonstrating the importance of understanding what you are saying.

It’s true that there may be circumstances in which organisations would positively prefer interpreters that don’t understand what they’re saying. When security and confidentiality are decisive considerations, it may seem safer to trust an AI system than a person. When people speak about sensitive issues via a human interpreter whom they don’t know, they are being obliged to share confidences with a stranger. They might feel more comfortable knowing that their words are being translated by a machine that is incapable of having thoughts or feelings about what they say. They might speak more freely knowing that the digital interpreter wouldn’t judge them.

However, they might feel much more comfortable with human interpreters if they see them as a source of sensitive and deep forms of support that machines can’t provide. Computers can be taught etiquette, but they can’t feel empathy. They can be programmed to behave in ways that respond to social sensitivities, and artificial intelligence will enhance their social skills by learning from their interactions with people. For example, they could learn when to use polite forms of address, and when not to. That would put them ahead of many native English speakers, who assume that speakers of other languages are as happy as they are to be addressed by their first names.

But that’s not empathy – and formality isn’t what it used to be, either. In Alan Turing’s day, the conventions of institutional discourse were rigid and stable. Although Cold War adversaries denounced each other in sometimes incendiary terms through their propaganda and media channels, they didn’t undermine the traditional idiom of international diplomacy, in which everybody was ‘Monsieur’, or occasionally ‘Madame’. Without being too sentimental about a past golden age of formal civility, it’s reasonable to suppose that participants in all kinds of international conferences – commercial, governmental or non-governmental – kept within a register that was highly conventionalised and therefore relatively straightforward to grasp.

If the world had stayed the same in that respect, an artificial intelligence with today’s computing power could learn how to operate in a consistent register of cross-linguistic acceptability. But nowadays people go all over the place. Politicians, officials and diplomats range flexibly between formality and colloquialism. The more diverse public speech becomes, the more it widens cultural differences, and the more challenges it poses for interpreters.

A couple of years ago, Angela Merkel made some remarks in which she used an Anglicism that is inoffensive in German but is based on a word that is still sometimes rendered using asterisks in English. The head of media relations at the European Central Bank commented that he had been telling German colleagues that the word ‘isn't really OK in English’ for about five years.

The German Chancellor’s use of it caused amusement but no harm – unlike the American President’s use of a similar term earlier that year to describe countries from which people immigrate to the USA. It was an example of President Trump’s spectacular disruption of previously accepted norms in public discourse, which is the most dramatic manifestation of a highly pervasive and fast-moving international trend.

Much of it, like Germans innocently using a word that’s not really OK in English, is at most mildly consequential. But much of it can have adverse consequences for communication, especially between speakers of different languages. It looks as though the challenges are only going to get more complex and prevalent – which implies that the world of the future will need human interpreters, not artificial ones. The more complex the situation, the more you will need people.

And not just any people, obviously. Imagine an artificial intelligence, fresh from the factory, at some point in the foreseeable future. It can hold its own in an interpretation game with a young graduate who is a balanced bilingual, highly and equally fluent in two languages, but has no experience in professional settings and isn’t so strong on social skills either. Both have been made familiar with some basic rules of etiquette, but their sensitivities and contextual awareness don’t go much further than that. Neither of them would be able to get hired as a conference interpreter. But if some kindly employer did see potential in the graduate and decided to give them a chance, in a few years they might have learned enough from life and work to do a good job.

The AI, on the other hand, would be unable to acquire the vast amount of vital information about human relations that people gain from immersion in communities, both at work and in the world outside. It might become better at translating, but not at interpreting. It would never learn how to decide when to leave an utterance untranslated, because that requires high levels of emotional sensitivity, cultural knowledge and contextual awareness. It wouldn’t become good at paraphrasing, because that requires understanding what is being said, and being able to express it in a way that is suitable for your listeners. 

Paule Kekeh, a European Commission conference interpreter, has observed (in a talk that is unfortunately no longer online) that some cultures like to hear words in abundance, while others prefer brevity. When a speaker from a prolix culture is speaking to somebody from a culture of fewer words, part of the interpreter’s task is to provide a shorter version of the statement without losing its content or meaning. An AI could easily be given a list of languages that should be edited when translating into languages on another list. But it would be very hard for the AI to learn what to edit out, since it would not understand what it was hearing, and so would struggle to distinguish between florid metaphors and essential details.

If such an AI were available for conference duties, the choice for the event’s organisers would be between an unconscious automaton and a highly skilled professional person who can serve, as Paule Kekeh says, as a cultural mediator. She observes that the path of interpretation runs through the heart, through emotion and through intuition. It draws upon deep knowledge of cultures, and insight into the variations between them. In her words, ‘We transform the concepts of the original language into experiences that make sense to our listeners because they fit into their cultural reality, their identity, their reference framework.’

Doing that in real time, as the speakers speak and the listeners listen, is a kind of magic. A little box that translates speech is another kind of magic. Both are compelling in their different ways. But in a world that is becoming more diverse and more complex, and in many ways more divergent; in which cultural identities are an ever more prominent presence, and sensitivities of a variety of kinds demand more consideration than they have received in the past, the personal magic will be the more compelling kind. Indeed, it would be more compelling in any kind of world, because interpreting is always about so much more than language.

All this implies that the interpreting profession will be able to market itself highly effectively by emphasising the human skills and insights that it brings to its work. Interpreters even have an advantage in their name. The word ‘interpret’ carries a sense of those higher-level, insightful processes in which humans will always outperform machines. It can serve as a reminder that interpreters don’t just turn one stream of words into another. They help to support live dialogues and manage interpersonal dynamics between people whose understandings of the situation don’t match, and whose interests may not be closely aligned.

By drawing attention to these aspects of their work, interpreters can remind organisations just how much they bring to meetings. Faced with competition from computers, they can present themselves as cultural mediators, dialogue facilitators and supporters for speakers. All that added value derives from human experience and insight. As AI gets better at interpreting, its limits will come into view, and the value of understanding what you are saying will become increasingly clear.

AI will force people in all kinds of jobs to find new roles. For many, it may mean finding work of a different kind altogether. As with Alan Turing’s imitation game, it will be a question of performance. AI is likely to outperform humans in many fields of activity, while at the same time undercutting them on price. It’ll transform road transport, for one thing. Artificial intelligence will drive vehicles better and more safely than humans, as well as more cheaply. Driving will become a recreational activity, like horse-riding, instead of an occupation.

But although AI interpreting systems may be able to undercut humans on cost, human interpreters will always be able to beat computers in the interpretation game. Organisations that need interpreting services may be drawn by the prospect of cost savings, but they will have to ask themselves whether they can afford the loss of quality that will go with opting for artificial rather than human intelligence. And the advance of automation has been accompanied by an increase in demand for people who offer human guidance, such as trainers, coaches and counsellors. Interpreters are relatively fortunate in being able to point out that human guidance and support are what they already provide. The more automated the world becomes, the more appealing those services will be.

© Marek Kohn. Text written for AIIC UK & Ireland and delivered at AIIC UK & Ireland’s webinar “It’s a Kind of Magic? Artificial Intelligence and Human Insight” on 4 December 2020

Dr Marek Kohn is the author of eight books, the subjects of which include race and science, the evolution of the mind, the lives of leading evolutionary thinkers, trust, and the impact of climate change on the British Isles. He has also written for a wide range of publications, including the Guardian, Observer, Financial Times, Independent and New Scientist. As well as exploring the implications of scientific thinking for society – he has been described in the Guardian as “one of the best science writers we have” – he is particularly concerned with questions about diversity and identity. These concerns are rooted in his heritage as the son of a Polish father and a British mother. In his most recent book, Four Words for Friend: The Rewards of Using More Than One Language in a Divided World, he draws heavily on his experience of losing his first language, Polish, and on his efforts to recover it.
