LSE Business Review, Tue, 11 Feb 2020
‘When I use a word,’ Humpty Dumpty said in rather a scornful tone, ‘it means just what I choose it to mean — neither more nor less.’ – Lewis Carroll, Through the Looking-Glass.
In Lewis Carroll’s Through the Looking-Glass, Humpty Dumpty is a wordsmith focused on making the most of his vocabulary. He not only recites but explains poetry to Alice. He is obsessed with being able to force meaning to happen: when he uses a word, he makes it mean whatever he wants.
Recently I was reminded of Humpty Dumpty when I took part in a panel at a major event in London. The subject under discussion: would the service automation industry evolve faster if we stopped referring to robots? Initially none of the panellists, me included, was in favour of presenting service automation software as robots. But to create a discussion I offered to take the contrary position. Surprisingly, it proved to be a very lively debate. Let’s face it: ostensibly, there are no physical robots being used in robotic process automation (RPA) and cognitive automation (CA) deployments. It’s just software. ‘Robotic process automation’, or RPA, was a phrase coined for marketing purposes back in 2012. It has been very successful at attracting attention, less so at clarifying exactly what was being sold.
My own understanding has been that RPA software automates tasks previously performed by humans by following rules to process structured data and produce a single, correct answer. The ideal process is simple, takes little time to complete, and rules can easily be written for its component tasks. The word ‘robotic’ is relevant here, metaphorically, in two ways: firstly, the process is robotic; and secondly, what the human was required to do was robotic, leading us to coin the phrase that ‘RPA takes the robot out of the human’. So far so good. But has the term RPA outlived its usefulness?
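That definition of RPA can be sketched, very loosely, in a few lines of code: deterministic rules applied to structured records, each yielding a single, correct answer. This is purely illustrative — the field names, thresholds and routing labels below are invented for the example, not drawn from any real RPA product.

```python
# A minimal, hypothetical illustration of the RPA idea described above:
# fixed business rules applied to structured invoice records, each
# producing exactly one unambiguous decision. All names and thresholds
# are invented for illustration.

def route_invoice(invoice: dict) -> str:
    """Apply fixed, ordered rules to a structured invoice record."""
    if invoice["amount"] <= 0:
        return "reject"            # rule: malformed invoice
    if invoice["po_number"] is None:
        return "manual-review"     # rule: no purchase order, a human decides
    if invoice["amount"] < 10_000:
        return "auto-approve"      # rule: small invoices approved automatically
    return "manager-approval"      # rule: large invoices escalate

invoices = [
    {"amount": 250.0, "po_number": "PO-1001"},
    {"amount": 50_000.0, "po_number": "PO-1002"},
    {"amount": 125.0, "po_number": None},
]
decisions = [route_invoice(inv) for inv in invoices]
print(decisions)  # ['auto-approve', 'manager-approval', 'manual-review']
```

The point of the sketch is the rigidity: nothing here learns, judges or adapts — the ‘robot’ is just a rule book executed quickly and tirelessly, which is exactly why the metaphor of taking the robot out of the human fits.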
Panel members argued persuasively that the terminology of RPA had served a purpose: giving an identifiable label to complex technology, supporting easier understanding across the business community. But the term, and the concept of a ‘robot’, no longer represent the complex, AI-infused solutions that underpin scaled, strategic adoption of automation. The terminology of robots or ‘bots’ encourages a tactical mindset about automation and is misleading – it is really efficiency software and algorithms backed by impressive, growing computing power and memory. Counting ‘bots’ is not a good indicator of business value or sophisticated automation. There is already a plethora of alternative terms that better represent where connected RPA, cognitive automation and eventually artificial intelligence (AI) are leading. It follows, the argument goes, that the term ‘robot’ is holding back the automation industry and should be recalled, taken out of service, discontinued, de-activated, stood down. These are good arguments, but I am going to offer five reasons why this might be a difficult thing to achieve, and not necessarily a sensible move. See what you think.
1. Stable definitions are useful
When it comes to RPA, at least, there is a stable definition in place. People have spent some seven years with the term. RPA is experiencing exponential sales growth likely to last several more years, and there is no sign of the software being discontinued, only enhanced and complemented. There is an IEEE (Institute of Electrical and Electronics Engineers) standard for terminology, and personally I cannot see any reason for changing most of the IEEE definitions. The call for taking the term out of service would seem to be more an itchiness amongst marketing departments eager to suggest they are selling something more exciting. Some people are already calling RPA ‘AI’, which it manifestly is not, if AI is defined as using computers to replicate what human minds can do. In practice the industry loves to move the vocabulary on to the next big, new thing, leaving us all a little puzzled about what the words really mean, and whether the ‘thing’ even exists. The truth is that the whole area of automation has become a veritable Tower of Babel, proliferating terms such as digital workforce, virtual workforce, algorithmic certainty, the catch-all ‘AI’, and many, many more to come. And vendors seem to thrive on inventing new words and using phrases to mean whatever they want them to mean – Humpty Dumpty again. In such an environment, let’s stick with RPA for the time being. But what about the wider references to ‘robots’ beyond RPA?
2. Robots have been with us for a long time. Robots are found in Greek, Indian, Chinese and Persian myths, and throughout history. In Greek myth, for example, the gods’ blacksmith Hephaestus built Talos, robot guardian of the island of Crete, and Pandora, possessor of the evils for mankind in her notorious Pandora’s ‘box’ (more accurately ‘jar’). Robots and human-made creatures have lived on in the public imagination from Frankenstein, through films like Metropolis, Blade Runner and the Terminator series, to mention just a few high-profile examples. Robot myths form a persistent, useful way of thinking about machines and our anxieties about them, highly relevant to our rising dependence on information and communication technologies. They serve as a narrative, a symbol and a repository for our anxieties, fears and hopes when it comes to relating to our self-created machines. As the technology becomes more virtual, opaque and less visible, so humans seem to need to make sense of the machines by rendering them in physical form. This appears to be a deep-seated human psychological need, not easily circumvented or substituted for. Which raises the fundamental question…
3. How do you declare the term ‘robot’ non-operational? In our research into RPA and cognitive automation deployments, we found employees time and again voluntarily visualising the software in robot, often human, form, and investing in it psychologically, giving the ‘bots’ names, characters and roles, for example ‘tireless trainee’, ‘my virtual assistant’, ‘digital worker’. Humans seemed to want to establish working relationships with the ‘robots’ and to give them human characteristics, which increased their comfort levels about work and technological change. Managers endeavouring to bring in service automation could also see the value of staff and customers buying into these developments. In short, personification can drive adoption, while the concept of a robot helping people to become more efficient and augmenting their skills may well remove fears over job losses from automation. Given such positive aspects, organisationally and managerially speaking, it is difficult to see why participants in automation would want to declare the term ‘robot’ non-operational. And if they did, what would they put in its place that served the same purposes?
4. Where do you stop? If you are going to get fussy about terminology, why limit your strictures to the word ‘robot’? The whole language of computing and AI is suffused with the fundamental misunderstanding that the brain is some kind of computer, and that machines have progressively more human qualities. Machines are said to remember, understand, possess intelligence, make sense of data, know, even most recently empathise and create… but none of these things is true of the machines we design, build and deploy. This misleading metaphorical language reflects, perhaps, the fairy tale that we want to believe – they really are like us. Is AI ‘intelligent’, or is Meredith Broussard more accurate in her recent book entitled “Artificial Unintelligence: How Computers Misunderstand the World”? Do cognitive automation technologies really have neural networks like human brains do, or is the terminology just wish fulfilment? I am all for much better use of language, but I cannot see why removing the word ‘robot’ would solve a language problem, and language habits, that are far more widespread, more deeply embedded, and more seriously misleading the further we go in adopting these emerging technologies.
5. You cannot rein in the media – they love robots. Finally, let us face the fact that ‘robots’ play straight into how the media love to portray, talk about and make sense of the technological world. That is why pictures of robots, and/or talk of robots, are all over the media – every medium. Robots have become an essential part of media currency. The media thrive on the power of narrative, and, with service automation, the stories told tend to polarise around hype or fear, optimism or pessimism, technological triumph or technological catastrophe – what I have called in our book “Service Automation, Robots and The Future of Work” the stories of either Automatopia or Automageddon. Robots form a crucial part of both stories, being either a benign, overwhelming force for good, or machines that wreak havoc and disaster, and may well become all-powerful and turn against their human creators. This is easy, persuasive and irresistible story-making, and I cannot see the media giving it up, whatever attempts are made to decommission the word ‘robot’ to support the expansion of the service automation industry.
So, what do you think? In all this I do believe it is important to retain stability and accuracy in the use of language. Otherwise we get totally lost and unable to communicate, or make the mistake of thinking we are communicating when in fact we are not. I also believe that the massive hype around what is now called ‘artificial intelligence’ needs to be confronted and pointed out: words and what they represent cannot be totally dissociated, and their relationships rendered fluid and moveable, in the way that the hype merchants would like. I am also anxious to point out that while using metaphor – comparing one thing with another – is a fundamental way in which we think and make sense of the world, it is vitally important also to identify the limits of every metaphor that we live by.
The word ‘robot’ comes from the Czech robota, referring to a feudal class of forced labourer or serf. It was used in R.U.R., a 1920 play by Karel Čapek. Here the robots were technologically created artificial human bodies without souls, ruthlessly exploited by factory owners. Ultimately the robots revolted and destroyed humanity. This imaginative comparison of human serfs to a new class of artificial workers carrying out serf-like work persists to this day, is read into our developing use of advanced technologies, and crosses into more cognitive, then perhaps emotional, areas of work. The end of the play still haunts us. In Samuel Butler’s nineteenth-century ‘Erewhon’, the utopia’s inhabitants, faced with the same possibility – of the machines taking over – decide to destroy all the machines. But, while the robots remain servants, it would seem sensible to keep the word, don’t you think?
- This blog post draws on the author’s new book ‘Becoming Strategic With Robotic Process Automation’, with John Hindle and Mary C. Lacity, SB Publishing.
- The post expresses the views of its author(s), not the position of LSE Business Review or the London School of Economics.
- Featured image by TCB, under a Pixabay licence
Leslie Willcocks is professor of work, technology and globalisation in the department of management at LSE. He is a leading global researcher on technology at work, globalisation, ICTs and innovation, and is a recipient of the PwC/Michael Corbett Associates World Outsourcing Achievement Award. He is co-author of 65 books, including four on automation, the latest being Becoming Strategic With Robotic Process Automation.