The only way is Ethics…

Arnie is not about to transport back in time and kill the saviour of humanity. Artificial intelligence is here – kind of – but we really don’t need to be worrying about robot overlords taking over the planet. Yet.

I have my doubts about the “intelligence” part of AI. RPA bots can do some amazing things, and when it comes to undertaking increasingly complex menial tasks, they are clearly vastly superior to even the smartest human being. But is this intelligence? It’s programming. Brilliant and smart? Without question. But actual intelligence? I am not so sure.

Putting semantic distinctions to one side, the ethics pertaining to AI are multi-faceted. We can discuss what might happen if AI does eventually outgrow its programming (or perhaps fulfil it), but this is the stuff of science fiction. And whilst the possibilities are both fascinating and terrifying, such concerns are fanciful speculation.

Robotic process automation (RPA) is perhaps the most significant technological advancement in modern times; its impact on the human workforce is just beginning, but people being replaced by machines is nothing new. An integral part of the capitalist system that dominates the planet is that companies are always looking to improve efficiency and cut costs. ATMs started to replace bank tellers fifty years ago, and if you go into a supermarket these days, you can scan your own items and complete the whole transaction without any need for human contact.

We are all complicit in embracing the convenience that technology brings, and I don’t think we are in a position to complain when it impacts us as individuals. We will adapt. Human beings are staggeringly good at adapting. Before the industrial revolution, nearly all of us lived short, squalid lives, working the land and rarely leaving the villages we were born in. A mere fifteen generations later (that’s about five years for rabbits, should such silly comparisons amuse you as much as they amuse me), billions of us live complex, sophisticated lives that bear no resemblance to those of our peasant forebears.

And we will adapt again, as RPA alters the working landscape. The ethics of this particular impact of AI can be dealt with easily. This is progress, in the same way that James Hargreaves’ Spinning Jenny was. The genie is out of the bottle, and technological advancements will continue to replace human beings in the workplace.

It isn’t just RPA, either. Autonomous vehicles – should the technology be realised – will have a huge impact on the workforce. Consider the logistics of the transportation of goods. This is very heavy on manpower right now. If we can achieve a future where fallible creatures with physical limitations no longer have to be in charge of driving vehicles, then the myriad benefits are obvious. This brings in some interesting ethical considerations, even if we put aside the loss of employment, sacrificed in the name of progress.

What if a driverless vehicle is faced with a split-second decision where it must choose the lesser of two evils? To kill an old lady or to kill a baby? This is very similar to the “trolley problem”, a philosophical dilemma posited in the 1970s, given a modern twist. And there is no easy answer, despite reams of debate. Indeed, is it even a fair question?
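
To make the dilemma concrete, here is a deliberately crude sketch, in Python, of what such a choice looks like from the programmer’s side. Every name and weight in it is hypothetical and invented for illustration; the point is simply that any implementation forces someone, somewhere, to write the line that weighs one outcome against another.

    # A crude illustration of the trolley problem in code. All names and
    # numbers are hypothetical -- the point is that any implementation
    # forces an explicit value judgement on whoever writes it.

    def choose_manoeuvre(options):
        """Pick the option with the lowest 'harm score'.

        Each option is a (description, harm_score) pair. Whoever assigns
        the scores has already answered the ethical question.
        """
        return min(options, key=lambda option: option[1])

    # Two unavoidable outcomes, scored equally. Who sets these weights,
    # and on what basis?
    options = [
        ("swerve left: endanger elderly pedestrian", 1.0),
        ("swerve right: endanger infant", 1.0),
    ]

    # On a tie, min() simply returns the first entry -- an arbitrary
    # tie-break that is itself an ethical stance.
    print(choose_manoeuvre(options))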

A big issue with robot drivers is that they will kill people. It’s inevitable. Machines moving at speed in an environment that is by its very nature uncontrollable means that situations will occur where accidents are unavoidable. On the other hand, a reported 94% of road accidents are caused by human error, so if we remove people from driving, far fewer people will die. This is inarguable. But we won’t eradicate road deaths entirely, and when people are still killed, how will this be processed morally and, perhaps more saliently, legally?

I’ll leave such questions in the hands of insurers!

There is a very sinister aspect of AI that perhaps does invoke Arnie’s most iconic role – the mixing of artificial intelligence and weaponry. I would say that this is a philosophical minefield, but that metaphor suggests there is a safe path to be walked if we are careful. History suggests, though, that the development of killing machines and circumspection are not comfortable bedfellows. That’s why they call it the “arms race”.

The nation that wins the arms race has an undeniable advantage, which inevitably means that moral and ethical considerations are secondary. With the rise of drone technology, it doesn’t take the wildest of imaginations to see how devastating AI weapons could be. And even if the good guys (whoever they may be) develop the tech in a pre-emptive manner, how long will it be before the lure of money sees artificially intelligent killing machines in the hands of the bad guys (whoever they may be)? And when (if?) this ever happens, what kinds of wars will be fought?

There is plenty of speculation within AI ethics about “the singularity”, a hypothetical tipping point where artificial intelligence becomes smarter than human beings. As it stands, this remains in the realm of science fiction, but its predicted impact is often foreboding, perhaps apocalyptic.

I am not sure this is what we should be fearing. AI technology has the potential to be misused right now. With technology comes progress.

But at what cost?