

The Morality of Autonomy

In the battle of AI vs. human decision making, we can't win on speed.

Despite sceptics' views on autonomous AI, the inevitable shift towards purpose-driven, machine-learning technology is already here. Google Home devices now understand speech as well as, or better than, humans. Automated drones can fly more safely than the best manned aircraft. In time-critical, emergent scenarios, real-time decisions are more often executed successfully by machines than by trained professionals. Given this shift, there is an interesting moral dilemma that even the best of us struggle to resolve.

What is the value of a human life?

Tesla has shown that its self-driving cars are dramatically safer than anything driven by a human. There are hundreds of videos on YouTube where a Tesla's dash cam captures the car detecting anomalies in other drivers' behaviors and reacting instantly, saving the life of the Tesla driver and, on occasion, the driver of the other car. In all of these instances the incident happens so fast that a human driver without Tesla's safety AI would unavoidably have crashed, regardless of their skill level. Often the Tesla driver was complying with every road rule in the book and the near-miss was completely outside their control.


But in the extremely rare scenario where any course of action, no matter how extreme, results in the inevitable loss of life - whose life should a machine take?

First of all, this is not a situation where 'that's not a call for a robot to make' is a valid argument. A car that uses advanced AI to avoid accidents cannot also opt out of deciding when it should and shouldn't react to a potential accident - that's simply not how AI works. The car must make a 'decision', even if that decision is simply to keep driving, and it needs to know the variables to which it is reacting. The 'code' that determines what values are placed in those variables to make the decision is programmed by us - humans.
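To make that concrete, here is a minimal, purely illustrative sketch in Python of what 'the car must make a decision' means in practice. None of this is Tesla's actual code: the Outcome fields, the weights and the candidate actions are all invented for the example. The point is only that every candidate action - including 'just keep driving' - gets a score from human-written code, and some action always wins.

```python
# Illustrative sketch only -- not Tesla's code. The fields, weights and
# candidate actions below are invented to show where human judgement
# enters the loop.

from dataclasses import dataclass


@dataclass
class Outcome:
    expected_casualties: float  # predicted loss of life if this action is taken
    occupants_at_risk: int      # people inside the car who may be harmed
    breaks_road_rules: bool     # does the action itself violate the law?


def score(outcome: Outcome, weights: dict) -> float:
    """Lower is better. The weights are chosen by humans, not learned."""
    penalty = weights["casualty"] * outcome.expected_casualties
    penalty += weights["occupant"] * outcome.occupants_at_risk
    if outcome.breaks_road_rules:
        penalty += weights["illegal"]
    return penalty


def decide(candidates: dict, weights: dict) -> str:
    """Pick the lowest-penalty action. 'Keep driving' is just another candidate."""
    return min(candidates, key=lambda action: score(candidates[action], weights))


# Example: the numbers here are arbitrary, which is exactly the problem.
weights = {"casualty": 10.0, "occupant": 2.0, "illegal": 1.0}
candidates = {
    "keep_driving": Outcome(expected_casualties=1.0, occupants_at_risk=0, breaks_road_rules=False),
    "swerve_left":  Outcome(expected_casualties=0.2, occupants_at_risk=1, breaks_road_rules=True),
}
print(decide(candidates, weights))  # whichever action the human-chosen weights favour
```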
 

Eventually your life will be in the hands of a machine


So in a situation where, for example, a solo driver is following the law, and a bus driver makes an illegal maneuver that puts 30 kids in fatal danger, what should the Tesla do? AI is at a stage where it can determine the size, distance and age of people very easily. So, in this situation, if the Tesla absolutely cannot avoid the accident, and there is certain loss of life given the forces of both the bus and the car involved, does the Tesla save its law-abiding owner's life and kill 30 kids? Or does it run into the nearest street lamp, knowing it will kill the driver but save the lives of the 30 kids, despite them being in the hands of a driver acting illegally?


Or how about a decision between a single driver and some poor child who runs onto the road for his favorite toy without knowing any better? A family of 6 versus the 4 people in the Tesla? Elderly passengers vs. young adults?


At some point here, there is a moral decision to be made that ultimately means programmers are 'playing god' by coding in the parameters that decide who gets to live and who gets to die. It's easy to think that 'whatever causes the least loss of life' is the correct answer - but that's not what Tesla owners will want when they purchase a car they believe will value their own life above all else. On the other hand, it's pretty scary to think that a Tesla would choose its owner over everyone else when a bus-load of people or a random crowd is involved.


Personally, I'm of the view that whoever is abiding by standards and the law should ultimately be the safer of the two parties, unless there is a substantial difference in the number of people. For example, if two drunk idiots walking down the middle of the street decide to 'jump out' in front of a Tesla for fun, then despite there being two of them I would still advocate for the driver to live. If, however, there are two people in the Tesla speeding stupidly down a narrow suburban road, and somebody happens to be crossing the road around a corner, then I would err on the side of the driver and passenger dying. But if there are 5 people in the Tesla and the decision is between one person - even if they're not doing anything wrong - and the 5 people in the car, then I would lean towards the car being the higher priority.
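For what it's worth, here is how that rule of thumb might look if someone actually had to write it down - a hedged sketch in Python, with an invented 'substantial difference' threshold and invented 'at fault' inputs, only to show that every one of these judgments ends up as an explicit branch somebody has to code:

```python
# Illustrative only: a literal encoding of the rule of thumb above.
# The threshold and the "at fault" inputs are invented for this sketch.

SUBSTANTIAL_MARGIN = 3  # how many extra people count as a "substantial difference"


def who_is_protected(occupants: int, occupants_at_fault: bool,
                     others: int, others_at_fault: bool) -> str:
    """Return which party the car prioritises under the rule of thumb above."""
    # A substantially larger group overrides everything else.
    if occupants - others >= SUBSTANTIAL_MARGIN:
        return "occupants"   # e.g. 5 in the car vs. 1 blameless pedestrian
    if others - occupants >= SUBSTANTIAL_MARGIN:
        return "others"
    # Otherwise, protect whoever is abiding by the law.
    if others_at_fault and not occupants_at_fault:
        return "occupants"   # e.g. two drunks jumping out in front of the car
    if occupants_at_fault and not others_at_fault:
        return "others"      # e.g. the Tesla speeding down a narrow street
    return "occupants"       # default when neither, or both, are at fault
```

Writing it out like this makes the earlier point obvious: the branches are easy to type, but every constant and every default is someone playing god.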


It's a tough choice, and there's no 'right' answer in my opinion - but it's a very real complication for AI and machine-learning engineers right now. It's important not to lose sight of the fact that these accidents would occur hundreds of times less frequently than they do with a human driver, and you're still safer in a Tesla than in a normal car. This only arises in freak-of-circumstance scenarios that would be unavoidable no matter how 'smart' the vehicle is - even if it reacts far faster than a human would.

This is an easy choice, but others aren't so straightforward.


What do you think the value system should look like? Law-abiding vs. number of people vs. owner of the car? Let me know in the comments below!