Artificial intelligence (AI) is already making decisions in the fields of business, health care and manufacturing. But AI algorithms generally still get help from people who apply checks and make the final call. The issue of trust escalates, however, when AI systems have to make independent decisions that could mean life or death for humans.
Unlike humans, robots lack a moral conscience and follow the “ethics” programmed into them. At the same time, human morality is highly variable. The “right” thing to do in any situation will depend on the person. For machines to help us to their full potential, humans need to make sure they behave ethically. So the question becomes: how do the ethics of AI developers and engineers influence the decisions made by AI?
In a future with fully autonomous self-driving cars, if everything works as intended, the morning commute will be an opportunity to prepare for the day’s meetings, catch up on news, or sit back and relax. But things can go wrong. Imagine the car approaches a traffic light, the brakes suddenly fail, and the computer has to make a split-second decision. It can swerve into a nearby pole, killing the passenger, or keep going and kill the pedestrian ahead.
The computer controlling the car will only have access to limited information collected through car sensors and will have to make a decision based on this. Autonomous cars will generally provide safer driving, but accidents will be inevitable—especially in the foreseeable future when these cars will be sharing the roads with human drivers and other road users.
One major car company does not yet produce fully autonomous cars, although it plans to. In collision situations, its cars neither activate nor deactivate the Automatic Emergency Braking (AEB) system automatically while a human driver is in control. In other words, the driver’s actions are not overridden, even if the driver is the one causing the collision. Instead, if the car detects a potential collision, it alerts the driver to take action.
In “autopilot” mode, however, the car should automatically brake for pedestrians. Some argue that if the car can prevent a collision, it has a moral obligation to override the driver’s actions in every scenario. But would we want an autonomous car to make this decision?
A car’s computer could evaluate the relative “value” of the passenger in its car and of the pedestrian. If its decision considered this value, technically it would just be making a cost-benefit analysis. There are already technologies being developed that could allow for this to happen.
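The cost-benefit framing above can be sketched as expected-harm minimisation: each possible action is scored by the probability and severity of the harm it causes, and the lowest-scoring action wins. This is a hypothetical illustration of the logic, with invented numbers; it is not how any real vehicle system works.

```python
# Hypothetical sketch of the cost-benefit analysis described above.
# All probabilities and "harm" values are invented placeholders.

def choose_action(outcomes):
    """Pick the action with the lowest expected harm.

    outcomes: dict mapping an action name to a list of
    (probability, harm) pairs for that action's possible results.
    """
    def expected_harm(pairs):
        return sum(p * harm for p, harm in pairs)
    return min(outcomes, key=lambda action: expected_harm(outcomes[action]))

# Example: the two options from the brake-failure scenario in the text.
scenario = {
    "swerve":   [(0.90, 1.0)],  # likely fatal to the passenger
    "continue": [(0.95, 1.0)],  # very likely fatal to the pedestrian
}
print(choose_action(scenario))  # -> swerve
```

The unsettling part, which the article goes on to discuss, is what goes into the harm numbers: if they encode the relative “value” of the people involved, the arithmetic stays simple while the ethics do not.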
Through the Moral Machine experiment, researchers posed various self-driving car scenarios that compelled participants to decide whether to kill a homeless pedestrian or an executive pedestrian. Results revealed that participants’ choices depended on the level of economic inequality in their country: those from more unequal countries were more likely to sacrifice the homeless pedestrian.
There have been many philosophical debates regarding the ethical decisions AI will have to make. The classic example is the trolley problem. People often struggle to make decisions that could have a life-changing outcome. When evaluating how people react to such situations, one study reported that choices vary with a range of factors, including the respondent’s age, gender and culture.
AI is neither good nor evil. Its effects on people will depend on the ethics of its developers. So to make the most of it, humans need to reach a consensus on what we consider “ethical.” While private companies, public organisations and research institutions have their own guidelines for ethical AI, the United Nations has recommended developing what it calls “a comprehensive global standard-setting instrument” to provide a global ethical AI framework and ensure human rights are protected.