Philosopher: AI decision-making can take the weight of moral responsibility off our shoulders
Irish philosopher and researcher John Danaher has just released the 100th episode of his popular podcast Philosophical Disquisitions. The podcast is about ethics and artificial intelligence and carries the enticing subtitle Things hid and barr'd from common sense. That is exactly why we were interested in talking to him: he investigates questions that are anything but straightforward, and he believes that part of our ethical scepticism towards artificial intelligence stems from instincts, stereotypes, and unrealistic expectations. Before we get to those points, however, we need to identify what he sees as the technology's most important ethical issues.
Three main ethical questions about AI
John Danaher explains that three main areas keep emerging. The first concerns automated decision-making, where questions about the allocation of responsibility arise: for example, if a self-driving car crashes, or if ChatGPT is used to write an essay. He points out that these include both questions of responsibility for damages and questions of responsibility for the technology's good effects.
The second area comes down to fairness and bias. He explains that we have to consider whether bias in the dataset causes the system to make decisions that disadvantage particular groups. A well-known example is the algorithms that have given worse parole terms to black convicts than to white convicts. The third area concerns the transparency and explainability of artificial intelligence: can its decisions be explained, or is the system inaccessible to human understanding?
Three different answers
We decided to delve into the first problem area, which also intersects with the other two. The philosopher, who has a background in law, explains that the core of the problem is the so-called responsibility gap.
He says that holding someone responsible in the Aristotelian sense requires that they have control over the outcome of the decision and know what they are doing. But when artificial intelligence makes decisions, no one meets those two conditions: we do not have control, and we do not know what the system is doing.
This causes problems with both moral and legal responsibility, and there are, broadly speaking, three answers to that, according to John Danaher.
The first is that people working on ethics and AI are exaggerating the problem: the systems are not nearly as autonomous as the philosophers who formulate the problem believe. The second is that the rules we already have can cover autonomous systems just fine, so new legal and ethical principles are redundant. The third approach is more radical and widely seen as controversial: according to it, the responsibility gaps created by AI can actually be beneficial. The explanation is that Western societies overprioritize moral responsibility, and that the way we practice it can be dysfunctional and harmful, says John Danaher, who is well aware that this point requires elaboration, so he continues.
Problems can turn into an advantage
John Danaher says that, in reality, we have very limited control over many of the decisions we make in life, especially when we encounter conflicting moral considerations that are difficult to weigh against each other. In those situations, there can be great benefits in outsourcing moral responsibility to machines, because doing so could reduce our natural tendency to blame and punish people and thus lift the psychological burden of certain types of decisions, he elaborates.
The responsibility gap, the apparent ethical and legal vacuum that we normally consider a problem, can therefore become an advantage that makes life easier for us. He emphasises that this does not apply to all types of decisions, but it does apply to situations in which we face ethical dilemmas with competing and equally balanced interests and no obvious right answer.
John Danaher says that we saw an example of this during the coronavirus pandemic, when a shortage of ventilators forced doctors to choose who should receive the ones available. Who should be left to die: a 60-year-old patient with COPD or an otherwise healthy 40-year-old father of two? John Danaher poses the question to emphasise the unfairness of putting a medical professional who is already under stress in such a situation.
Tragic dilemmas
The example is apt, because it is precisely when one ends up in what John Danaher calls tragic dilemmas that technology can be beneficial.
He says that it is by no means obvious that it is better for people to make these kinds of decisions, and feel guilty about them, than for machines to make them. And although it may seem like a controversial solution, he points out that studies indicate that many citizens would be in favour of the approach.
If a machine makes the decision, people seem less inclined to blame someone and hold them legally responsible. This means that, for example, when resources are allocated in the healthcare system, doctors can stay out of the blame game that such decisions often entail, John Danaher explains, but he immediately adds a caveat:
It is not about removing all responsibility from people. After all, it is people who decide to use the system in the first place, so one can always question whether that was a good idea.
We must change our moral culture
John Danaher acknowledges that this will not be easy to implement in practice, but he believes that the main obstacle is a set of habits and instincts that we can learn to rein in.
He explains that we are strongly inclined to look for people we can hold responsible and blame, morally or legally. It is a deep-seated instinct in many cultures, and our legal systems often have a built-in element of retribution. It will therefore require a shift in our moral culture, and he does not claim that this will be easy to achieve or accepted without further ado. Nevertheless, he believes that it will make things better and that it can be applied in many places.
He further explains that the approach can be useful wherever scarce resources are distributed, including across different public bodies, education programmes, and research projects. There are many examples of resources being allocated among conflicting but evenly balanced interests, John Danaher says.
Be careful!
John Danaher emphasises that the solution requires safeguards against bias, and he recommends great caution in choosing the areas where it is used.
He says that we have to be very careful about the situations in which we use it. The general instinct is to say that outsourcing responsibility is bad, but he wants to push back against that, because in many contexts the possibility of singling out people to blame imposes an unreasonable psychological burden.
When asked whether we are not diving straight into even bigger problems around bias, transparency, and explainability, he pushes back again, albeit for other reasons.
We do not always understand ourselves
John Danaher says that algorithmic systems are often more transparent than human decisions, because we know more of their parameters and have better insight into how they work. Our own decisions are influenced by many subconscious factors and by structural forces we do not fully understand, and when we explain them, we often fabricate the explanations. Psychologists and neuroscientists are still working to unravel the mysteries of the human brain, he explains.
More generally, John Danaher believes that we tend to overlook our own mistakes and shortcomings when comparing human decision-making with algorithmic decision-making, and that the same occasionally applies when we talk about algorithmic bias, that is, prejudices that are passed on to algorithms. Humans are also biased when making decisions, and it is not a given that their bias is less significant than the algorithm's.
He ends the interview with an appeal for composure and careful consideration. Many big and small decisions in everyday life are already left to algorithms, and his point is not to endorse that as a general trend; on the contrary, he is very sceptical in quite a few areas. But if we want to get the best out of artificial intelligence, we have to keep our eye on the ball and evaluate the technology on the right basis.
