How important is empathy?

Ard: If we compare ourselves to animals, do animals have moral sentiments?

MC: That is a long-debated question. My read of the literature is that the building blocks of moral sentiments are present in animals, especially in mammals that care for their young and live in social groups. You can see evidence for the building blocks of things like empathy even in rats. There have been experiments done recently showing that rats will work to free a trapped cage-mate, particularly if the trapped cage-mate is vocalising signs of distress. And there were experiments done on monkeys in the 1960s showing that if you give a monkey an opportunity to get some food by delivering shocks to its cage-mate, it will refuse to do so, again and again.

Ard: As opposed to David, who is quite happy to… Even the monkeys don’t do it.

David: There’s always exceptions!

MC: And researchers like Frans de Waal would argue that there are profound examples of prosocial behaviour and the roots of empathy in non-humans.

David: Do you find the work of people like de Waal… Do you think that he’s right?

MC: Yes, I think there’s a lot of evidence for the roots of moral sentiments in animals.

David: Could you be a moral person if you were perfectly able to think about these deep, philosophical, moral… this school of moral thought or that school of moral thought, but you had no empathy? In other words, you were lacking these humble instincts. Would you be able to be a moral person, or would you be a monster?

MC: Well, so are you describing a psychopath?

David: I don't know.

MC: Maybe.

David: You put the label on it. I’m just describing this…

MC: Right, I think moral behaviour is motivated, and to have the motivation you need the sentiments.

David: You need to care about someone.

MC: You need to care. So there was a paper published a few years ago whose title was: ‘Psychopaths Know Right From Wrong, They Just Don’t Care’. So…

David: And is that true?

MC: Well this area is still controversial, because it really depends on the way that you ask the question with these different kinds of moral dilemmas. Some studies have shown that psychopaths are indistinguishable from healthy people on certain kinds of moral dilemmas, but other studies have shown differences, so it really depends on the way that you ask these types of questions.

But it’s certainly the case, just anecdotally, that if you ask a psychopath, a serial killer, ‘Do you know that what you did was wrong?’ they’ll say, ‘Yeah. Yeah, I realise it was against the law.’ You know, they can apply moral principles to, sort of, understand; they can tell you what you want to hear, but they don’t have that feeling.

David: It doesn’t mean anything to them.

MC: Right. Walter Sinnott-Armstrong – a well-known philosopher who works on psychopathy – has made an analogy which I think is really great. You [to Ard] can talk about physics, and you’re a physicist, so take E=mc². I know that E=mc² is Einstein’s famous equation – I can tell you that E=mc² – but I don’t actually understand why the c is squared, for example, whereas you have an understanding, based on your knowledge of physics, that is a much richer understanding of that equation. So on the surface I can say E=mc²: I know that it is a truth, and you can say it at the same time, but we have a very different understanding of what it means.
Similarly, a psychopath can say, ‘Killing is wrong’, and I can say, ‘Killing is wrong’, but our understanding of that statement is very different.

Ard: That’s really interesting. I think if you say E=mc² to me, I feel something because I know what it… It kind of has a depth to it. Whereas in areas that are outside my own field, I might know that something is true, but it doesn’t have the same feeling for me, because I don’t… I just know it because people told me it’s true, and it’s probably true.

MC: Yeah.

Ard: So psychopaths are basically people who understand the rules but don’t have the sentiments to make them follow the rules?

MC: Exactly.

Ard: It’s interesting – there are three things: there are my sentiments, which make me want to do certain things and not do other things; there’s my judgement, which tells me how I should behave, in this way or that; and then there’s the question of whether I’ll actually follow them. Right?

MC: Yeah.

Ard: So maybe my sentiments say, ‘Be nice to David’, and my thoughts are it’s a good thing to be nice to David. Maybe I just don’t care and I’ll just be mean! Probably a psychopath!

David: So, it’s not a matter of trying to move away from our moral sentiments, as if they were somehow lowly and somewhat dangerous, and just become purely rational about it? It sounds like you’re saying, ‘Look, we’ve got these things… we need to have a more thoughtful relationship between our instincts and our ideas.’

MC: Yeah, and I don’t think anyone is suggesting that we do away with moral sentiments, not in the least.

David: No, but we used to, though, didn’t we? There was that old image of, sort of, our base, fleshy, animal nature, which had its own drives and then there was the noble, rational mind on top, struggling to control it.

MC: Mm... yeah, but I think that’s safely been put to bed by this point. I mean, I think you can see this argument progressing in discussions about artificial intelligence and how we should think about building super-intelligent agents that might, one day, have a lot of impact on the course of humanity.
One of the central issues in AI research right now is how we load moral values into an AI: it’s called the value-loading problem. How do we build an artificial intelligence that actually cares about us as human beings and cares about our welfare? So I think the fact that moral sentiments are so central to this endeavour of building artificial intelligences shows that the scientific community really is giving these sentiments a starring role.

Ard: That’s a very interesting point; that’s a nice way of thinking about it. So if I were to make a computer that was incredibly intelligent – so intelligent that it could understand all these different philosophies, including different moral philosophies – what you’re saying is that the worry is that’s not going to be enough. We have to give that computer some kind of moral sentiments as well.

David: Otherwise it would be a clever psychopath.

Ard: Yeah, exactly, the computer would be a psychopath. That’s a kind of a scary thought, actually.

MC: It is.