How do we decide right from wrong?
Ard: So, Molly, can you tell us something about the difference between moral instincts and moral judgements? Are they the same thing? Do we need moral instincts to be moral people?
MC: I think of moral sentiments as emotions that motivate moral behaviour towards other people. So those can be benevolent moral sentiments: so feeling concern and care for others – feeling averse to harming others. There are also malevolent moral sentiments: so feeling like you want to take revenge on someone who has harmed you in some way. And I think of moral sentiments as really motivating decision-making and behaviours, and also judgements – so outputs. And judgements are just one kind of output of this process where you think about a situation, a scenario, and you make a judgement of right or wrong.
Ard: So what you’re saying is, moral judgements are, more or less, when we’re thinking about what’s happening, whereas moral sentiments are when we just kind of feel what’s happening, and in a lot of situations we don’t have time to think about what’s happening: we just react.
MC: Right. So, you might have one person who says, ‘Well, moral judgements are driven by emotions’, and you have another camp who says, ‘No, moral judgements are actually really a prolonged, deliberative process’. And I think they can be both, and the interesting questions, I think, are when do we deploy these more conscious, deliberative processes in moral judgements and decisions? And when do we employ more instinct and fast emotional processes?
Ard: And when do we?
MC: That’s a really good question! We’re working on it.
David: One of the things which Frans de Waal said… He often felt that people would do something and they’d made a moral decision, but then after the fact, they would come up with a great post-hoc justification for why they did it. And he said he often felt that the story which the person told afterwards was just that: it was a story and didn’t actually match up.
MC: They called this ‘moral dumbfounding’: this idea that there’s an emotional reaction that drives the judgement, and any sort of reasoning that takes place is sort of a post-hoc rationalisation. And that’s a very popular account of how moral judgement works, and I think that there is solid evidence from those experiments that that explains how moral judgement works in certain kinds of cases.
But I think that, broadly, there are other kinds of moral decisions that people make where they are factoring in both moral norms and reasons, as well as the emotions at the same time. And I think the field is generally moving away from this sort of false dichotomy – between emotion on the one hand and reason on the other hand – and starting to think of moral decision-making more like the way we think of other kinds of decision-making, which is an integration of different sources of value.
Ard: Are there some illustrations that can help us understand these moral sentiments and moral reasoning – how that works?
MC: So, Josh Greene, who’s a psychologist at Harvard, likes to make the analogy of a camera. So, with a camera you have the automatic mode – point and shoot – sort of pre-fixed settings that work in a variety of situations, and then there’s manual mode, which you can switch into if you want to have really tight control over how you’re going to take the photo.
He likens the moral brain to this kind of a camera where there’s an automatic setting, where moral intuitions can guide our moral judgements and decisions. And then there’s also manual mode, where we can switch into this more deliberative, reasoning mode. And there’s a lot of evidence supporting this kind of a distinction. I like to think about it in terms of different decision-making systems in the brain: so we have a very old system that you could think about as an instinctual system where we approach rewards, we avoid punishments, we have these very innate drives towards good stuff and away from bad stuff, and this can infuse a lot of our moral judgements.
So, in the case where, for example, people are asked, ‘Is it appropriate to kill one person in order to save many others?’ our gut instinct might be, ‘Oh, no. I don’t want to do that because killing is wrong.’ And that’s a sort of automatic reaction that we have to that idea.
So, there’s another brain system – which I call the ‘goal-directed system’, or some people call it the ‘model-based system’ – that is designed to prospect into the future, to represent the state of the world and the link between the actions that we take and the consequences that ensue. And it’s been shown, for example, that when people are under stress, they rely less on this goal-directed system. And equally, it’s been shown that when people are under stress, they’re more likely to make these more emotional moral judgements.
So, it seems like people shift between different modes of thinking. And just like different modes of thinking influence the way people take risks, or decide between a small reward now and a large reward later, they might make different moral judgements and decisions depending on the state that they’re in.
Ard: I have a ten-month-old daughter and she has jet lag, so I have not slept very much the last few nights.
MC: Oh dear.
Ard: So that, probably, will make me behave in emotional ways. Is that what you’re saying? I’m more likely to behave in just an instinctual way than think about it properly?
Ard: That probably explains a few things.
Ard: When we behave in a moral way, is that because we reason ourselves towards that or is it because it's something that's just instinctive inside of us?
FdW: If we had to reason ourselves to it every time we did, it would be a pretty cumbersome system, no? Each time I have a choice between being kind or not kind, I would have to go through all the reasoning why. That would be a terrible system. So I think there's a lot of intuitive and impulsive behaviour, and that some people end up on the moral side and some people won't, and I think that the justification for our behaviour definitely comes afterwards. At that point we're going to use all sorts of reasons and rationales, and I think philosophy has gotten it a little bit backward, because it has focused on the justification part as if that's the motivation part, which of course it isn't.
Ard: So what is the motivation?
FdW: Well, there's lots of pro-social motivations that we have and that we share with other mammals and with other animals.
David: What do you mean by pro-social?
FdW: Pro-social? I would mean it's a bit more than altruistic: pro-social is sort of a motivational system. Altruistic in biology is often used in a very functional sense: I do something costly for myself that benefits you regardless of my motivation – so a bee who stings you, which is probably in an aggressive motivation, is defending the hive. We call that altruistic because the bee loses its life, and is giving its life for the hive. But we don't necessarily think that the bee has a pro-social motivation at that point. So pro-social usually refers more to the motivation part: why do I do these things intentionally. And we use that also now in the animal literature – we use that term.
David: So you think the motives, it's not to do with rationality, it's to do with this pro-social idea?
FdW: Yes, motives usually don't come from reason: reason comes later, I think. And so, yes, sometimes we sit down and take a decision, like you need to decide am I going to help my grandmother – yes or no – today? And so you may try to come to a rational decision given all the other circumstances, but most of the time I don't think we go through all these reasons, and we have just a certain motivation to do this or to do that.
Ard: Sometimes people think that the instincts we have are dangerous ones, where nature is red in tooth and claw – we're trying to beat up our enemies and win – and so we have to subjugate those instincts.
FdW: Yeah, that's a view of nature that I don't hold necessarily.
Ard: What would that view be?
FdW: Well to use nature only for the negative side of human nature: so when we're killing each other, we say, ‘We're acting like animals.’ And so all the nasty things that we do and the selfish things, I've called that ‘veneer theory’. It's like all the basic emotions of humans are bad, and then there's a little veneer of morality that we achieve, culturally or religiously or whatever, and how we achieve that. And so morality is just a little veneer over the bad human nature that we have.
I don't buy into that at all. I think humans have all these tendencies: we have good tendencies and bad ones, and they're all connected to our human nature and our primate nature. And you can recognise all of that in the chimpanzee as well. The chimpanzee can be very nasty and they can kill each other, and people have got obsessed by the killing that they do and said, ‘Well, chimpanzees are nasty animals.’ And so then when you say chimpanzees also have empathy and they care for each other, they're very surprised, because that's not consistent with what they think a chimpanzee is. But just like humans can kill each other and be very nasty, humans can also be extremely altruistic and kind to each other, and so we have that whole spectrum and many, many mammals have that whole spectrum.
Ard: So, this veneer theory, as you called it – which is kind of like this very thin layer of morality over this terribly dangerous animal nature – where do you think that came from historically?
FdW: Yeah, that's a very dangerous idea, because it basically says that deep down we are bad and with a lot of struggle we can be good, but as soon as something happens it disappears. It's a very pessimistic view of human nature. Huxley had that view. So Thomas Henry Huxley, who was a contemporary of Darwin, the big defender of Darwin, he didn't really believe in human nature being any good. Darwin was much more a believer in that, and Darwin even talked about sympathy in animals, and he didn't look at humans as automatons, like the way Huxley looked at it. So Huxley had this view that goodness cannot come from evolution – it's impossible – and Darwin never said that: he disagreed with him on that.
Ard: So in fact what you’re saying is the idea that underneath we're just animals and therefore selfish or bad is not a Darwinian idea?
FdW: No, it’s not. Darwin himself didn't think like that, and he also said, literally sometimes, that selfishness is really not what explains the behaviour of certain social animals. He felt they had a social instinct and morality was grounded in that social instinct, very similar to the views that I have, even though I have more precision because I'm talking about specific behaviours of animals. Darwin, at an intuitive level, had that insight also.
David: What was the significance of the Milgram experiments as perceived when they first happened, do you think?
MC: So the Milgram experiments were conducted shortly after WW2, and I think the sentiment at the time was trying to figure out how on earth did the Holocaust happen. How is it that human beings could allow this atrocious torture and horrible acts to happen to other human beings? How on earth could this happen? And so Milgram’s experiments were really going after the idea that people are very compelled to obey authority. And people are willing to do atrocious things when they are persuaded to do so by authority.
And that headline finding came from just one of many experiments that he conducted. I think he conducted dozens of experiments where he and his team tweaked different aspects of the setup of the experiment to try and figure out how could you get people to deliver these life-threatening shocks to the confederate in the study. And, of course, the one that got the most press was the one showing something like 60%, or a large proportion of people in the experiment, were willing to go all the way to the fatal level of shock. But they had to go to a lot of pains to get people to do that.
David: How do you mean?
MC: The majority of participants in that experiment voiced some protest at some point. They were deeply uncomfortable with the situation. They were sweating, heart racing, very distressed and asked to stop many times. And the experimenter would say things like, ‘You must go on. The experiment requires that you go on.’
The experimenter had to persuade the participants to carry on with the experiment. And so you can draw many different conclusions from this. One conclusion you can draw is that the majority of people are willing to shock a stranger to death.
David: And that was what was picked up at the time, wasn’t it?
MC: That was what was picked up at the time, and I think that reflects the public sentiment at the time in trying to understand what had happened in Germany.
David: What do you see in it now when you look at it?
MC: Well, now, given the work that we’ve done, what I see in it is the distress that people were feeling, and really found it quite aversive to harm this other person.
David: So, in other words, the experiment… It’s an illustration of how hard you have to work to overcome…
David: Was there empathy, do you think?
MC: Yeah. Yeah, exactly.
David: So it’s a measure of how much you have to do to overcome that natural empathy.
David: So what is it that you’ve been trying to focus on with the whole suite of experiments that you’ve done?
MC: Our work is really focused around a central question, which is how do we value the welfare of other people? And we can ask how we value, for example, harm to others. How much are we willing to pay to avoid harming others, and how does this compare to the way that we value harm to ourselves?
So what we’re able to do in our studies is to develop very precise, mechanistic accounts of how people actually make these decisions and the values that people place on the other person’s welfare and what they’re willing to sacrifice to preserve that welfare.
So what we’ve shown is that if you compare how much money people are willing to pay to prevent shocks to another person with the amount of money people are willing to pay to prevent shocks to themselves, most people that we’ve tested will give out more money to avoid shocking a stranger than to avoid shocking themselves.
So we have to pay people more to deliver shocks to the other guy than to deliver shocks to themselves.
David: That’s reassuring.
MC: It is reassuring, yeah.
Ard: So Bertje showed what I would consider real loving behaviour: he would hug you and be very affectionate, very loving. So they have a loving side.
JG: Absolutely: love, compassion, true altruism. I mean, there are wonderful stories of chimpanzee altruism. Like a young male at Gombe, a twelve-year-old adolescent called Spindle, who adopted a motherless three-year-old, Mel, who had no older brother or sister who would normally look after an orphan.
But little Mel didn’t have an older brother or sister, and we thought he’d die. Three, just beginning to be able to survive without his mother’s milk, just. But we didn’t think he’d make it, and then Spindle waited for him and let him travel on his back, even clinging below if it was cold or Mel was frightened. And then Mel would creep up to his nest at night, and Spindle was lying in the nest and Mel was always a little bit apprehensive. And he’d, ‘Ooh, Ooh,’ and Spindle would reach out and draw him close.
David: And they weren’t related?
JG: Not at all.
David: So this was just genuine care and generosity?
JG: Yes. I think when you have the long-term family supportive bonds they have, which can last through a life of up to 70 years in captivity, then all of that behaviour – of nurturing and caring for another – is kind of, now, inbuilt, so you can extend it, like we do, out beyond the immediate family.
David: Do you feel that because we are related, chimps and us, that when you see that we have the ability for empathy, and they do that… Does that then say to you that this is something that is in our nature?
JG: Yes, I think so. You know, Louis Leakey sent me just to learn about the chimps because he was digging up the remains of early humans, and a lot fossilises. You can tell a lot about what the creature was eating from the tooth wear, whether it’s upright or not from the bone, the muscle attachment, and so forth, but behaviour doesn’t fossilise.
So he believed in a common ancestor about six million years ago. Louis Leakey believed in a common ancestor: ape-like, human-like, maybe six million years ago, something like that, and he argued that if I would find behaviour that was similar, or maybe the same, in modern chimp and modern human, then possibly we had brought this along our separate evolutionary pathways from that common ancestor, and therefore he could then imagine his early humans behaving like that. That was his whole theory, and, of course, it turned out even better than he might have dreamed, all the different things I was seeing: kissing, embracing, holding hands, using tools, making tools – all of these things.
Ard: Yeah, it’s amazing.
David: Which does suggest that, somehow, you’ve got to find an evolutionary theory which says how this can have happened, because it has happened.
Ard: Here’s another thing I was wondering that really struck me when I was a child. So we had some pet goats that also ran around the area.
Ard: They were goats, actually quite cute little goats. And one day, one little goat was wandering close to him, and he jumped up and grabbed it, ran up into the tree and just wrung its neck and killed it. And then he started poking out its eyeballs, and it was just… It was cruel. It was mean. And the other goats were bleating and he just, kind of… It was actually cruel behaviour. And I remember looking at that and thinking, ‘That’s bad. That’s evil.’
And later, I’ve thought about it and thought, was he morally responsible for that kind of gratuitous killing? What do you think?
JG: I’ve personally decided that only humans are capable of true evil…
JG: …because we can deliberate and do it, knowing the harm we’re inflicting. For him it was just, sort of, curiosity: ‘what is this creature?’
Ard: So I’ll tell you another story of our chimpanzee. We had chickens as well, and one day there was a mother hen with little chicks, and he kind of sat there pretending to mind his own business until they got very close and he grabbed the chick. And, of course, he was playing with it, and his hands were so strong, he just killed it, and then he got bored because it wasn’t doing anything. And then he noticed that the mother hen was coming, very protectively, trying to get the chick. So what he did is, he would hold the little chick out like this and entice her to come. And then when she got close, he tried to grab her: bang! And so this poor mother hen was trying to get her chick back – her dead chick – and he was just teasing her with it. And I was shocked by that behaviour. I thought, that’s morally outrageous: you take someone’s young and you basically used it as a game.
David: And how old were you at that time?
Ard: Four… three or four – I was old enough to realise that that was a bad thing. I was morally outraged. I thought that was…
JG: I’m sure you were.
Ard: …a bad thing to do.
JG: And I’m sure in your upbringing there was enough that you’d been taught, that you would have that feeling.
Ard: Yeah. I just felt it was evil, it was wrong. But he just thought it was a funny game.
JG: Yeah, a funny game.
Ard: Did you see any of this in the wild as well? What we might call cruel behaviour.
JG: Yes. Well, they can… I mean, they have a very dark, aggressive, brutal side, just like us: their intercommunity conflicts; these gang attacks, horrendous... leaving the victim to die of the wounds inflicted.
Ard: Oh, wow!
JG: You know, descriptions of twisting around a leg. I mean, really awful, awful stuff.
Ard: While the animal is still alive?
JG: While the chimp is… This is another chimp…
JG: And this was one who they had known, because the community split, and so seven of the males who had split away and two of the females were savagely attacked like this once they had taken up part of the range that previously all had shared.
And it was the most horrifying thing. I mean, it’s bad enough when they’re attacking a stranger, but to attack somebody who you groomed with, and fed with, and travelled with, was horrifying.
David: Yes. So you saw that – your description of the little baby chimp – they were capable of tremendous empathy and, I don't know, maybe love, but then also capable of a much darker set of emotions as well.
JG: Yes, they share that with us. I used to think they were like us, but nicer, and then I realised that, just like us, they have this terrible, dark streak.
Ard: But you would say they’re not morally responsible for that behaviour, even…
JG: I don’t believe so. I don’t think chimpanzees, or probably any animal, is capable of torture, which I would define as premeditated, planned intention to inflict pain, mental or physical. That’s torture, and that’s evil.
Ard: Sometimes they show what we might call evil behaviour, but it’s not… We are much more evil?
JG: It’s cruel behaviour.
Ard: Cruel behaviour, sorry. Cruel behaviour.
JG: We have the true evil, which is the premeditation, the plan, the knowing, the completely understanding. We have greater capacity for understanding the effect of our actions, I believe.
Ard: In your work, you speak about certainty and uncertainty in moral decision-making. Do you want to explain a little bit about what you mean by that?
MC: Yeah, so this is a really new interest of mine, and I’m really excited about trying to understand how uncertainty plays into moral decisions that we make. Earlier, when you guys were in the lab, and David, you were making decisions about Ard and pain for him, when I asked you afterwards what you thought about those decisions, you sort of thought, ‘Oh, well, I thought he could take it.’ And then I asked you, ‘Well, if it had been a stranger, how would you have chosen?’ And what you said is very much in line with the way we’ve been thinking about it. You said, ‘Well, you know, I wouldn’t know who the person was. I wouldn’t know anything about them. What if it was an old lady?’ And this idea that at the end of the day we can’t get inside another person’s head, right? And so when we make a decision that’s going to affect someone else, there is an element of uncertainty that can never be satisfactorily resolved.
And I think that the more uncertainty there is, the more cautious we are when we’re dealing with other people because we don’t want to put someone in a bad way. And there’s a sense of risk associated with making an assumption about someone that could be wrong.
David: That rings so true. I mean, I think about it in my own child. I’ve been in the situation where there’s something… it’s a bit risky, and I’ll say to my own son – who, of course, I’m very related to – ‘You’ll be fine. Just go ahead.’ I would never dream of doing that to somebody else’s child because I don’t know how capable they are.
MC: Exactly. So we’re doing experiments now where we actually push around people’s sense of uncertainty about the other person and see how that affects their moral behaviour. And what seems to be the case is that people are more moral: they are concerned about other people more and they’re more averse to harming others when there’s more uncertainty.
David: That’s fantastic, because that runs completely counter to the old arguments about altruism where you’ll only actually care about the people you’re related to.
David: Those nearest and dearest to you. And this is running completely counter to it.
MC: Well it does set up some unusual predictions.
David: That’s really interesting.
MC: But back to what we saw in the lab, where you were quite unfriendly towards your friend here, who you know very well. But had it been somebody who you didn’t know at all, you suspected you would have been a lot nicer.
Ard: I hope you would have been nicer! But I think it’s interesting because we’ve been toying a lot about these ideas about certainty and uncertainty, and I think the general sense is that when people become too certain that they’re right, they often end up doing things that are harmful to others in the name of whatever their certainty is. And certainty can be lots of different things. And so we’d be nervous about that certainty. On the other hand, we don’t want to descend into kind of fluffy, who knows what, right?
David: I’m the fluffy one.
Ard: The fluffy one. But I think there’s… I think what you’re saying is a really good point: that we’re uncertain about other people because we lack certain knowledge about them.
Ard: And so what we do is we err on the side of caution.
Ard: And that’s a very wise thing to do.