Molly Crockett Full Interview Transcript

Reason v emotion

Ard: So, Molly, can you tell us something about the difference between moral instincts and moral judgements? Are they the same thing? Do we need moral instincts to be moral people?

MC: I think of moral sentiments as emotions that motivate moral behaviour towards other people. So those can be benevolent moral sentiments: so feeling concern and care for others – feeling averse to harming others. There are also malevolent moral sentiments: so feeling like you want to take revenge on someone who has harmed you in some way. And I think of moral sentiments as really motivating decision-making and behaviours, and also judgements – so outputs. And judgements are just one kind of output of this process where you think about a situation, a scenario, and you make a judgement of right or wrong.

Ard: So what you’re saying is, moral judgements are, more or less, when we’re thinking about what’s happening, whereas moral sentiments are when we just kind of feel what’s happening, and in a lot of situations we don’t have time to think about what’s happening: we just react.

MC: Right. So, you might have one camp who says, ‘Well, moral judgements are driven by emotions’, and another camp who says, ‘No, moral judgements are really a prolonged, deliberative process’. And I think they can be both, and the interesting questions, I think, are: when do we deploy these more conscious, deliberative processes in moral judgements and decisions? And when do we rely on faster, more instinctive emotional processes?

Ard: And when do we?

MC: That’s a really good question! We’re working on it.

David: One of the things which Frans de Waal said… He often felt that people would do something – they’d made a moral decision – but then, after the fact, they would come up with a great post-hoc justification for why they did it. And he said he often felt that the story the person told afterwards was just that: a story, and it didn’t actually match up.

MC: They called this ‘moral dumbfounding’: this idea that there’s an emotional reaction that drives the judgement, and any sort of reasoning that takes place is sort of a post-hoc rationalisation. And that’s a very popular account of how moral judgement works, and I think that there is solid evidence from those experiments that that explains how moral judgement works in certain kinds of cases.

But I think that, broadly, there are other kinds of moral decisions that people make where they are factoring in both moral norms and reasons, as well as the emotions at the same time. And I think the field is generally moving away from this sort of false dichotomy – between emotion on the one hand and reason on the other hand – and starting to think of moral decision-making more like the way we think of other kinds of decision-making, which is an integration of different sources of value.

Ard: Are there some illustrations that can help us understand how this works – moral sentiments and moral reasoning?

MC: So, Josh Greene, who’s a psychologist at Harvard, likes to make the analogy of a camera. So, with a camera you have the automatic mode – point and shoot – sort of pre-fixed settings that work in a variety of situations, and then there’s manual mode, which you can switch into if you want to have really tight control over how you’re going to take the photo.

He likens the moral brain to this kind of a camera where there’s an automatic setting, where moral intuitions can guide our moral judgements and decisions. And then there’s also manual mode, where we can switch into this more deliberative, reasoning mode. And there’s a lot of evidence supporting this kind of a distinction. I like to think about it in terms of different decision-making systems in the brain: so we have a very old system that you could think about as an instinctual system where we approach rewards, we avoid punishments, we have these very innate drives towards good stuff and away from bad stuff, and this can infuse a lot of our moral judgements.
So, in the case where, for example, people are asked, ‘Is it appropriate to kill one person in order to save many others?’ our gut instinct might be, ‘Oh, no. I don’t want to do that because killing is wrong.’ And that’s a sort of automatic reaction that we have to that idea.

So, there’s another brain system – which I call the ‘goal-directed system’, or some people call it the ‘model-based system’ – that is designed to prospect into the future, to represent the state of the world and the link between the actions that we take and the consequences that ensue. And it’s been shown, for example, that when people are under stress, they rely less on this goal-directed system. And equally, it’s been shown that when people are under stress, they’re more likely to make these more emotional moral judgements.

So, it seems like people shift between different modes of thinking. And just like different modes of thinking influence the way people take risks, or decide between a small reward now and a large reward later, they might make different moral judgements and decisions depending on the state that they’re in.

Ard: I have a ten-month-old daughter and she has jet lag, so I have not slept very much the last few nights.

MC: Oh dear.

Ard: So that, probably, will make me behave in emotional ways. Is that what you’re saying? I’m more likely to behave in just an instinctual way than think about it properly?

MC: Yeah.

Ard: That probably explains a few things.


5:59 – Morality and the brain

Ard: And so, I’m just thinking about myself in my currently sleep-deprived state. My judgement is probably a little impaired, and maybe my sentiments are impaired, but my ability to make decisions is also a little impaired, so I’m more likely to make bad decisions because I just don’t take the time to think about them.

MC: I think most people probably think that their moral sentiments, their moral values, are set in stone: that they’re very difficult to change. They feel central to who we are. But in our work we’ve shown that we can actually shift around people’s moral decisions and their moral judgements by giving them a pill, like an antidepressant drug, or a drug that boosts dopamine in the brain, and this, I think, is evidence that our values really do shift around to the extent that our brain chemistry is shifting around.

We’ve shown, for example, that a single dose of an antidepressant drug nearly doubles the amount of money people are willing to pay to avoid shocking someone else. And it makes people less willing to say it’s morally acceptable to harm one person in order to save many others.

So I think it’s really interesting to think about how brain chemistry can influence our moral values. It’s certainly evidence that they’re not set in stone, which I think is really encouraging, because it suggests that these intractable conflicts that involve a disagreement in moral values could potentially be resolved.

And it’s interesting to think about whether, one day, there actually might be medications or pills that could actually change people’s moral behaviour. I don’t think we’re there yet. I think it’s a long way before we would have the technology to target moral sentiments in this specific way, and one reason for that is it’s so difficult to define what is morality, which I’m sure you guys are very sympathetic with. But the fact that we’ve shown different chemicals in the brain do influence moral decisions is preliminary evidence that this could, maybe, be feasible.

David: And worrying.

MC: And worrying. But I think there’s no need to worry about an off-the-shelf morality pill, because that could never exist. I think morality is way too complex to be delivered in pill form.

8:33 – How important is empathy?

Ard: If we compare ourselves to animals, do animals have moral sentiments?

MC: That is a long-debated question. My read of the literature is that the building blocks of moral sentiments are present in animals, especially in mammals who care for their young and live in social groups. You can see evidence for the building blocks of things like empathy even in rats. There have been experiments done recently showing that rats will work to free a trapped cage-mate, particularly if the trapped cage-mate is vocalising signs of distress. There were experiments done on monkeys in the 1960s showing that if you give a monkey an opportunity to get some food by delivering shocks to his cage-mate, he will refuse to do that many times.

Ard: As opposed to David, who is quite happy to… Even the monkeys don’t do it.

David: There’s always exceptions!

MC: And researchers like Frans de Waal would argue that there are profound examples of prosocial behaviour and the roots of empathy in non-humans.

David: Do you find the work of people like de Waal… Do you think that he’s right?

MC: Yes, I think there’s a lot of evidence for the roots of moral sentiments in animals.

David: Could you be a moral person if you were perfectly able to think about these deep, philosophical, moral… this school of moral thought or that school of moral thought, but you had no empathy? In other words, you were lacking these humble instincts. Would you be able to be a moral person, or would you be a monster?

MC: Well, so are you describing a psychopath?

David: I don't know.

MC: Maybe.

David: You put the label on it. I’m just describing this…

MC: Right, I think moral behaviour is motivated, and to have the motivation you need the sentiments.

David: You need to care about someone.

MC: You need to care. So a paper that was published a few years ago had the title: Psychopaths Know Right From Wrong, They Just Don’t Care. So…

David: And is that true?

MC: Well this area is still controversial, because it really depends on the way that you ask the question with these different kinds of moral dilemmas. Some studies have shown that psychopaths are indistinguishable from healthy people on certain kinds of moral dilemmas, but other studies have shown differences, so it really depends on the way that you ask these types of questions.

But it’s certainly the case, just anecdotally, that if you ask a psychopath, a serial killer, ‘Do you know that what you did was wrong?’ they’ll say, ‘Yeah. Yeah, I realise it was against the law.’ You know, they can apply moral principles – they can tell you what you want to hear – but they don’t have that feeling.

David: It doesn’t mean anything to them.

MC: Right. Walter Sinnott-Armstrong – a well-known philosopher who works on psychopaths – has made this analogy, which I think is really great. You [to Ard] are a physicist and can talk about physics, so take E=mc². I know that E=mc² is Einstein’s famous equation – I can tell you that E=mc² – but I don’t actually understand why the c is squared, for example, whereas you have a much richer understanding of that equation, based on your knowledge of physics. So on the surface I can say E=mc², and I know that it’s true, and you can say the same thing at the same time, but we have a very different understanding of what it means.
Similarly, a psychopath can say, ‘Killing is wrong’, and I can say, ‘Killing is wrong’, but our understanding of that statement is very different.

Ard: That’s really interesting. I think if you say E=mc² to me, I feel something because I know what it… It kind of has a depth to it. Whereas I think in areas that are outside my own field, I might know that something is true, but it doesn’t have the same feeling for me because I don’t… I just know that because people told me it’s true, and it’s probably true.

MC: Yeah.

Ard: So psychopaths are basically people who understand the rules but don’t have the sentiments to make them follow the rules?

MC: Exactly.

Ard: It’s interesting – there are three things: there are my sentiments that make me want to do certain things and not do other things; there’s my judgement that tells me how I should behave, in this way or that; and then there’s the question of whether I’ll actually follow them. Right?

MC: Yeah.

Ard: So maybe my sentiments say, ‘Be nice to David’, and my thoughts are it’s a good thing to be nice to David. Maybe I just don’t care and I’ll just be mean! Probably a psychopath!

David: So, it’s not a matter of trying to move away from our moral sentiments, as if they were somehow lowly and somewhat dangerous, and just become purely rational about it? It sounds like you’re saying, ‘Look, we’ve got these things… we need to have a more thoughtful relationship between our instincts and our ideas.’

MC: Yeah, and I don’t think anyone is suggesting that we do away with moral sentiments, not in the least.

David: No, but we used to, though, didn’t we? There was that old image of, sort of, our base, fleshy, animal nature, which had its own drives and then there was the noble, rational mind on top, struggling to control it.

MC: Mm... yeah, but I think that’s safely put to bed by this point. I mean, I think you can see this argument progressing in discussions about artificial intelligence and how we should think about building super-intelligent agents that might, one day, have a lot of impact over the course of humanity.
One of the central issues in AI research right now is how we load moral values into an AI – it’s called the value-loading problem. How do we build an artificial intelligence that actually cares about us as human beings and cares about our welfare? So I think the fact that moral sentiments are so central in this endeavour to build artificial intelligences shows that the scientific community really is giving these sentiments a starring role.

Ard: That’s a very interesting point; that’s a nice way of thinking about it. So if I were to make a computer that was incredibly intelligent, so intelligent that it could understand all these different philosophies, including different moral philosophies, what you’re saying is, the worry is, that’s not going to be enough. We have to give that computer some kind of moral sentiments as well.

David: Otherwise it would be a clever psychopath.

Ard: Yeah, exactly, the computer would be a psychopath. That’s a kind of a scary thought, actually.

MC: It is.

15:44 – Negative moral sentiments

Ard: So, Molly, are there bad moral sentiments, or moral sentiments that lead us astray?

MC: We can think about a set of moral sentiments that are to do with retribution and punishment. So when we think someone has harmed us, or harmed someone we care about, or they’ve violated a social rule, they’ve been unfair, they’ve desecrated something, people can get very angry. And this anger, this sort of retribution, can motivate a lot of very harmful behaviour. I think a lot of the intractable religious conflicts of today are reflections of this.

Ard: Yes.

MC: People have harmful moral sentiments in situations where they think that they’ve been done wrong, and this can fuel very destructive cycles of violence.

Ard: And do you think that’s linked to a very deep moral instinct that we have?

MC: Yes, I think so, and I think it’s very unproductive. The research shows that people’s motivation to punish is largely driven by a desire to harm, even though if you ask people after the fact, ‘Why are you punishing?’ they’ll say things like, ‘Well, we want to prevent this from happening in the future.’
But we’ve done experiments in the lab, actually looking to see whether people, when they punish, are motivated more by retribution or more by deterrence. The way that we’ve done this is to set up a situation where people are able to punish, but there’s no possible way that the punishment can deter a crime in the future.

So, we set up two different situations. In one case, Ard, you can punish David. It takes away money from him and David learns that you’ve done this, so he might be less likely to do that in the future because you’ve deterred his bad behaviour. But we’ve also given people the opportunity to punish in secret. So you can take money away from David. David doesn’t know that this has happened. So there’s no way that your punishment could deter him from behaving unfairly towards you in the future.

The question is, do people use punishment when there’s no deterrence possible? And the answer is a resounding yes. People punish almost as much when they’re just taking money away, but not sending the message that you’ve done something wrong, as they will when they are able to send this message that teaches a lesson. So what that shows is that a lot of punishment behaviour is really motivated by this dark motivation to harm. And it’s not concerned with the future. It’s not concerned with teaching a lesson and making things better off for everybody else by deterring this bad behaviour.

Ard: And how about something like racism? Because racism is all pervasive in the world. Is that because of a moral sentiment that we have that inclines us towards that?

MC: Josh Greene, in his recent book, argues that this is, sort of, the other side of the coin of the psychology that evolved to help us solve cooperation problems. So, there’s the Me Versus Us problem, and moral sentiments, he thinks, evolved to help us solve this tragedy of the commons and to help us sacrifice our own personal interests for the sake of the group.

But then that also leads to this conflict between us and them, because the same sentiments that seem to motivate us to help our kin are those that make us suspicious of people in the other group.
I think the overall lesson is that these sentiments evolved for certain purposes that, in the complexity of today’s world, we have to be very careful with how they’re deployed, because what can produce a very beneficial behaviour in one context can actually produce a very harmful behaviour in another context.


20:05 – Does true altruism exist?

David: When I was at university in the ‘80s, moral sentiment was… All the talk was about altruism and how we find it very difficult. We might be altruistic to someone we’re related to, but that’s about it.

MC: I think what’s the most fascinating aspect of human nature, to me, is the fact that we harbour these benevolent sentiments at the same time as being quite selfish and even malevolent in some situations.
One of my favourite quotes is from Blaise Pascal, who says, ‘Human beings are the glory and the scum of the universe,’ which is just so evocative, right?

And it’s absolutely true: we care a lot about fairness, for example. We’re really motivated to achieve fair outcomes and will even incur personal costs to ensure that outcomes are fair. This has been shown many times in the lab, and I’m sure you talked a lot about this with Martin [Nowak].

We like to cooperate. We like to cooperate for the sake of cooperating. We like to do good for the sake of doing good. And that’s why you can make people more generous just by reminding them about moral norms. You can also make people more generous by reminding them about their reputation.

So not only do we care about doing good, we know that other people care about that, and so then that gives us a selfish reason to do good. And one of the great debates in the altruism and cooperation literature over the past – well, as long as we’ve been studying it, really – is this question of does ‘true’ altruism exist? Are people willing to sacrifice themselves to help someone else, even maybe a stranger, for the sake of that other person? Or does it all come down to selfish value? And I don’t think that question has been fully resolved, yet. But there are certainly hints of evidence that we do genuinely value the welfare of others for its own sake, and not for the sake of what it can bring to us.

David: I’m amazed to hear you say, ‘We haven’t answered that question fully.’ I would have thought… When you say ‘we’, do you mean academics?

MC: Yes.

David: Because the rest of the world… There’s several thousand years of clear evidence, surely. I mean, I don’t see that it’s a question. Yes, people do. They’re willing to be completely unselfish.

MC: Of course they are, behaviourally, but the unresolved question, in my mind, is, when I help someone, am I helping them because I truly care about them? Or am I helping them because it feels good to me?

Ard: Yeah. Scratch an altruist, watch a hypocrite bleed, right?

David: But when you say, ‘It feels good to me…’

MC: It feels good, yeah.

David: Would that…? I mean, if you’re standing by the side of the road and a child who you’ve never met trips and is about to be hit by a bus, you reach in, and you risk life and limb, but you pull them back. You didn’t have time to think, ‘Now, am I related to them?’ or ‘Are there a lot of people watching?’

MC: Of course.

David: ‘Will people clap?’ You just did it.

MC: Yeah.

David: Now, is that being hypocritical or selfish, or is that just doing it? Is that the moral sentiment just making you do something good?

MC: Yeah, it’s making you do something good.

David: So there’s no question then, surely?

MC: So again the debate is not about whether people do good.

David: No.

MC: Even really, really profound heroic acts, like risking your life to save a stranger, people do do this, of course. There’s no argument about that. The question in academic research – which might be the kind of question which only academics who think about this all the time care about – is the question of, what is the motivation? And I think your point is a good one, which is to say that maybe a lot of these more selfish kinds of motivations actually take some time to compute.

There’s research by David Rand, who went into the narrative accounts of people who won the Carnegie Medal for heroism (so these are people who have risked their lives to save a stranger). He analysed the narratives of these experiences and looked for language indicating whether people thought about it or whether they just did it impulsively, and overwhelmingly the evidence shows that people are not deliberating in these kinds of situations, which is, I think, pretty good evidence for a pure, altruistic motive. But it’s not the smoking gun. I think we would need to be very clever in order to find that smoking gun.


25:13 – How do we value others?

David: What was the significance of the Milgram experiments as perceived when they first happened, do you think?

MC: So the Milgram experiments were conducted shortly after WW2, and I think the sentiment at the time was trying to figure out how on earth did the Holocaust happen. How is it that human beings could allow this atrocious torture and horrible acts to happen to other human beings? How on earth could this happen? And so Milgram’s experiments were really going after the idea that people are very compelled to obey authority. And people are willing to do atrocious things when they are persuaded to do so by authority.

And the headline result came from just one of many experiments that he conducted. I think he conducted dozens of experiments where he and his team tweaked different aspects of the setup to try and figure out how you could get people to deliver these life-threatening shocks to the confederate in the study. And, of course, the one that got the most press was the one showing that something like 60% – a large proportion of people in the experiment – were willing to go all the way to the fatal level of shock. But they had to go to great pains to get people to do that.

David: How do you mean?

MC: The majority of participants in that experiment voiced some protest at some point. They were deeply uncomfortable with the situation. They were sweating, heart racing, very distressed and asked to stop many times. And the experimenter would say things like, ‘You must go on. The experiment requires that you go on.’
The experimenter had to persuade the participants to carry on with the experiment. And so you can draw many different conclusions from this. One conclusion you can draw is that the majority of people are willing to shock a stranger to death.

David: And that was what was picked up at the time, wasn’t it?

MC: That was what was picked up at the time, and I think that reflects the public sentiment at the time in trying to understand what had happened in Germany.

David: What do you see in it now when you look at it?

MC: Well, now, given the work that we’ve done, what I see in it is the distress that people were feeling – how they really found it quite aversive to harm this other person.

David: So, in other words, the experiment… It’s an illustration of how hard you have to work to overcome…

MC: Exactly.

David: Was there empathy, do you think?

MC: Yeah. Yeah, exactly.

David: So it’s a measure of how much you have to do to overcome that natural empathy.

MC: Yeah.

David: So what is it that you’ve been trying to focus on with the whole suite of experiments that you’ve done?

MC: Our work is really focused around a central question, which is how do we value the welfare of other people? And we can ask how we value, for example, harm to others. How much are we willing to pay to avoid harming others, and how does this compare to the way that we value harm to ourselves?

So what we’re able to do in our studies is to develop very precise, mechanistic accounts of how people actually make these decisions and the values that people place on the other person’s welfare and what they’re willing to sacrifice to preserve that welfare.

So what we’ve shown is that if you compare how much money people are willing to pay to prevent shocks to another person with the amount they’re willing to pay to prevent shocks to themselves, most people we’ve tested will pay more money to avoid shocking a stranger than to avoid shocking themselves.
So we have to pay people more to deliver shocks to the other guy than to deliver shocks to themselves.

David: That’s reassuring.

MC: It is reassuring, yeah.

 

29:23 – Certainty and uncertainty

Ard: In your work, you speak about certainty and uncertainty in moral decision-making. Do you want to explain a little bit about what you mean by that?

MC: Yeah, so this is a really new interest of mine, and I’m really excited about trying to understand how uncertainty plays into moral decisions that we make. Earlier, when you guys were in the lab, and David, you were making decisions about Ard and pain for him, when I asked you afterwards what you thought about those decisions, you sort of thought, ‘Oh, well, I thought he could take it.’ And then I asked you, ‘Well, if it had been a stranger, how would you have chosen?’ And what you said is very much in line with the way we’ve been thinking about it. You said, ‘Well, you know, I wouldn’t know who the person was. I wouldn’t know anything about them. What if it was an old lady?’ And this idea that at the end of the day we can’t get inside another person’s head, right? And so when we make a decision that’s going to affect someone else, there is an element of uncertainty that can never be satisfactorily resolved.

And I think that the more uncertainty there is, the more cautious we are when we’re dealing with other people because we don’t want to put someone in a bad way. And there’s a sense of risk associated with making an assumption about someone that could be wrong.

David: That rings so true. I mean, I think about it with my own child. I’ve been in the situation where there’s something… it’s a bit risky, and I’ll say to my own son – who, of course, I’m very related to – ‘You’ll be fine. Just go ahead.’ I would never dream of doing that with somebody else’s child because I don’t know how capable they are.

MC: Exactly. So we’re doing experiments now where we actually push around people’s sense of uncertainty about the other person and see how that affects their moral behaviour. And what seems to be the case is that people are more moral: they are concerned about other people more and they’re more averse to harming others when there’s more uncertainty.

David: That’s fantastic, because that runs completely counter to the old arguments about altruism where you’ll only actually care about the people you’re related to.

MC: Exactly.

David: Those nearest and dearest to you. And this is running completely counter to it.

MC: Well it does set up some unusual predictions.

David: That’s really interesting.

MC: But back to what we saw in the lab, where you were quite unfriendly towards your friend here, who you know very well. But had it been somebody who you didn’t know at all, you suspected you would have been a lot nicer.

Ard: I hope you would have been nicer! But I think it’s interesting, because we’ve been toying a lot with these ideas about certainty and uncertainty, and I think the general sense is that when people become too certain that they’re right, they often end up doing things that are harmful to others in the name of whatever their certainty is. And certainty can be lots of different things. So we’d be nervous about that certainty. On the other hand, we don’t want to descend into kind of fluffy, who-knows-what, right?

David: I’m the fluffy one.

Ard: The fluffy one. But I think there’s… I think what you’re saying is a really good point: that we’re uncertain about other people because we lack certain knowledge about them.

MC: Yes.

Ard: And so what we do is we err on the side of caution.

MC: Exactly.

Ard: And that’s a very wise thing to do.