Do our evolved emotions, feelings and instincts provide the real basis for our morality? Could we be moral beings if we didn’t feel for others?
Ard: If we compare ourselves to animals, do animals have moral sentiments?
MC: That is a long-debated question. My read of the literature is that the building blocks of moral sentiments are present in animals, especially in mammals who care for their young and live in social groups. You can see evidence for the building blocks of things like empathy even in rats. There have been experiments done recently showing that rats will work to free a trapped cage-mate, particularly if the trapped cage-mate is vocalising signs of distress. And there were experiments done on monkeys in the 1960s showing that if you give a monkey an opportunity to get some food by delivering shocks to his cage-mate, he will refuse to do so many times.
Ard: As opposed to David, who is quite happy to… Even the monkeys don’t do it.
David: There’s always exceptions!
MC: And researchers like Frans de Waal would argue that there are profound examples of prosocial behaviour and the roots of empathy in non-humans.
David: Do you find the work of people like de Waal… Do you think that he’s right?
MC: Yes, I think there’s a lot of evidence for the roots of moral sentiments in animals.
David: Could you be a moral person if you were perfectly able to think about these deep, philosophical, moral… this school of moral thought or that school of moral thought, but you had no empathy? In other words, you were lacking these humble instincts. Would you be able to be a moral person, or would you be a monster?
MC: Well, so are you describing a psychopath?
David: I don't know. You put the label on it. I’m just describing this…
MC: Right, I think moral behaviour is motivated, and to have the motivation you need the sentiments.
David: You need to care about someone.
MC: You need to care. So a paper that was published a few years ago, the title was: Psychopaths Know Right From Wrong, And They Just Don’t Care. So…
David: And is that true?
MC: Well this area is still controversial, because it really depends on the way that you ask the question with these different kinds of moral dilemmas. Some studies have shown that psychopaths are indistinguishable from healthy people on certain kinds of moral dilemmas, but other studies have shown differences, so it really depends on the way that you ask these types of questions.
But it’s certainly the case, just anecdotally, that if you ask a psychopath, a serial killer, ‘Do you know that what you did was wrong?’ they’ll say, ‘Yeah. Yeah, I realise it was against the law.’ You know, they can apply moral principles to, sort of, understand, or not understand; they can tell you what you want to hear, but they don’t have that feeling.
David: It doesn’t mean anything to them.
MC: Right. Walter Sinnott-Armstrong – a well-known philosopher who works on psychopaths – has made this analogy, which I think is really great. You [to Ard] can talk about physics, and you’re a physicist. So, E=mc²: I know that E=mc² is Einstein’s famous equation – I can tell you that E=mc² – but I don’t actually understand why the c is squared, for example, whereas you have an understanding, based on your knowledge of physics, that is a much richer understanding of that equation. So on the surface I can say E=mc²: I know that is a truth, and you can say that at the same time, but we have a very different understanding of what that means.
Similarly, a psychopath can say, ‘Killing is wrong’, and I can say, ‘Killing is wrong’, but our understanding of that statement is very different.
Ard: That’s really interesting. I think if you say E=mc2 to me, I feel something because I know what it… It kind of has a depth to it. Whereas I think in areas that are outside my own field, I might know that something is true, but it doesn’t have the same feeling for me because I don’t… I just know that because people told me it’s true, and it’s probably true.
Ard: So psychopaths are basically people who understand the rules but don’t have the sentiments to make them follow the rules?
Ard: It’s interesting – there are three things: there are my sentiments that make me want to do certain things and not do other things; there’s my judgement that tells me how I should behave, in this or that way; and then there’s the question of whether I’ll actually follow them. Right?
Ard: So maybe my sentiments say, ‘Be nice to David’, and my thoughts are it’s a good thing to be nice to David. Maybe I just don’t care and I’ll just be mean! Probably a psychopath!
David: So, it’s not a matter of trying to move away from our moral sentiments, as if they were somehow lowly and somewhat dangerous, and just become purely rational about it? It sounds like you’re saying, ‘Look, we’ve got these things… we need to have a more thoughtful relationship between our instincts and our ideas.’
MC: Yeah, and I don’t think anyone is suggesting that we do away with moral sentiments, not in the least.
David: No, but we used to, though, didn’t we? There was that old image of, sort of, our base, fleshy, animal nature, which had its own drives and then there was the noble, rational mind on top, struggling to control it.
MC: Mm... yeah, but I think that’s safely put to bed by this point. I mean, I think you can see this argument progressing in discussions about artificial intelligence and how we should think about building super-intelligent agents that might, one day, have a lot of impact over the course of humanity.
One of the central issues in AI research right now is how we load moral values into an AI: it’s called the value-loading problem. How do we build an artificial intelligence that actually cares about us as human beings and cares about our welfare? So I think the fact that moral sentiments are so central in this endeavour to build artificial intelligences shows that the scientific community really is giving these sentiments a starring role.
Ard: That’s a very interesting point; that’s a nice way of thinking about it. So if I were to make a computer that was incredibly intelligent, so intelligent that it could understand all these different philosophies, including different moral philosophies, what you’re saying is, the worry is, that’s not going to be enough. We have to give that computer some kind of moral sentiments as well.
David: Otherwise it would be a clever psychopath.
Ard: Yeah, exactly, the computer would be a psychopath. That’s a kind of a scary thought, actually.
MC: It is.
Ard: So could you give us some examples of this kinder behaviour in chimpanzees that you've seen?
FdW: Well, with chimpanzees, we had, for example, an old female who died recently; she could barely walk anymore.
Ard: What was her name?
FdW: Her name was Penny, and she could barely walk anymore, and each time she would try to get up and get to the water spigot to get some water, younger females – but adult females – would run over to the water, suck up a lot of water, bring it to her and spit it in her mouth, so that she didn't need to, because her walking across the enclosure would be an enormous effort. Or they would push her up on the climbing frame if she tried to join a group of grooming chimps, to get her there.
And we've seen many of these cases. We've seen recently a case of a male who was dying of something on his stomach and others taking care of him and bringing him a wood roll that they would put behind his back so that he could lean back. And they help each other on occasions, but they only help, of course, the individuals that they like. It's just like humans. In humans we have all these moral imperatives – you should do this under these circumstances and that under those circumstances – but it really applies only, that kind of behaviour, to individuals that you are close to.
David: You sent me a paper a few years ago where you discussed some fascinating experiments which were done at the end of the 50s, the 60s. There was one in particular where the monkey had been taught to pull a lever to get food, but then the experiment was changed and when it pulled the lever, another monkey it didn't know got a shock. Would you tell us about that experiment, because I was so struck by it.
FdW: Yeah, so these are experiments on empathy or sympathy that were done in the 50s that we wouldn't do anymore. I certainly wouldn't do it, because it's a pretty horrible experiment, in the sense that you would have, let's say, one macaque who is sitting here, and he can pull a lever and get food – each time he pulls, he gets food. Then they're going to pair the lever with a shock to the partner, so as soon as you pull the lever, another monkey who is sitting there gets shocked. The monkey will then stop pulling. Some monkeys would stop pulling for five days. They would starve themselves for five days in order not to shock the other monkey. So it is an interesting idea of aversiveness to the distress of somebody else.
So, it used to be thought, twenty or thirty years ago, that empathy is a sort of decision: I decide to be empathic with you, or I decide to put myself in your shoes. That's not how it works at all. It's an automatic process, and a very biased one in addition.
David: Does that mean that, for you, you think that when we're talking about… well people will endlessly talk about morals, and where do morals come from, and are there moral solutions, that it's not so much to do with thinking about it, but that, in some sense, the foundation of a moral system is built into us in our emotions?
FdW: Yes, I think so. I do think that you need an interest in other people, and so I always consider empathy, sort of, the foundation of morality. It’s that if I'm not interested in others, and the well-being of others, then I cannot be a moral being. If you don't have any level of generosity and interest in others, then you would never be a moral being: you would be a psychopath basically. And, actually, interestingly enough, that whole literature on selfish genes and how we humans are overly competitive and just like the rest of the animal kingdom, that was all a literature about psychopaths, I think. It was basically describing the human species as a psychopath: all we can think about is what is good for me, and very reluctantly thinking about what is good for you.
So that view of the human was popular in the 70s and 80s, I mean, after all, there was this biologist, I think, Ghiselin, who said, ‘Scratch an altruist and watch a hypocrite bleed.’ That was a very popular saying, and basically describes a psychopath. And everyone was happy with that at the time.
Ard: And the idea would be that if somebody shows behaviour that looks like something good, fundamentally they're doing it for selfish reasons.
FdW: Yeah, of course.
Ard: And you scratch them and they're really hypocrites?
FdW: Yes, there cannot be genuine altruism, there cannot be genuine kindness because there's always a selfish agenda behind it.
Ard: And you disagree? You think that there can be genuine altruism?
FdW: Oh, of course, yeah, yeah, I absolutely think that.
Ard: So, Molly, are there bad moral sentiments, or moral sentiments that lead us astray?
MC: We can think about a set of moral sentiments that are to do with retribution and punishment. So when we think someone has harmed us, or harmed someone we care about, or they’ve violated a social rule, they’ve been unfair, they’ve desecrated something, people can get very angry. And this anger, this sort of retribution, can motivate a lot of very harmful behaviour. I think a lot of the intractable religious conflicts of today are reflections of this.
MC: People have harmful moral sentiments in situations where they think that they’ve been done wrong, and this can fuel very destructive cycles of violence.
Ard: And do you think that’s linked to a very deep moral instinct that we have?
MC: Yes, I think so, and I think it’s very unproductive. The research shows that people’s motivation to punish is largely driven by a desire to harm. Even though if you ask people after the fact, ‘Why are you punishing?’ they’ll say things like, ‘Well, we want to prevent this from happening in the future.’
But we’ve done experiments, actually looking to see whether in the lab people are motivated when they punish more by retribution or more by deterrence. The way that we’ve done this is we’ve set up a situation where people are able to punish, and there’s no possible way that the punishment can deter a crime in the future.
So, we set up two different situations. In one case, Ard, you can punish David. It takes away money from him and David learns that you’ve done this, so he might be less likely to do that in the future because you’ve deterred his bad behaviour. But we’ve also given people the opportunity to punish in secret. So you can take money away from David. David doesn’t know that this has happened. So there’s no way that your punishment could deter him from behaving unfairly towards you in the future.
The question is, do people use punishment when there’s no deterrence possible? And the answer is a resounding yes. People punish almost as much when they’re just taking money away, but not sending the message that you’ve done something wrong, as they will when they are able to send this message that teaches a lesson. So what that shows is that a lot of punishment behaviour is really motivated by this dark motivation to harm. And it’s not concerned with the future. It’s not concerned with teaching a lesson and making things better off for everybody else by deterring this bad behaviour.
Ard: And how about something like racism? Because racism is all pervasive in the world. Is that because of a moral sentiment that we have that inclines us towards that?
MC: Josh Greene, in his recent book, argues that this is, sort of, the other side of the coin of psychology that really evolved to help us solve cooperation problems. So, there’s the Me Versus Us problem, and moral sentiments, he thinks, evolved to help us solve this tragedy of the commons and to help us sacrifice our own personal interests for the sake of the group.
But then that also leads to this conflict between us and them, because the same sentiments that seem to motivate us to help our kin are those that make us suspicious of people in the other group.
I think the overall lesson is that these sentiments evolved for certain purposes that, in the complexity of today’s world, we have to be very careful with how they’re deployed, because what can produce a very beneficial behaviour in one context can actually produce a very harmful behaviour in another context.
David: Can I ask you about this wonderful phrase, ‘nice nihilism’. What do you mean by ‘nice nihilism’? What is it?
AR: Nihilism is the thesis which you forced me to admit to embracing: that there are no fundamental moral values. But the nihilism which I embrace, or endorse, is nice in the sense that it explains how we human beings could have evolved to be cooperative, empathetic and altruistic creatures.
In fact, we would never have survived having been thrown out of the rainforest onto the African savanna, at the bottom of the food chain. We never would have survived, let alone moved ourselves right up to the top of the food chain in a matter of only a couple of hundred thousand years, if we hadn’t had certain features: the tendency to cooperate with one another that’s required by the division of labour and by the coordination of the activities of people with one another.
And those behavioural dispositions are so deeply written into our evolutionary history that now we are all, except for the small number of psychopaths among us, pretty nice people, easy to get along with. We can be trusted – even us nihilists can be trusted with the family silverware – to take care of the children when the other adults are gone, and not to cut moral corners, because we are driven, like everybody else, by the same emotions of guilt and shame and anger and fear, which make us all pretty decent, moral people.
Ard: So, do you think…?
AR: That’s the nice part.
Ard: That’s the nice part. So do you think cooperation is part of the evolutionary story?
AR: Absolutely. It had to have been.
Ard: We interviewed Martin Nowak who was very excited about cooperation coming out of game theory. Do you have any thoughts on those things?
AR: I think that one of the great advances in evolutionary anthropology and experimental economics that has enabled us to understand human origins is what we now know about various kinds of cooperative and zero-sum games. Inevitably it turns out that people’s behaviours are not narrowly short term, economically self-interested, rational: they’re always cooperative.
We are playing games, and game theory is probably the worst name for the most important theory in social science. We’re playing games. We’re engaged in strategic interaction all the time, and the optimal ones are the ones that produce niceness in us.
Ard: What you also say is that one of the ways it does this is that we feel shame and guilt, which prevent us from behaving in ways that people might call immoral.
Ard: But don’t you worry that once you’ve explained this away, that people will think…?
AR: No, no. So the difference between shame and guilt. We know the difference between shame and guilt?
AR: Shame is when you’re caught, and guilt is when you’re not caught.
AR: And these are so hard-wired into our psychological make-up that even knowing that they are evolutionary adaptations has no tendency to weaken their hold on us. Learning that pulling your hand away from a fire because of the pain is an evolutionary adaptation isn’t the slightest reason to stop doing it.
Ard: It’s easy to think, well, we’re all nice and we’re cooperating, but when you’re in a difficult situation, where life is a lot harder and the payoff that you get by cheating is a lot bigger…
AR: Of course there will be circumstances, environments, in which today’s adaptations become tomorrow’s mal-adaptations. Are we likely to face such circumstances? We certainly have in the past. I think that the explanation, the adaptational explanation for these moral norms, helps us unravel them and reduce their grip on us, given the actual environment that we’re in, which is so different from the environment in which they evolved.
Ard: But it…
AR: And, of course, I’ll have to be honest in answer to your question. If the environment changes over the long haul in such a way as to make cooperative, altruistic, empathetic motivated behaviour mal-adaptive, it’s going to disappear, of course.
Ard: Could you say that something like our empathy instinct, if you want to call it that, is like a moral compass pointing us in that direction?
FdW: Yes, if that's how you want to phrase it, yes. We have a built-in sense of fairness which relates to our cooperative tendencies. We have a sense of reciprocity which is very important in a chimpanzee society: doing each other favours and obligations and so on. We have a sense of empathy, and we're in tune with the situation and the feelings of others. And, yes, all those natural tendencies that we have, they steer us in a particular direction in our social relationships, especially social relationships with the in-group.
Now with the out-group the story is sometimes quite different, and that's why we have a lot of trouble applying our moral principles outside of our group. And at the moment, of course, in the world we're trying to do that. We talk about universal human rights, which is sort of stretching… We're trying to stretch the system basically.
Ard: So, thinking about the parable of the Good Samaritan, in some sense that's a story about care for someone who's in the out-group. Is that pushing us beyond our moral compass?
FdW: Yes, I think that is typically human, and that's where the top-down processes come in. So I think we have a lot of bottom-up morality, which basically comes from our primate social tendencies – and that's a view that Darwin also had. But then what we humans do – and I don't think my chimpanzees do that in any way – is we try to translate that into justifications and principles, and we come up with a narrative – and actually the Good Samaritan is a narrative – we come up with a narrative that justifies our behaviour. And then we are capable of applying those principles outside the usual box, which is the in-group where it really evolved. I think morality evolved for the in-group, not for the out-group. We could hack off their heads; it was fine.
But then with our mental capacities, we say, ‘Well, why is that fine? Maybe it's not fine. Let's question it.’ And, for example, the Geneva Convention, which tells us how to treat our enemies, is such an innovation, and I don't think chimpanzees would ever come up with a Geneva Convention. ‘The enemies?’ ‘Get rid of them. That's the main thing. Enemies are not there to be treated well.’
So, that's a human thing, I think, and that's a top-down process where we then use these justifications and narratives that we have and say can we apply them outside of the group, yes or no? It's a sort of intellectual experiment. And, of course, if the in-group was not doing well – let's say we're all starving and we have terrible circumstances – we might care less about the out-group. And so it is dependent on the circumstances. But nowadays we live in societies which are wealthy enough that we can start thinking about these issues.
Ard: So in some sense these are things that transcend our kind of moral instincts…
FdW: They're built on them, because we still use that bottom-up morality to arrive at the principles and justifications, but then we take it one step further, which is an intellectual step – really more like a cognitive step – and that's maybe why I don't see any of that in my primates.
David: It’s a pleasure to meet you, finally.
GP: I’m just humbled that you considered me to be a part of what I think is a wonderful discussion. We need more of it.
David: Now, you’ve spent your life battling against ideas that you thought were unfair.
GP: Well, you have to remember, first of all, I come from a very race-conscious family, from what you would call black middle class. My granddaddy, in particular, had his own construction company, and my father, as a youngster, teenager, would accompany my granddaddy Patton to the bank ‒ you know, to get money for construction projects. And my father was constantly humiliated by the bank. We’re talking about, now, the Forties… the late Thirties and the Forties.
David: How was he humiliated?
GP: First of all, lots of times they had to stand up to talk to the bank president. They were not allowed to sit down. My grandfather had cultivated – which my father later learned – a survival mechanism of ‘Yes Sir, no Sir.’ And questions would be: ‘Nigger, are you just building stuff in the black community?’ Which meant there was a limitation. And my grandfather would reply, ‘Yes, Sir’, and my father was just humiliated by that. You know? That kind of interaction.
And my father finished a year of college at Alabama State, the Blacks’ college, and he would always tell my brother and me: ‘If a white man hit me in Detroit, I could hit back. If a white man hit me in Montgomery, I just had to bow and scrape’ ‒ because it was always a life-and-death situation.
David: Was it really that bad here? Even in the Forties and Fifties?
GP: It was really that bad. My uncle, whom I never saw – he was just part of my folklore – was shot and killed by a policeman in September 1943, because you could not be in certain neighbourhoods.
David: When did those ideas which your generation… the ideas of…
Ard: Freedom and equality.
David: When did those ideas…?
GP: Germinate in me?
GP: I’m eight years old ‒ it’s 1952 ‒ I will never forget it, and I’m down here visiting. And across the street from my Mommy’s home – if we can keep my lineage together – there’s a bus stop, and every Sunday after Sunday school and church, the treat for my brother, my first cousin Al and me was to ride the entire bus line to the end of the route and come back. And Mommy always told us to sit on the back long seat and look out the big back window and see the world going backwards. That was such a wonder. I never knew we couldn’t sit in the front of the bus.
So one Sunday – being the elder of my brother and my cousin – they wanted an ice-cream cone, and I decided we would stop downtown at the Court Square and go to a drugstore called Liggett’s drugstore, which was part of the Rexall chain, to get ice-cream cones. And I drink a lot of water, always have, and I wanted a cup of water, and I paid three cents – at that time we had paper cone cups ‒ and I just sat on a counter stool to drink my water. And the soda jerk, which is not a pun, called me a ‘piccaninny’ and told me to get up. Now, I’d never heard the word ‘piccaninny’, but I knew it was an insult. And she turned red as a beet. And I poured my cup of water on the counter.
David: And you were eight years old?
Ard: Eight years old?
GP: I’m eight years old, and we stamp out of there, come home, and Mommy would always be at the bus stop to meet us. And I told Mommy about it, and Mommy hugged me and said, ‘No, you’re beautiful,’ reaffirmed me, and told me that when God made me, he didn’t make none other like me. I’m sure a whole lot of folks are happy that happened.
But then I had to ask her why. Why would she say this to me? I’m a little girl. So Mommy then had to tell me we couldn’t ride in the front of the bus, and all of what the mores were. And I wanted to know why, so that began to germinate in me.
Meanwhile, I’m getting older. My grandmother, Mommy, canvassed the neighbourhood to get people to go down to attempt to register to vote. So I grew up with that. And I was an obedient child, but also very curious and wanting answers.
David: Do you think there was… Do you think ideas have a power to change people?
GP: Oh, yes, oh, yes ‒ ideas are powerful. That’s why the white class, the powers that be, didn’t want us to get educated. You know? Don’t you educate them little negroes. And you have to understand, I learned much, much later about the midnight schools during antebellum slavery days. But our parents instilled in us to get an education.
GP: To be smart.
David: I just briefly wanted to go back to that, that experiment of the monkey that starved itself. That seems to me an extraordinary thing for a monkey to do. I mean, did it not surprise them when they did that in the 50s?
FdW: I did that together with Sarah Brosnan, and we were doing experiments on economics in monkeys. Sort of like, how much food do you need for a task? How willing are you to work with a partner? How willing are you to share with a partner who helped you or did not help you? … and all of these things. And in that context, by accident, we discovered that the monkeys were very sensitive to what the partner was getting compared to what they were getting. It doesn't make any sense, because if you read the literature on rats pressing levers and things – all that animal learning theory – there's never any talk about this rat thinking about what the other rat is getting.
So we didn't know what to do with that, and we started to test it out systematically. And so we developed a very simple task. We would give a monkey a pebble in his cage; we would hold up a hand and he would have to give it back to us, and as soon as he gave it back, he got a reward – a very simple task.
Now, if you do that with one monkey and you use little pieces of cucumber, you can do that thirty times in a row, and he will eat a lot of cucumber. If you put a partner next to him and you give that partner cucumber, they will both do it thirty times in a row and they're perfectly fine. But if you give the partner grapes, and grapes are ten times better than cucumber, then the one who gets the cucumber is still going to refuse. And not only is he going to refuse, he's going to get agitated: he's going to shake the cage and throw the cucumber out. He becomes very upset by the whole situation, which is irrational, because the cucumber was good before – why is it not good anymore?
We called it inequity aversion, but the media immediately talked about fairness and saying the monkeys have a sense of fairness. Then a philosopher wrote to us, and said it’s impossible for monkeys to have a sense of fairness, because fairness was discovered during the French Revolution! Basically, that tells us morality comes from a bunch of old guys in Paris who sit around and say, ‘Well fairness would be a good idea, you know.’
David: And everyone else in the world went, ‘Oh my God, what a good idea.’
FdW: Yeah, ‘let’s spread the word on fairness; it's a good thing, you know.’ And so that's how they think we arrive, through reasoning and logic, at a point where we say fairness is a good thing and we implement it in society. But it is completely the other way around. Young children, two-year-old children, already have a sense of fairness, just like my monkeys do. And so it's an emotional process: you compare what you get with what somebody else gets.
People have done it with dogs; they've done it with crows now, because inequity aversion in animals is a little up-and-coming field. And we basically find it in many cooperative species. Our prediction is that in uncooperative species – solitary animals, like, let's say, the domestic cat, which is a bit of a solitary animal – you will not find it as much. So it is related, we think, to animals needing cooperation partners with whom they need to share things, and if you're not willing to share, you're not a good partner.
David: In that experiment, did the monkey who was getting the cucumber sometimes just look at the one with the grapes and say, ‘Hey’?
FdW: No, chimpanzees will do that.
David: They do? They'll say, ‘Give me the grapes, come on hand them over’?
FdW: The monkeys have a sense of fairness that is at the level of a two- or three-year-old child, who also doesn't look for hand-outs and things like that. But the chimpanzees, yes. In chimpanzees you may have a situation where the one who gets the grape will refuse the grape until the other one also gets a grape. People have said, why would you call it a sense of fairness? You could call it resentment: I resent that you get more than me.
Ard: Yeah, sure.
FdW: But then you still wonder: why do I resent that you get more than me, and under what kind of circumstances? And certainly, when chimpanzees go further than that and share with the one who gets less – because the monkeys don't do that, but the chimps do – they have an understanding that that's probably also necessary for cooperative relationships. And so it's much more than just resentment. I think it is a sense of fairness, and a sense that they understand that if things are not equally divided, then cooperation is going to fall apart. So there's a self-interested component to it. That is, for my cooperative relationships I need this kind of sense of fairness, and that's what we're seeing for humans also. Humans don't have a sense of fairness for no reason at all: they have a sense of fairness because our cooperative societies rely on it.