Frans de Waal

Primatologist

'I always consider empathy the foundation of morality. If I'm not interested in others, and the well-being of others, then I cannot be a moral being.'


What motivates moral behaviour?

‘Motives usually don't come from reason. Reason comes later, I think. If we had to reason ourselves to it every time, it would be a pretty cumbersome system.’

Transcript

Ard: When we behave in a moral way, is that because we reason ourselves towards that or is it because it's something that's just instinctive inside of us?

FdW: If we had to reason ourselves to it every time, it would be a pretty cumbersome system, no? Each time I have a choice between being kind or not kind, I would have to go through all the reasoning why. That would be a terrible system. So I think there's a lot of intuitive and impulsive behaviour, and some people end up on the moral side and some people won't, and I think that the justification of our behaviour definitely comes afterwards. At that point we're going to use all sorts of reasons and rationales, and I think philosophers have gotten it a little bit backward, because they have focused on the justification part as if that's the motivation part, which of course it isn't.

Ard: So what is the motivation?

FdW: Well, there's lots of pro-social motivations that we have and that we share with other mammals and with other animals.

David: What do you mean by pro-social?

FdW: Pro-social? I would mean it's a bit more than altruistic: pro-social is sort of a motivational system. Altruistic in biology is often used in a very functional sense: I do something costly to myself that benefits you, regardless of my motivation. So a bee who stings you, probably out of an aggressive motivation, is defending the hive. We call that altruistic because the bee loses its life, is giving its life for the hive. But we don't necessarily think that the bee has a pro-social motivation at that point. So pro-social usually refers more to the motivation part: why do I do these things intentionally? And we use that term now in the animal literature as well.

David: So you think the motives, it's not to do with rationality, it's to do with this pro-social idea?

FdW: Yes, motives usually don't come from reason: reason comes later, I think. And so, yes, sometimes we sit down and take a decision, like you need to decide am I going to help my grandmother – yes or no – today? And so you may try to come to a rational decision given all the other circumstances, but most of the time I don't think we go through all these reasons, and we have just a certain motivation to do this or to do that.

Ard: Sometimes people think that the kind of instincts that we have are dangerous ones, where nature is 'red in tooth and claw' – we're trying to beat up our enemies and win – and so we have to subjugate those instincts.

FdW: Yeah, that's a view of nature that I don't hold necessarily.

Ard: What would that view be?

FdW: Well, to use nature only for the negative side of human nature: so when we're killing each other, we say, ‘We're acting like animals.’ And so all the nasty things that we do and the selfish things – I've called that ‘veneer theory’. It's the idea that all the basic emotions of humans are bad, and then there's a little veneer of morality that we achieve, culturally or religiously or however we achieve it. And so morality is just a little veneer over the bad human nature that we have.

I don't buy into that at all. I think humans have all these tendencies: we have good tendencies and bad ones, and they're all connected to our human nature and our primate nature. And you can recognise all of that in the chimpanzee as well. The chimpanzee can be very nasty and they can kill each other, and people have got obsessed by the killing that they do and said, ‘Well, chimpanzees are nasty animals.’ And so then when you say chimpanzees also have empathy and they care for each other, they're very surprised, because that's not consistent with what they think a chimpanzee is. But just like humans can kill each other and be very nasty, humans can also be extremely altruistic and kind to each other, and so we have that whole spectrum and many, many mammals have that whole spectrum.

Ard: So this veneer theory, as you've called it – which is kind of like a very thin layer of morality over this terribly dangerous animal nature – where do you think that came from historically?

FdW: Yeah, that's a very dangerous idea, because it basically says that deep down we are bad, and with a lot of struggle we can be good, but as soon as something happens it disappears. It's a very pessimistic view of human nature. Huxley had that view. Thomas Henry Huxley, a contemporary of Darwin and the big defender of Darwin, didn't really believe in human nature being any good. Darwin was much more a believer in that, and Darwin even talked about sympathy in animals, and he didn't look at humans as automatons, the way Huxley did. So Huxley had this view that goodness cannot come from evolution – it's impossible – and Darwin never said that: he disagreed with him on that.

Ard: So in fact what you’re saying is the idea that underneath we're just animals and therefore selfish or bad is not a Darwinian idea?

FdW: No, it’s not. Darwin himself didn't think like that, and he also said, literally sometimes, that selfishness is really not what explains the behaviour of certain social animals. He felt they had a social instinct and morality was grounded in that social instinct – very similar to the views that I have, even though I have more precision because I'm talking about specific behaviours of animals. Darwin, at an intuitive level, had that insight also.

Scientists and morality

‘I would never trust scientists to tell me what is moral or immoral because they're a bit like philosophers: they're sort of narrow in how they look at things.’

Transcript

David: If you say that the natural part of us is going to be aggressive and selfish and bad, in some way, then you're either left… you've got to say, well, where does the good part come from? And it seems to me in religion we say, well it comes from God, and then if you're not religious, you say, well it comes from rationality. So it seems to me that rationality steps in for the atheist where God used to be.

FdW: That’s what happened during the Renaissance: the philosophers did that. They said, well, religion – let's move that to the side, and we philosophers will propose rationality as an explanation of human morality. More recently there have been proposals, from Sam Harris and people like that, that science is going to solve the moral issue: science is going to tell us what is moral and immoral.

Ard: And what do you think about those kinds of proposals, because they sound attractive… science…

FdW: I would never trust scientists to tell me what is moral or immoral, because they're a bit like philosophers: they're sort of narrow in how they look at things. And if you look narrowly enough – take, for example, the utilitarian view, which is very popular amongst philosophers, that you do the greatest good for the greatest number of people – if you follow that rule, I could give a very good scientific explanation why slavery would be beneficial: slavery is actually, rationally, a very good system. What's wrong with slavery? We could have that argument and I might win it. You know, I might say slavery is good, even though we now recognise that…

David: Well, the utilitarian would say, look, if we have to enslave a few people to benefit a larger number of people, then that's the greatest good for the greatest number of people, which is in fact the argument that was made.

 

The foundation of morality

‘I always consider empathy the foundation of morality. If I'm not interested in others, and the well-being of others, then I cannot be a moral being really.’

Transcript

Ard: So could you give us some examples of this kinder behaviour in chimpanzees that you've seen?

FdW: Well chimpanzees, we had for example an old female who died recently, but she could barely walk anymore.

Ard: What was her name?

FdW: Her name was Peony, and she could barely walk anymore. Each time she would try to get up and get to the water spigot to get some water, younger females – but adult females – would run over to the water, suck up a lot of water, bring it to her and spit it in her mouth, so that she didn't need to, because her walking across the enclosure would be an enormous effort. Or they would push her up on the climbing frame if she tried to join a group of grooming chimps, to get her there.

And we've seen many of these cases. We've seen recently a case of a male who was dying of something in his stomach, and others taking care of him and bringing him wood wool that they would put behind his back so that he could lean back. And they help each other on occasions, but they only help, of course, the individuals that they like. It's just like humans. In humans we have all these moral imperatives – you should do this under these circumstances and that under those circumstances – but that kind of behaviour really applies only to individuals that you are close to.

David: You sent me a paper a few years ago where you discussed some fascinating experiments which were done at the end of the 50s, the 60s. There was one in particular where the monkey had been taught to pull a lever to get food, but then the experiment was changed and when it pulled the lever, another monkey it didn't know got a shock. Would you tell us about that experiment, because I was so struck by it.

FdW: Yeah, so these are experiments on empathy or sympathy that were done in the 50s, that we wouldn't do anymore. I certainly wouldn't do it, because it's a pretty horrible experiment, in the sense that you would have, let's say, one macaque who is sitting here, and he can pull a lever and get food – each time he pulls, he gets food. Then they're going to pair the lever with a shock to the partner, so as soon as you pull the lever, another monkey who is sitting there gets shocked. The monkey will then stop pulling. Some monkeys would stop pulling for five days. They would starve themselves for five days in order not to shock the other monkey. So it is an interesting demonstration of aversiveness to the distress of somebody else.

So, it used to be thought, twenty years ago, thirty years ago, that empathy is a sort of decision: I decide to be empathic with you, or I decide to put myself in your shoes. That's not how it works at all. It's an automated process that is a very biased process in addition.

David: Does that mean that, for you, you think that when we're talking about… well people will endlessly talk about morals, and where do morals come from, and are there moral solutions, that it's not so much to do with thinking about it, but that, in some sense, the foundation of a moral system is built into us in our emotions?

FdW: Yes, I think so. I do think that you need an interest in other people, and so I always consider empathy, sort of, the foundation of morality. It’s that if I'm not interested in others, and the well-being of others, then I cannot be a moral being. If you don't have any level of generosity and interest in others, then you would never be a moral being: you would be a psychopath basically. And, actually, interestingly enough, that whole literature on selfish genes and how we humans are overly competitive and just like the rest of the animal kingdom, that was all a literature about psychopaths, I think. It was basically describing the human species as a psychopath: all we can think about is what is good for me, and very reluctantly thinking about what is good for you.

So that view of the human was popular in the 70s and 80s, I mean, after all, there was this biologist, I think, Ghiselin, who said, ‘Scratch an altruist and watch a hypocrite bleed.’ That was a very popular saying, and basically describes a psychopath. And everyone was happy with that at the time.

Ard: And the idea would be that if somebody shows behaviour that looks like something good, fundamentally they're doing it for selfish reasons.

FdW: Yeah, of course.

Ard: And you scratch them and they're really hypocrites?

FdW: Yes, there cannot be genuine altruism, there cannot be genuine kindness because there's always a selfish agenda behind it.

Ard: And you disagree? You think that there can be genuine altruism?

FdW: Oh, of course, yeah, yeah, I absolutely think that.

 

Cooperation throughout nature

‘Most animals cannot survive without cooperation. If competition is stronger as an instinct than cooperation, it would all fall apart.’

Transcript

Ard: So when you first made these discoveries, what did the scientific community… how did they react to this?

FdW: Well, that's an interesting historical question because when I was a student all we could talk about was violence and aggression, and in that context I discovered, by accident, it was not necessarily my intention, but I discovered that chimpanzees reconcile after their fights. So they may have a big fight, two males for example, and then ten minutes later one of them approaches the other and they kiss and embrace, and then they groom each other for an hour, and I thought this was awfully interesting. I found that more interesting than the fight itself. And so when I first presented that at meetings, scientific meetings, first of all people didn't know what to do with it, because they'd never heard of something like this, and second they would say, ‘Well, maybe chimps do that, but my animals certainly don't do it.’

So they wanted to make an exception for chimps, maybe, but they really didn't buy into it as an important problem. Now we know that lots of social animals reconcile after fights. It's actually very common behaviour, and it’s related to the fact that they have competition, but they also depend on each other, so it's just as in humans.

Ard: So they have cooperation?

FdW: I would say most animals cannot survive without cooperation. So cooperation occurs between single-cell organisms, between insects, all the mammals – well, there's some solitary mammals, but the majority live in either troops or groups or herds. So the level of cooperation, it's all sorts of different levels, but it's everywhere.

Ard: So it's like an instinct? Would you say cooperation is an instinct as much as competition is?

FdW: Yes, of course.

David: Why do you think we haven't seen it before?

FdW: I think after World War II there was this obsession with competition and aggression and selfishness that lasted until the 1980s or so. So our genes were selfish, we were selfish, and cooperation was a special case that we needed to explain, and we had a lot of trouble explaining it.

But now I think it's very well recognised that not just humans, but all sorts of animals, are cooperative, and there are all these tests of cooperation. So you can set up a test with chimpanzees, as we did… This was actually developed a hundred years ago here at Yerkes Primate Centre by Robert Yerkes and his people.
So they would set up a box that is too heavy for one chimp to pull in, and they would put food on the box. So you put two chimps who have ropes – so they will have to cooperate – and they will have to synchronise and pull it in at the same time, and then they can share the food afterwards.

And we recently set it up in our group of chimps where, instead of just having two chimps, we used fifteen chimps. So now you have the potential of competition. So you have a box with food that they can pull into, two chimps or three chimps at a time. But everyone is present, and so the highest ranking members of the group, they can kick you away or they can steal your food, or someone can sit next to you and try to steal your food, free-loading basically, the free-loading problem.

And so we wanted to see how the chimps handle this situation where they have the options between competition and cooperation. If competition is stronger as an instinct, so to speak, then cooperation would all fall apart and there would be bickering and fighting. But actually we had 3,500 cooperative pulls in our test. So basically they were cooperating all the time, and they handled the competition very well. They were very good at either alternating positions or getting rid of free-loaders, or not working with someone who's too competitive so that he learns to become cooperative. And all these things were happening at that time.

David: That's terribly sophisticated behaviour.

FdW: Yeah, but they do that in the field also. We were not particularly surprised because in the field we know the chimpanzees hunt together for monkeys, for example, and they share the meat afterwards. So it's not like a totally new thing, but now we had sort of formalised it and shown that that's what's possible.

I wrote a book, Chimpanzee Politics, which is all about how chimpanzees compete for power, but it is cooperative, otherwise we wouldn't call it politics – then it would be just a pecking order like among chickens, and the biggest, meanest chicken is top. But that's not how chimpanzees operate.

In chimpanzees it’s very well possible for the smallest male to be the alpha-male. Now how is that possible? That's only because that male is maybe very diplomatic and grooms particular partners and gives them particular favours after he has reached the top. And so you don't need to be a big, heavy, strong male to be the alpha-male, and that is because the system is basically a cooperative system where you pick out your partners and your partners support you, as long as you’re useful for them, because why would they support you if you withhold all the benefits from them and so on?

Who invented fairness?

‘A philosopher said it’s impossible for monkeys to have a sense of fairness, because fairness was discovered during the French Revolution.’

Transcript

David: I just briefly wanted to go back to that, that experiment of the monkey that starved itself. That seems to me an extraordinary thing for a monkey to do. I mean, did it not surprise them when they did that in the 50s?

FdW: I did that together with Sarah Brosnan, and we were doing experiments on economics in monkeys. Sort of like, how much food do you need for a task? How willing are you to work with a partner? How willing are you to share with a partner who helped you or did not help you? … and all of these things. And in that context, by accident, we discovered that the monkeys were very sensitive to what the partner was getting compared to what they were getting. It doesn't make any sense, because if you read the literature on rats pressing levers and things – all that animal learning theory – there's never any talk about this rat thinking about what the other rat is getting.
So we didn't know what to do with that, and we started to test it out systematically. And so we developed a very simple task. We would give a monkey a pebble in his cage; we would hold up a hand and he would have to give it back to us, and as soon as he gave it back, he got a reward – a very simple task.

Now, if you do that with one monkey and you use little pieces of cucumber, you can do that thirty times in a row, and he will eat a lot of cucumber. If you put a partner next to him and you give that partner cucumber, they will both do it thirty times in a row and they're perfectly fine. But if you give the partner grapes – and grapes are ten times better than cucumber – then the one who gets the cucumber is still going to refuse. And not only is he going to refuse, he's going to get agitated: he's going to shake the cage and he's going to throw the cucumber out. He becomes very upset by the whole situation, which is irrational, because the cucumber was good before – why is it not good anymore?

We called it inequity aversion, but the media immediately talked about fairness and saying the monkeys have a sense of fairness. Then a philosopher wrote to us, and said it’s impossible for monkeys to have a sense of fairness, because fairness was discovered during the French Revolution! Basically, that tells us morality comes from a bunch of old guys in Paris who sit around and say, ‘Well fairness would be a good idea, you know.’

David: And everyone else in the world went, ‘Oh my God, what a good idea.’

FdW: Yeah, ‘let’s spread the word on fairness; it's a good thing, you know.’ And so that's how they think we arrive, through reasoning and logic, at a point where we say fairness is a good thing and we implement it in society. But it is completely the other way around. Young children, two-year-old children, already have a sense of fairness, just like my monkeys do. And so it's an emotional process: you compare what you get with what somebody else gets.
People have done it with dogs; they've done it with crows, now, because inequity aversion in animals is a little up-and-coming field. We basically find this now in many cooperative species. Our prediction is that in uncooperative species, like solitary animals – let's say the domestic cat, which is a bit of a solitary animal – you will not find it as much. So it is related, we think, to animals needing cooperation partners with whom they need to share things, and if you're not willing to share, you're not a good partner.

David: In that experiment, did the monkey who was getting the cucumber sometimes just look at the one with the grapes and say, ‘Hey’?

FdW: No, chimpanzees will do that.

David: They do? They'll say, ‘Give me the grapes, come on hand them over’?

FdW: The monkeys have a sense of fairness that is at the level of a two- or three-year-old child, who also doesn't look for hand-outs and things like that. But the chimpanzees, yes. In chimpanzees you may have a situation where the one who gets the grape will refuse the grape until the other one also gets a grape. People have said, why would you call it a sense of fairness? You could call it resentment: I resent that you get more than me.

Ard: Yeah, sure.

FdW: But then you still wonder, why do I resent that you get more than me, and under what kind of circumstances? And certainly, when chimpanzees go further than that and share with the one who gets less – because the monkeys don't do that, but the chimps do – they have an understanding that that's also probably necessary for cooperative relationships. And so it's much more than just resentment. I think it is a sense of fairness, and a sense that they understand that if things are not equally divided, then cooperation is going to fall apart. So there's a self-interested component to it: for my cooperative relationships I need this kind of sense of fairness. And that's what we're seeing in humans also. Humans don't have a sense of fairness for no reason at all: they have a sense of fairness because our cooperative societies rely on it.

 

In-groups and out-groups

‘I think morality evolved for the in-group, not for the out-group. We could hack off their heads; it was fine.’

Transcript

Ard: Could you say that something like our empathy instinct, if you want to call it that, is like a moral compass pointing us in that direction?

FdW: Yes, if that's how you want to phrase it, yes. We have a built-in sense of fairness which relates to our cooperative tendencies. We have a sense of reciprocity which is very important in a chimpanzee society: doing each other favours and obligations and so on. We have a sense of empathy, and we're in tune with the situation and the feelings of others. And, yes, all those natural tendencies that we have, they steer us in a particular direction in our social relationships, especially social relationships with the in-group.

Now with the out-group the story is sometimes quite different, and that's why we have a lot of trouble applying our moral principles outside of our group. And at the moment, of course, in the world we're trying to do that. We talk about universal human rights, which is sort of stretching… We're trying to stretch the system basically.


Ard: So, thinking about the parable of the Good Samaritan, in some sense that's a story about care for someone who's in the out-group. Is that pushing us beyond our moral compass?

FdW: Yes, I think that is typically human, and that's where the top-down processes come in. So I think we have a lot of bottom-up morality, which basically comes from our primate social tendencies – and that's a view that Darwin also had. But then what we humans do – and I don't think my chimpanzees do that in any way – is we try to translate that into justifications and principles, and we come up with a narrative – and actually the Good Samaritan is a narrative – we come up with a narrative that justifies our behaviour. And then we are capable of applying those principles outside the usual box, which is the in-group, where morality really evolved. I think morality evolved for the in-group, not for the out-group. We could hack off their heads; it was fine.

But then with our mental capacities, we say, ‘Well, why is that fine? Maybe it's not fine. Let's question it.’ And, for example, the Geneva Convention, which tells us how to treat our enemies, is such an innovation, and I don't think chimpanzees would ever come up with a Geneva Convention. ‘The enemies?’ ‘Get rid of them. That's the main thing. Enemies are not there to be treated well.’

So, that's a human thing, I think, and that's a top-down process where we then use these justifications and narratives that we have and say can we apply them outside of the group, yes or no? It's a sort of intellectual experiment. And, of course, if the in-group was not doing well – let's say we're all starving and we have terrible circumstances – we might care less about the out-group. And so it is dependent on the circumstances. But nowadays we live in societies which are wealthy enough that we can start thinking about these issues.

Ard: So in some sense these are things that transcend our kind of moral instincts…

FdW: They're built on them, because we still use that bottom-up morality to arrive at the principles and justifications, but then we take it one step further, which is an intellectual step, really it's more like a cognitive step, and that's maybe why I don't see any of that in my primates.

Debating moral truths

‘I’m an expert on primate behaviour, and this level of discourse, where we look at whether there are absolute moral truths… I don't know what to do with that.’

Transcript

Ard: And so do you think that these moral ideas, like taking care of those in the out-group, are real truths that are out there, that we discover, or are they just something that we've kind of made up ourselves?

FdW: No, I think we have arrived at them by intellectual means. They are more fragile, I think, and that's why I'm saying that as long as your group is doing well, you can do that, but if your group is not doing so well, I think they're fragile because we always, first of all, care about our group.

Ard: But do you think that they're actually true, just like one plus one equals two?

FdW: I don't know about that.

Ard: You don't?

FdW: I don't know if there's absolute truths in the world.

Ard: Okay.

David: But does it matter for moral behaviour whether there are absolute moral truths?

FdW: No, I don't think so. Human behaviour is based on emotions and intuitions and social relationships and social strategies, and then the rationalisation is something that comes afterwards. So we're very good at rationalising afterwards. We're very good at justifying behaviour afterwards. But to make that the basis of human behaviour, or the basis of human morality, is a fundamental mistake, I think.

David: Yeah.

Ard: So what would the basis be then?

FdW: The basis is our evolved tendencies to be social, which include caring for each other, but also it includes caring for ourselves, of course.

Ard: Yeah, but those are tendencies that have evolved to care for our in-groups, right? Whereas often the difficult thing is caring for those that are outside of our group, so…

FdW: Yeah, and that...

Ard: …an argument could be made: you could say, well, you know, Professor De Waal says this is what's evolved and morality is built up on this, so all of these tendencies we have to take care of people who are not in our in-group – we should ignore them, because, you know, they're not part of who we really are?

FdW: They're… they're not part of our, let's say, biological foundation, so to speak [yeah], but – and that's why I'm saying it – it's a secondary intellectual process, where we say, well, we have extracted certain principles from how we treat each other, and we're now trying to apply those principles outside of the group.

Ard: Yeah, but those are very important. That's a very important step, right?

FdW: It's a very important step, and I think we can congratulate each other as humans that we are capable of making that step – it's a wonderful step but it's a fragile step, I think.

Ard: And we often disagree on how that step should be done. And you're not sure whether those thoughts – like taking care of those that are outside of our group – whether that's even true, or just something that we've agreed on?

FdW: Well, what we try to do under these circumstances is to look at the out-group as if they're sort of an in-group, and nowadays in society, with the internet, and the planet having gotten smaller, that's actually easier. So if you hear about the tsunami in Japan, for example – in the old days you would read it in a newspaper, you would read it ten days later probably, and you would not be very much affected, because it's basically text that you see.

Now you see video images, you see interviews with people who've lost their home, they've lost their children, they're crying on camera – the bodily connection is there that you would never have from a newspaper – and as a result you feel closer to them, and as a result you're going to give money, or at least you worry about them. And so we are shrinking the world, and we're actually treating out-groups more like in-groups, and so we're actually getting back to these fundamental processes even for the out-group.

Ard: But isn't it slightly worrying that what makes us want to help the out-group is tricking ourselves into thinking they're part of our in-group?

FdW: Yeah.

Ard: Shouldn't we want to take care of them, regardless of whether we feel like they're part of their in-group?

FdW: No, but what we do is we manipulate their image, basically, in our mind, and instead of looking at them as strangers that we don't care about, we say but they're human, just like us, and so that's how we start to think about them. And the opposite process is of course dehumanisation which we also do with certain enemies, which is we're going to describe them as horrible and not worthy of consideration, and so we manipulate the image of others.

Ard: So how do we decide which of those two manipulations to use? Yesterday we had an interesting interview with Dr Gwen Patton, who is a civil rights activist, and there was – there is – a very strong in-group/out-group divide between blacks and whites in the United States. So what you're saying is we can manipulate our sense of the other, by dehumanising them or humanising them, and that's obviously a very powerful strategy to change people's behaviour, but it doesn't tell us whether we should dehumanise them or humanise them. Maybe we ought to dehumanise our enemies and humanise our friends? So the deeper question is: how do we decide which of these two strategies to take with a particular group, like blacks and whites?

FdW: I feel humanisation is always better than dehumanisation.

Ard: Why do you feel that?

David: He's empathetic.

Ard: Empathetic.

FdW: I'm empathetic, yeah.

David: You want it to be grounded rationally, don't you?

Ard: I want it to be. I think it's really important! I think it's really important because I think if it's just grounded in our passions, then the minute I realise that that's what it is, I can start manipulating it in myself and in others.

FdW: But that's a bit like Immanuel Kant: he said compassion is beautiful, but it's totally useless, because…

Ard: Well I don't think it's useless. I think it's really... that’s where I disagree.

FdW: It's beautiful in the sense that he wanted to give it something, but duty is really what it comes down to – that's the Kantian view. And Kant is as anti-biological as you can get, I think. Kant thinks everything is top-down…

Ard: Let's say that compassion is a good thing – which I think biologically we agree with – and evolution over time has moved us towards compassion, and it's good because our moral instincts correlate with this thing which is true. But what if it were false? How do we know whether our evolutionary instincts are all correlated with the things that are true? We have to work at it. These are important things to worry about.

FdW: I'm not sure that there are absolute truths out there in that...

David: No, I'm not sure either.

Ard: You're not sure?

David: I don't think they're correlating with anything that's out there.

Ard: Well I think they are. I think that they are.


David: Oh no.

Ard: Well this is where we disagree.

David: Yeah.

Ard: But you think there's no… you don't believe in these things?

FdW: I'm not sure. I'm an expert on primate behaviour, and this level of discourse, where we look at whether there are absolute moral truths… I don't know what to do with that.

The religious impulse

‘For me the question as a biologist is if all humans in the world have religions – which is true, there are no exceptions – if all of them believe in the supernatural or have some sort of grand scale ritual religion, there must be a reason for that.’

Transcript

David: You wrote about your worries concerning people who are very, very certain. I wonder if you can tell us about that because it's something I share?

FdW: Yeah, in my book, The Bonobo and the Atheist, I talk about dogmatism, and I felt that the neo-atheists were dogmatic. Now, I'm saying that as someone from a country – I'm from the Netherlands – where more than 50% of the people are atheist, or say they are atheist. And so if you say you are atheist, it's no big deal: no one blinks an eye. And so I grew up in an environment where whether you're religious or not religious really doesn't matter that much. It's completely up to you.

Then I come here to the US, and all of a sudden I'm surrounded by fanatical atheists who are certain that either God doesn't exist, or God doesn't matter, or we don't need religion, or religion is entirely bad and responsible for everything that's bad in the world. And I can't get used to that kind of atheism, and so I wonder: where does their certainty come from that religion is so bad?

For me the question, as a biologist, is if all humans in the world have religions – which is true, there are no exceptions – if all of them believe in the supernatural or have some sort of grand-scale ritual religion, there must be a reason for that. It must be doing something for the human species, and I don't know what that is exactly. I don't think it is the source of human morality. I think religion may play a role and add to it, and that's all possible, but I don't think religion invented morality, so to speak.

But I'm more puzzled by religion – why do we have it and what is it good for, because it must be good for something – than I am certain that God doesn't exist. Now, I'm personally not religious: I don't find the question of God's existence particularly interesting, because it's an unanswerable question. But I do think that religion is an interesting phenomenon, and I meet so many religious people who are not dogmatic about it. They believe this, but they don't believe that, and they have a disagreement with other people in their own religion about this or that. And so a lot of people are not necessarily fundamentalist Protestants or fundamentalist Catholics or Muslims. There are a lot of moderates in the world, and that's actually the more typical religious person that I meet. And for the atheists also, I would prefer them to be a little bit less dogmatic and more open too.

David: Well we've encountered people who have said you cannot be a scientist and believe in science but also be religious and believe in God and...

FdW: Why would that be?

David: I don't know.

Ard: Well, I don't know.

David: But they're very certain about it.

FdW: It's true that if you use God to explain certain phenomena, then you're in trouble as a scientist. Let's say I see my chimpanzees do something, and I say, ‘Well, I really don't know what it is – it must be God who's doing it.’ Then people would, of course, say, ‘Well, you're giving up on the problem,’ which is true. I'm giving up on the problem, and that's something that, as scientists, we should never do: we should never give up on a problem. And so if we want to explain certain things, we don't want to invoke God as an explanation.

But I think religion is much broader than just an explanatory system. That's part of what religion does for some people who read the Bible literally and take that as the explanation of the world. But a lot of people don't look at religion, necessarily, that way as an explanation of phenomena. They look at religion more as a guideline: how should I lead my life? Now, science has nothing to tell you about that. Science doesn't give you much guidance in your life. What is a good life? What is a bad life? How should I live my life? That's not an answer that you will get from science.

 

Frans de Waal is C.H. Candler Professor of Psychology at Emory University and director of The Living Links Centre at Yerkes. One of the world’s leading primatologists, he has carried out ground-breaking research into empathy, social reciprocity and conflict-resolution in primates, as well as the origins of morality and justice in human society.

His books include: Are We Smart Enough to Know How Smart Animals Are? (2016); The Bonobo and the Atheist: In Search of Humanism Among the Primates (2014); Chimpanzee Politics: Power and Sex among Apes (2007); Bonobo: The Forgotten Ape (1998).

Quotes from the interview

When we're killing each other, we say, ‘We're acting like animals.’ And so all the nasty things that we do and the selfish things – I've called that ‘veneer theory’. It's the idea that all the basic emotions of humans are bad, and then there's a little veneer of morality that we achieve, culturally or religiously or however we achieve it. And so morality is just a little veneer over the bad human nature that we have. I don't buy into that at all.
I would never trust scientists to tell me what is moral or immoral, because they're a bit like philosophers: they're sort of narrow in how they look at things.
I always consider empathy, sort of, the foundation of morality. It’s that if I'm not interested in others, and the wellbeing of others, then I cannot be a moral being really. If you don't have any level of generosity and interest in others, then you would never be a moral being: you would be a psychopath basically.
I would say most animals cannot survive without cooperation. So cooperation occurs between single-cell organisms, between insects, all the mammals – well there's some solitary mammals, but the majority live in either troops or groups or herds. So the level of cooperation, it's all sorts of different levels, but it's everywhere.

Frans de Waal Full Interview Transcript

What motivates moral behaviour?


Ard: When we behave in a moral way, is that because we reason ourselves towards that or is it because it's something that's just instinctive inside of us?

FdW: If we had to reason ourselves to it every time we did, it would be a pretty cumbersome system, no? Each time I have a choice between being kind or not kind, I would have to go through all the reasoning why. That would be a terrible system. So I think there's a lot of intuitive and impulsive behaviour, and that some people end up on the moral side and some people won't, and I think that the justification afterwards is definitely at that point for our behaviour. At that point we're going to use all sorts of reasons and rationales, and I think philosophy has gotten it a little bit backward because they have focused on the justification part as if that's the motivation part, which of course it isn't.

Ard: So what is the motivation?

FdW: Well, there's lots of pro-social motivations that we have and that we share with other mammals and with other animals.

David: What do you mean by pro-social?

FdW: Pro-social? I would mean it's a bit more than altruistic: pro-social is sort of a motivational system. Altruistic in biology is often used in a very functional sense: I do something costly for myself that benefits you regardless of my motivation – so a bee who stings you, which is probably in an aggressive motivation, is defending the hive. We call that altruistic because the bee loses its life, and is giving its life for the hive. But we don't necessarily think that the bee has a pro-social motivation at that point. So pro-social usually refers more to the motivation part: why do I do these things intentionally. And we use that also now in the animal literature – we use that term.

David: So you think the motives, it's not to do with rationality, it's to do with this pro-social idea?

FdW: Yes, motives usually don't come from reason: reason comes later, I think. And so, yes, sometimes we sit down and take a decision, like you need to decide am I going to help my grandmother – yes or no – today? And so you may try to come to a rational decision given all the other circumstances, but most of the time I don't think we go through all these reasons, and we have just a certain motivation to do this or to do that.

Ard: Sometimes people think that our kind of instincts that we have are dangerous ones, where nature is read in tooth and claw – we're trying to beat up our enemies and win – and so we have to subjugate those instincts.

FdW: Yeah, that's a view of nature that I don't hold necessarily.

Ard: What would that view be?

FdW: Well to use nature only for the negative side of human nature: so when we're killing each other, we say, ‘We're acting like animals.’ And so all the nasty things that we do and the selfish things, I've called that ‘veneer theory’. It's like all the basic emotions of humans are bad, and then there's a little veneer of morality that we achieve, culturally or religiously or whatever, and how we achieve that. And so morality is just a little veneer over the bad human nature that we have.

I don't buy into that at all. I think humans have all these tendencies: we have good tendencies and bad ones, and they're all connected to our human nature and our primate nature. And you can recognise all of that in the chimpanzee as well. The chimpanzee can be very nasty and they can kill each other, and people have got obsessed by the killing that they do and said, ‘Well, chimpanzees are nasty animals.’ And so then when you say chimpanzees also have empathy and they care for each other, they're very surprised, because that's not consistent with what they think a chimpanzee is. But just like humans can kill each other and be very nasty, humans can also be extremely altruistic and kind to each other, and so we have that whole spectrum and many, many mammals have that whole spectrum.

Ard So, you know this veneer theory you called – which is kind of like this very thin layer of morality over this terribly dangerous animal nature – where do you think that came from historically?

FdW: Yeah, that's a very dangerous idea, because it basically says that deep down we are bad and with a lot of struggle we can be good, but as soon as something happens it disappears. It's a very pessimistic view of human nature. Huxley had that view.  So Thomas Henry Huxley, who was a contemporary of Darwin, the big defender of Darwin, he didn't really believe in human nature being any good. Darwin was much more a believer in that, and Darwin even talked about sympathy in animals, and he didn't look at humans as automatons, like the way Huxley looked at it. So Huxley had this view that goodness cannot come from evolution – it's impossible – and Darwin never said that: he disagreed with him on that.

Ard: So in fact what you’re saying is the idea that underneath we're just animals and therefore selfish or bad is not a Darwinian idea?

FdW: No, it’s not. Darwin himself didn't think like that, and he also said, literally sometimes, that selfishness is really not what explains the behaviour of certain social animals. He felt they had a social instinct and morality was grounded in that social instinct, very similar to the views that I have, even though I have more precision because I'm talking about specific behaviours of animals. Darwin had, at an intuitive level, he had that insight also.

5.37 – Scientists and morality

David: If you say that the natural part of us is going to be aggressive and selfish and bad, in some way, then you're either left… you've got to say, well, where does the good part come from? And it seems to me in religion we say, well it comes from God, and then if you're not religious, you say, well it comes from rationality. So it seems to me that rationality steps in for the atheist where God used to be.

FdW: That’s what happened during the Renaissance, the philosophers did that. They said, well religion, let's move that to the side and we philosophers, we will propose rationality as an explanation of human morality. More recently there have been proposals, like Sam Harris and people like that, that science is going to solve the moral issue: science is going to tell us what is moral and immoral.

Ard: And what do you think about those kinds of proposals, because they sound attractive… science…

FdW: I would never trust scientists to tell me what is moral or immoral, because they're a bit like philosophers: they're sort of narrow in how they look at things. And if you look narrow enough, like, for example, take the utilitarian view, which is very popular amongst philosophers – that you do the greatest good for the greatest number of people – if you follow that rule, for example, I could give a very good scientific explanation why slavery would be beneficial: slavery is actually, rationally, a very good system. What's wrong with slavery? We could have that argument and I might win it. You know, I might say slavery is good, even though we now recognise that…

David: Well, the utilitarian would say, look, if we have to enslave a few people to benefit a larger number of people, then that's the greatest good for the greatest number of people, which is in fact the argument that was made.


7.24 – The foundation of morality

Ard: So could you give us some examples of this kinder behaviour in chimpanzees that you've seen?

FdW: Well chimpanzees, we had for example an old female who died recently, but she could barely walk anymore.

Ard: What was her name?

FdW: Her name was Penny, and she could barely walk anymore, and each time she would try to get up and get to the water spigot to get some water, younger females, but adult females, they would run over to the water and suck up a lot of water and bring it to her and spit it in her mouth so that she didn't need to, because her walking across the enclosure would be an enormous effort. Or they would push her up on the climbing frame if she tried to join a group of grooming chimps and get her there.

And we've seen many of these cases. We've seen recently a case of a male who was dying of something on his stomach and others taking care of him and bringing him a wood roll that they would put behind his back so that he could lean back. And they help each other on occasions, but they only help, of course, the individuals that they like. It's just like humans. In humans we have all these moral imperatives – you should do this under these circumstances and that under those circumstances – but it really applies only, that kind of behaviour, to individuals that you are close to.

David: You sent me a paper a few years ago where you discussed some fascinating experiments which were done at the end of the 50s, the 60s. There was one in particular where the monkey had been taught to pull a lever to get food, but then the experiment was changed and when it pulled the lever, another monkey it didn't know got a shock. Would you tell us about that experiment, because I was so struck by it.

FdW: Yeah, so these are experiments on empathy or sympathy that were done in the 50s that we wouldn't do anymore. I certainly I wouldn't do it, because it's a pretty horrible experiment, in the sense that you would have, let's say, one macaque who is sitting here, and he can pull a lever and get food – each time he pulls, he gets food. Then they're going to pair the lever with a shock to the partner, so as soon as you pull the lever, another monkey who is sitting there gets shocked. The monkey will then stop pulling. Some monkeys would stop pulling for five days. They would starve themselves for five days in order not shock the other monkey. So it is an interesting idea of aversiveness to the distress of somebody else.

So, it used to be thought, twenty years ago, thirty years ago, that empathy is a sort of decision: I decide to be empathic with you, or I decide to put myself in your shoes. That's not how it works at all. It's an automated process that is a very biased process in addition.

David: Does that mean that, for you, you think that when we're talking about… well people will endlessly talk about morals, and where do morals come from, and are there moral solutions, that it's not so much to do with thinking about it, but that, in some sense, the foundation of a moral system is built into us in our emotions?

FdW: Yes, I think so. I do think that you need an interest in other people, and so I always consider empathy, sort of, the foundation of morality. It’s that if I'm not interested in others, and the well-being of others, then I cannot be a moral being. If you don't have any level of generosity and interest in others, then you would never be a moral being: you would be a psychopath basically. And, actually, interestingly enough, that whole literature on selfish genes and how we humans are overly competitive and just like the rest of the animal kingdom, that was all a literature about psychopaths, I think. It was basically describing the human species as a psychopath: all we can think about is what is good for me, and very reluctantly thinking about what is good for you.

So that view of the human was popular in the 70s and 80s, I mean, after all, there was this biologist, I think, Ghiselin, who said, ‘Scratch an altruist and watch a hypocrite bleed.’ That was a very popular saying, and basically describes a psychopath. And everyone was happy with that at the time.

Ard: And the idea would be that if somebody shows behaviour that looks like something good, fundamentally they're doing it for selfish reasons.

FdW: Yeah, of course.

Ard: And you scratch them and they're really hypocrites?

FdW: Yes, there cannot be genuine altruism, there cannot be genuine kindness because there's always a selfish agenda behind it.

Ard: And you disagree? You think that there can be genuine altruism?

FdW: Oh, of course, yeah, yeah, I absolutely think that.

11.54 – COOPERATION THROUGHOUT NATURE

Ard: So when you first made these discoveries, what did the scientific community… how did they react to this?

FdW: Well, that's an interesting historical question because when I was a student all we could talk about was violence and aggression, and in that context I discovered, by accident, it was not necessarily my intention, but I discovered that chimpanzees reconcile after their fights. So they may have a big fight, two males for example, and then ten minutes later one of them approaches the other and they kiss and embrace, and then they groom each other for an hour, and I thought this was awfully interesting. I found that more interesting than the fight itself. And so when I first presented that at meetings, scientific meetings, first of all people didn't know what to do with it, because they'd never heard of something like this, and second they would say, ‘Well, maybe chimps do that, but my animals certainly don't do it.’

So they wanted to make an exception for chimps, maybe, but they really didn't buy into it as an important problem. Now we know that lots of social animals reconcile after fights. It's actually very common behaviour, and it’s related to the fact that they have competition, but they also depend on each other, so it's just as in humans.

Ard: So they have cooperation?

FdW: I would say most animals cannot survive without cooperation. So cooperation occurs between single-cell organisms, between insects, all the mammals – well, there's some solitary mammals, but the majority live in either troops or groups or herds. So the level of cooperation, it's all sorts of different levels, but it's everywhere.
Ard: So it's like an instinct? Would you say cooperation is an instinct as much as competition is?

FdW: Yes, of course.

David: Why do you think we haven't seen it before?

FdW: I think after World War II there was this obsession with competition and aggression and selfishness that lasted until the 1980s or so. So our genes were selfish, we were selfish, and cooperation was a special case that we needed to explain, and we had a lot of trouble explaining it.

But now I think it's very well recognised that not just humans, but all sorts of animals, are cooperative, and there are all these tests of cooperation. So you can set up a test with chimpanzees, as we did… This was actually developed a hundred years ago here at Yerkes Primate Centre by Robert Yerkes and his people.
So they would set up a box that is too heavy for one chimp to pull in, and they would put food on the box. So you put two chimps who have ropes, – so they will have to cooperate – and they will have to synchronise and pull it in at the same time, and then they can share the food afterwards.

And we recently set it up in our group of chimps where, instead of just having two chimps, we used fifteen chimps. So now you have the potential of competition. So you have a box with food that they can pull into, two chimps or three chimps at a time. But everyone is present, and so the highest ranking members of the group, they can kick you away or they can steal your food, or someone can sit next to you and try to steal your food, free-loading basically, the free-loading problem.

And so we wanted to see how the chimps handle this situation where they have the options between competition and cooperation. If competition is stronger as an instinct, so to speak, then cooperation would all fall apart and there would be bickering and fighting. But actually we had 3,500 cooperative pulls in our test. So basically they were cooperating all the time, and they handled the competition very well. They were very good at either alternating positions or getting rid of free-loaders, or not working with someone who's too competitive so that he learns to become cooperative. And all these things were happening at that time.

David: That's terribly sophisticated behaviour.

FdW: Yeah, but they do that in the field also. We were not particularly surprised because in the field we know the chimpanzees hunt together for monkeys, for example, and they share the meat afterwards. So it's not like a totally new thing, but now we had sort of formalised it and shown that that's what's possible.

I wrote a book, Chimpanzee Politics, which is all about how chimpanzees compete for power, but it is cooperative, otherwise we wouldn't call it politics – then it would be just a pecking order like among chickens, and the biggest, meanest chicken is top. But that's not how chimpanzees operate.

In chimpanzees it's entirely possible for the smallest male to be the alpha-male. Now how is that possible? That's only because that male is maybe very diplomatic and grooms particular partners and gives them particular favours after he has reached the top. And so you don't need to be a big, heavy, strong male to be the alpha-male, and that is because the system is basically a cooperative system, where you pick out your partners and your partners support you as long as you're useful to them – because why would they support you if you withhold all the benefits from them, and so on?

16:40 – WHO INVENTED FAIRNESS?

David: I just briefly wanted to go back to that, that experiment of the monkey that starved itself. That seems to me an extraordinary thing for a monkey to do. I mean, did it not surprise them when they did that in the 50s?

FdW: I did that together with Sarah Brosnan, and we were doing experiments on economics in monkeys. Sort of like, how much food do you need for a task? How willing are you to work with a partner? How willing are you to share with a partner who helped you or did not help you? … and all of these things. And in that context, by accident, we discovered that the monkeys were very sensitive to what the partner was getting compared to what they were getting. It doesn't make any sense, because if you read the literature on rats pressing levers and things – all that animal learning theory – there's never any talk about this rat thinking about what the other rat is getting.
So we didn't know what to do with that, and we started to test it out systematically. And so we developed a very simple task. We would give a monkey a pebble in his cage; we would hold up a hand and he would have to give it back to us, and as soon as he gave it back, he got a reward – a very simple task.

Now, if you do that with one monkey and you use little pieces of cucumber, you can do that thirty times in a row, and he will eat a lot of cucumber. If you put a partner next to him and you give that partner cucumber, they will both do it thirty times in a row and they're perfectly fine. But if you give the partner grapes, and grapes are ten times better than cucumber, then the one who gets the cucumber is going to refuse it. And not only is he going to refuse, he's going to get agitated: he will shake the cage and throw the cucumber out. He becomes very upset by the whole situation, which is irrational, because the cucumber was good before – why is it not good anymore?

We called it inequity aversion, but the media immediately talked about fairness, saying the monkeys have a sense of fairness. Then a philosopher wrote to us and said it's impossible for monkeys to have a sense of fairness, because fairness was discovered during the French Revolution! Basically, that tells us morality comes from a bunch of old guys in Paris who sit around and say, 'Well, fairness would be a good idea, you know.'

David: And everyone else in the world went, ‘Oh my God, what a good idea.’

FdW: Yeah, ‘let’s spread the word on fairness; it's a good thing, you know.’ And so that's how they think we arrive, through reasoning and logic, at a point where we say fairness is a good thing and we implement it in society. But it is completely the other way around. Young children, two-year-old children, already have a sense of fairness, just like my monkeys do. And so it's an emotional process: you compare what you get with what somebody else gets.
People have done it with dogs; they've done it with crows now, because inequity aversion in animals is a little up-and-coming field. And basically we find this in many cooperative species. Our prediction is that in uncooperative species, solitary animals – let's say the domestic cat, which is a bit of a solitary animal – you will not find it as much. So it is related, we think, to animals needing cooperation partners with whom they have to share things, and if you're not willing to share, you're not a good partner.

David: In that experiment, did the monkey who was getting the cucumber sometimes just look at the one with the grapes and say, ‘Hey’?

FdW: No, chimpanzees will do that.

David: They do? They'll say, ‘Give me the grapes, come on hand them over’?

FdW: The monkeys have a sense of fairness that is at the level of a two- or three-year-old child, who also doesn't look for hand-outs and things like that. But the chimpanzees, yes. In chimpanzees you may have a situation where the one who gets the grape will refuse the grape until the other one also gets a grape. People have said, why would you call it a sense of fairness? You could call it resentment: I resent that you get more than me.

Ard: Yeah, sure.

FdW: But then you still wonder, why do I resent that you get more than me, and under what kind of circumstances? And certainly, when chimpanzees go further than that and share with the one who gets less – because the monkeys don't do that, but the chimps do – they have an understanding that that's also probably necessary for cooperative relationships. And so it's much more than just resentment. I think it is a sense of fairness, and a sense that they understand that if things are not equally divided, then cooperation is going to fall apart. So there's a self-interested component to it. That is, for my cooperative relationships I need this kind of sense of fairness, and that's what we're seeing in humans also. Humans don't have a sense of fairness for no reason at all: they have a sense of fairness because our cooperative societies rely on it.

21:22 – IN-GROUPS AND OUT-GROUPS

Ard: Could you say that something like our empathy instinct, if you want to call it that, is like a moral compass pointing us in that direction?

FdW: Yes, if that's how you want to phrase it, yes. We have a built-in sense of fairness which relates to our cooperative tendencies. We have a sense of reciprocity, which is very important in chimpanzee society: doing each other favours, keeping obligations, and so on. We have a sense of empathy, and we're in tune with the situation and the feelings of others. And, yes, all those natural tendencies that we have steer us in a particular direction in our social relationships, especially social relationships with the in-group.

Now with the out-group the story is sometimes quite different, and that's why we have a lot of trouble applying our moral principles outside of our group. And at the moment, of course, in the world we're trying to do that. We talk about universal human rights, which is sort of stretching… We're trying to stretch the system basically.

Ard: So, thinking about the parable of the Good Samaritan, in some sense that's a story about care for someone who's in the out-group. Is that pushing us beyond our moral compass?

FdW: Yes, I think that is typically human, and that's where the top-down processes come in. So I think we have a lot of bottom-up morality, which basically comes from our primate social tendencies – and that's a view that Darwin also had. But then what we humans do – and I don't think my chimpanzees do that in any way – is we try to translate that into justifications and principles, and we come up with a narrative – and actually the Good Samaritan is a narrative – a narrative that justifies our behaviour. And then we are capable of applying those principles outside the usual box, which is the in-group where it really evolved. I think morality evolved for the in-group, not for the out-group. We could hack off their heads; it was fine.

But then with our mental capacities, we say, ‘Well, why is that fine? Maybe it's not fine. Let's question it.’ And, for example, the Geneva Convention, which tells us how to treat our enemies, is such an innovation, and I don't think chimpanzees would ever come up with a Geneva Convention. ‘The enemies?’ ‘Get rid of them. That's the main thing. Enemies are not there to be treated well.’

So, that's a human thing, I think, and that's a top-down process where we then use these justifications and narratives that we have and say can we apply them outside of the group, yes or no? It's a sort of intellectual experiment. And, of course, if the in-group was not doing well – let's say we're all starving and we have terrible circumstances – we might care less about the out-group. And so it is dependent on the circumstances. But nowadays we live in societies which are wealthy enough that we can start thinking about these issues.

Ard: So in some sense these are things that transcend our kind of moral instincts…

FdW: They're built on them, because we still use that bottom-up morality to arrive at the principles and justifications, but then we take it one step further, which is an intellectual step – really more a cognitive step – and that's maybe why I don't see any of that in my primates.

24:46 – DEBATING MORAL TRUTHS

Ard: And so do you think that these moral ideas, like taking care of those in the out-group, are real truths that are out there, that we discover, or are they just something that we've kind of made up ourselves?

FdW: No, I think we have arrived at them by intellectual means. They are more fragile, I think, and that's why I'm saying that as long as your group is doing well, you can do that; but if your group is not doing so well, they're fragile, because we always care, first of all, about our own group.

Ard: But do you think that they're actually true, just like one plus one equals two?

FdW: I don't know about that.

Ard: You don't?

FdW: I don't know if there's absolute truths in the world.

Ard: Okay.

David: But does it matter for moral behaviour whether there are absolute moral truths?

FdW: No, I don't think so. Human behaviour is based on emotions and intuitions and social relationships and social strategies, and then the rationalisation is something that comes afterwards. So we're very good at rationalising afterwards. We're very good at justifying behaviour afterwards. But to make that the basis of human behaviour, or the basis of human morality, is a fundamental mistake, I think.

David: Yeah.

Ard: So what would the basis be then?

FdW: The basis is our evolved tendencies to be social, which include caring for each other, but also it includes caring for ourselves, of course.

Ard: Yeah, but those are tendencies that have evolved to care for our in-groups, right? Whereas often the difficult thing is caring for those that are outside of our group, so…

FdW: Yeah, and that...

Ard: …an argument could be: you could say, well, you know, Professor De Waal says this is what's evolved, morality is built up on this, so all of these tendencies we have to take care of people who are not in our in-group – we should ignore them because they're just, you know, not part of who we really are?

FdW: They're… they're not part of our, let's say, biological foundation, so to speak [yeah], but – and that's why I'm saying it – it's a secondary intellectual process, where we say, well, we have extracted certain principles from how we treat each other, and we're now trying to apply those principles outside of the group.

Ard: Yeah, but those are very important. That's a very important step, right?

FdW: It's a very important step, and I think we can congratulate each other as humans that we are capable of making that step – it's a wonderful step but it's a fragile step, I think.

Ard: And we often disagree on how that step should be taken. And you're not sure whether those thoughts – like taking care of those outside of our group – are even true, rather than just something we've agreed on?

FdW: Well, what we try to do under these circumstances is look at the out-group as if they're sort of an in-group, and nowadays, with the internet and the planet having gotten smaller, that's actually easier. So if you hear about the tsunami in Japan, for example – in the old days you would read about it in a newspaper, probably ten days later, and you would not be very much affected, because it's basically text that you see.

Now you see video images, you see interviews with people who've lost their home, they've lost their children, they're crying on camera – there's a bodily connection there that you would never get from a newspaper – and as a result you feel closer to them, and you're going to give money, or at least you worry about them. And so we are shrinking the world, and we're actually treating out-groups more like in-groups, and so we're getting back to these fundamental processes even for the out-group.

Ard: But isn't it slightly worrying that what makes us want to help the out-group is tricking ourselves into thinking they're part of our in-group?

FdW: Yeah.

Ard: Shouldn't we want to take care of them, regardless of whether we feel like they're part of our in-group?

FdW: No, but what we do is manipulate their image, basically, in our mind, and instead of looking at them as strangers that we don't care about, we say, but they're human, just like us, and so that's how we start to think about them. And the opposite process is, of course, dehumanisation, which we also do with certain enemies: we describe them as horrible and not worthy of consideration. So we manipulate the image of others.

Ard: So how do we decide which of those two manipulations to use? Yesterday we had an interesting interview with Dr Gwen Patton, who is a civil rights activist, and there was – there is – a very strong in-group/out-group divide between blacks and whites in the United States. So what you're saying is that we can manipulate our sense of the other by dehumanising them or humanising them, and that's obviously a very powerful strategy for changing people's behaviour, but it doesn't tell us whether we should dehumanise them or humanise them. Maybe we ought to dehumanise our enemies and humanise our friends? So the deeper question is: how do we decide which of these two strategies to take with a particular group, like blacks and whites?

FdW: I feel humanisation is always better than dehumanisation.

Ard: Why do you feel that?

David: He's empathetic.

Ard: Empathetic.

FdW: I'm empathetic, yeah.

David: You want it to be grounded rationally, don't you?

Ard: I want it to be. I think it's really important! I think it's really important because I think if it's just grounded in our passions, then the minute I realise that that's what it is, I can start manipulating it in myself and in others.

FdW: But that's a bit like Immanuel Kant: he said compassion is beautiful, but it's totally useless, because...

Ard: Well I don't think it's useless. I think it's really... that’s where I disagree.

FdW: It's beautiful in the sense that he wanted to give it something, but duty is really what it comes down to, and that's the Kantian view. And Kant is as anti-biological as you can get, I think. Kant thinks everything is top-down…

Ard: Let's say that compassion is a good thing, which I think biologically we agree with, and evolution over time has steered us towards compassion, and it's good because our moral instincts correlate with this thing which is true. But what if it were false? How do we know whether our evolutionary instincts are all correlated with the things that are true? We have to work that out. These are important things to worry about.

FdW: I'm not sure that there are absolute truths out there in that...

David: No, I'm not sure either.

Ard: You're not sure?

David: I don't think they're correlating with anything that's out there.

Ard: Well I think they are. I think that they are.

David: Oh no.

Ard: Well this is where we disagree.

David: Yeah.

Ard: But you think there's no… you don't believe in these things?

FdW: I'm not sure. I'm an expert on primate behaviour, and this level of discourse, where we ask whether there are absolute moral truths… I don't know what to do with that.

31:40 – THE RELIGIOUS IMPULSE

David: You wrote about your worries concerning people who are very, very certain. I wonder if you can tell us about that because it's something I share?

FdW: Yeah, in my book, The Bonobo and the Atheist, I talk about dogmatism, and so I felt that the neo-atheists were dogmatic. Now, I'm saying that as someone from a country – I'm from the Netherlands – where more than 50% of the people are atheist, or say they are atheist. And so if you say you are atheist, it's no big deal. No one blinks an eye if you say that. And so I grew up in an environment where whether you're religious or not religious really doesn't matter that much. It's completely up to you.

Then I come here to the US, and all of a sudden I'm surrounded by fanatical atheists who are certain that either God doesn't exist, or God doesn't matter, or we don't need religion, or religion is entirely bad and responsible for everything that's bad in the world. And I can't get used to that kind of atheism, and so I wonder, where does their certainty come from that religion is so bad?

For me the question, as a biologist, is this: if all humans in the world have religions – which is true, there are no exceptions – if all of them believe in the supernatural or have some sort of grand-scale ritual religion, there must be a reason for that. It must be doing something for the human species, and I don't know what that is exactly. I don't think it is the source of human morality. I think religion may play a role and add to it, and that's all possible, but I don't think religion invented morality, so to speak.

But I'm more puzzled by religion – why do we have it and what is it good for, because it must be good for something – than I am certain that God doesn't exist. Now, I'm personally not religious: I don't find the question of God's existence particularly interesting, because it's an unanswerable question. But I do think that religion is an interesting phenomenon, and I meet so many religious people who are not dogmatic about it. They believe this, but they don't believe that, and they have a disagreement with other people in their own religion about this or that. And so a lot of people are not necessarily fundamentalist Protestants or fundamentalist Catholics or fundamentalist Muslims. There are a lot of moderates in the world, and that's actually the more typical religious person that I meet. And for the atheists also, I would prefer them to be a little bit less dogmatic and more open too.

David: Well we've encountered people who have said you cannot be a scientist and believe in science but also be religious and believe in God and...

FdW: Why would that be?

David: I don't know.

Ard: Well, I don't know.

David: But they're very certain about it.

FdW: It's true that if you use God to explain certain phenomena, then you're in trouble as a scientist. Let's say I see my chimpanzees do something, and I say, 'Well, I really don't know – it must be God who's doing it.' Then people would, of course, say, 'Well, you're giving up on the problem,' which is true. I'm giving up on the problem, and that's something we as scientists should never do: we should never give up on a problem. And so if we want to explain certain things, we don't want to invoke God as an explanation.

But I think religion is much broader than just an explanatory system. That's part of what religion does for some people who read the Bible literally and take that as the explanation of the world. But a lot of people don't look at religion, necessarily, that way as an explanation of phenomena. They look at religion more as a guideline: how should I lead my life? Now, science has nothing to tell you about that. Science doesn't give you much guidance in your life. What is a good life? What is a bad life? How should I live my life? That's not an answer that you will get from science.
