Ard: Let’s say I want you to work out the value of a human being. So what’s the value of each of us here?
PA: Well you could do that using economics, couldn’t you? If you wanted to, at the crude level.
Ard: I don’t mean just the monetary value.
PA: I’m asking you to try to clarify your question.
Ard: So I think when I say ‘value of a human being’, it’s something along the lines of do I think David has some kind of intrinsic value so that I shouldn’t kill him, for example.
PA: Well we’re on the edge of understanding the scientific basis of morality and ethics at this point.
Ard: We are?
PA: And moral principles, in my view, emerge from two sources. One is our ethological history, our evolutionary history, that we have learned to, if you like, contribute to stable societies through particular patterns of behaviour, and those have now so pervaded our behaviour that we regard them as our moral fibre.
And the second one is that we, with our big brains, can reflect, in quiet moments at least, on the consequences of our actions. So we’re not just automata in terms of our evolutionary history: we are reflective human beings. I’m not going to kill you because you might have similar views about me, so let’s compromise and not kill each other.
Ard: But why is that a scientific explanation?
PA: Well it’s a way of looking for the roots of morality. And if the roots of morality are ultimately the stability of societies, then you have to explore using the scientific method – whatever that quite means – but in terms of evidence, looking at genetics, looking at histories – what ultimately leads to stable societies.
Ard: And that will give us a science of ethics?
David: So for you that’s going to be in the genes, ultimately, isn’t it?
PA: Yes, ultimately in the genes, in the sense that we, with particular types of genes for not killing each other...
David: So you think a morality, a scientific morality, is ultimately going to have to look to biology, to evolutionary theories, genetic theories?
PA: Oh, absolutely, yes.
David: It’s going to have to be built from what we know about altruism and the genetics of altruism.
PA: Yes. And morality, if you like, is the ultimate emergent property of the gene.
Ard: But isn’t there a worry there? Right now I have this sense that I don’t want to kill David because I feel it would be a bad thing to do. But once I realise that it’s just my genes or my history telling me that, then there’s nothing more to it than that.
PA: But there is more than that, because you know that he might be thinking the same: to kill you.
Ard: That’s right.
PA: Society is a network of compromises, and we know that if we don’t go around randomly killing, then we’re more likely to survive.
Ard: That’s true. So as long as David doesn’t randomly kill, and you don’t randomly kill, I’m fine. But I can do whatever I like.
David: It sounds to me like civilisation is a very large Mexican stand-off!
PA: Well I’m afraid that’s largely true.
Ard: You were saying that goodness is linked to fitness in evolution, but in evolution things do annihilate one another, so…
PA: Because sometimes they…
Ard: Is that good?
PA: They only have an immediate view of their fitness.
Ard: Okay, so the goodness is much more complicated than evolution?
PA: Yes, and probably more far-sighted than even we are. I mean, we are the most far-sighted of all the creatures that there are, but whether we’re far-sighted enough, who knows?
Ard: But the science of good versus evil will come out of our understanding evolution?
PA: That’s a very deep question, I think, because, in a sense, the bigger our brain, the more able we are to transcend physical evolution. We can look to the future and see the consequences of sacrifice now. Well, in principle.
David: It sounds like, for you, what’s good and what’s bad is a human construction. We’ll agree what’s good and what’s bad. Is that right?
PA: And it changes.
David: And it changes. Whereas for you, Ard, some things just have to be good.
Ard: Yeah, I think some things…
David: Transcendently good? Have I used that word?
PA: Like what?
Ard: Like cruelty, I think, is always wrong; generosity is good, whether we agree on it or not. There are societies that think that cruelty is… They advocate cruelty towards certain other groups. I think they’re wrong. And I think they’re wrong regardless of what…
PA: So inhibition of the aspirations of others is bad? Is wrong?
Ard: I think that, for example…
PA: Whereas encouragement of the aspirations of anyone is good? Even Hitler?
Ard: No, that’s not what I’m saying. I’m saying that cruelty towards others can be bad, irrespective of whether the society thinks it is or it isn’t a good thing. So a classic example would be slavery, which is a kind of cruelty towards others. I would say that’s wrong, regardless of whether society itself thinks it’s a good thing.
PA: It depends what you mean by slavery, doesn’t it? We’re all slaves, in a certain sense. We are all employed, so we are slaves under the masters: our paymasters.