Ever read a book, or learnt a concept, that just refuses to leave you alone, like a mystery stone in your shoe? No matter how much you try to shake it out, it comes back to irritate you.
I recently finished reading The Righteous Mind: Why Good People Are Divided by Politics and Religion by Jonathan Haidt. I know, I'm late to the party. It was first published in 2012, and most good political tragics would have read it ages ago. But I'm a firm believer in reading books when the time's right. I think the time is right to ponder this book.
This book has a lot going on, but I want to focus on two key insights that stood out to me.
We are all just press secretaries justifying our reckons
Ugh. This idea has ruined me. Now that I've seen it I can't stop seeing it.
Basically, Haidt's argument is that on moral matters, intuition comes first and strategic reasoning comes second. Rationality always plays catch-up to our gut instincts.
Haidt uses the metaphor of a mind divided into two parts: an elephant (automatic, intuitive thinking) and a rider (controlled, rational processing). The tiny rider serves the powerful elephant, not the other way around. You can see the elephant in action when people have strong gut feelings about a moral issue. The rider then gets called in post-hoc to rationally justify the response.
For example, you might start with the moral judgment that income inequality is bad. Rational reasoning then kicks in to justify that view.
Now, I've read enough philosophy and psychology that this shouldn't have been terribly shocking. Intellectually, I've understood the limitations of rationality and reasoning for a long time. But I'd never fully grasped the implications.
This idea was eye-opening because it helps explain why some political reasoning (I consider political arguments and moral arguments interchangeable) is just so bad. To be honest, it's like some people don't even try. Or they'll come up with a rational justification, but when you challenge that argument, they come up with a different one. And then another, and another. They won't let that gut feeling go. And at some point you realise: this isn't actually a conversation based on reason.
I know NZ libertarians aren't supposed to talk about incest, but Haidt uses it as an example and I'm going to repeat it because it illustrates the point well. Imagine asking someone whether it was morally acceptable for a brother and sister to have sex, if no one was there to witness it, they used two forms of contraception, and it was consensual. There's a gut reaction to this example, which is why I think it's effective: a lot of people will claim that it is still morally wrong. Then you ask them why. And that's when things unravel. They'll come up with lame excuses like "someone could've walked in" or "the contraception may fail". Even when you point out that the scenario rules those reasons out, they refuse to change their mind.
If you've ever been in a political or moral debate with someone with strong priors, the exchange will feel familiar.
Why do people do this?
Basically, because inside all of us is a mini press secretary that automatically seeks to justify our moral sentiments. The press secretary doesn't just work to convince others; it's so effective that it often convinces us too. Ever told yourself a lie so good that you eventually convinced yourself it was true? You can thank your press secretary for that. When we desperately want to believe something is true, all we have to ask ourselves is 'can I believe that?' Conversely, when we desperately don't want to believe something, we ask ourselves 'must I believe that?' The motives aren't purely selfish either: as social creatures, we want to make 'our team' look good. As Haidt describes:
In moral and political matters we are often groupish rather than selfish. We deploy our reasoning skills to support our team, and to demonstrate commitment to our team.
Why does all this matter?
First, because it helped me understand the limitations of appealing to rationality and reason when hoping to change someone's mind.
Second, this idea can also help explain why people believe things even when their reasons are really bad. Hear me out: could this help us understand anti-vaccine disinformation? A common narrative is that disinformation and misinformation are to blame for brainwashing people (I don't buy this characterisation, but it's a narrative). But what if the direction of causality is backwards? What if the moral sentiment comes first (bodily autonomy), and the reasoning acts as backfill? Sure, we can spend effort tackling the disinformation and misinformation, and much of it is indeed bad reasoning. But if reasoning is simply playing backfill, then you're not really getting to the heart of the matter.
Once you see it you can't un-see it.
Next time you engage with someone who you know strongly disagrees with your view, take a second to check whether you're trying to battle a Hydra. Sure, not all moral and political arguments are like that. Some are entered into in good faith and with open minds.
But what do you do when it seems like you're talking to a brick wall?
This brings me to my second point.
Arguing within the moral matrix
A useful takeaway from Haidt's book is his conceptualisation of moral tastebuds. We might all have the same tastebuds, but we don't all respond the same way to different flavours. When these tastebuds are triggered, they invoke a rapid and intuitive response. The six tastebuds Haidt identifies are:
- Care/harm
- Fairness/cheating
- Loyalty/betrayal
- Authority/subversion
- Sanctity/degradation
- Liberty/oppression
Take libertarians: we might react most strongly to the liberty/oppression tastebud, but we nevertheless lean towards the other tastebuds to varying degrees too. Liberals (in the American sense of the term) and conservatives can be understood the same way: we all share the same moral tastebuds but can differ quite significantly in the weight we give them. If you want to appeal to people with views different from your own, you need to appeal to the moral tastebuds they react most strongly to.
As Haidt puts it:
Once people join a political team, they get ensnared in its moral matrix. They see confirmation of their grand narrative everywhere, and it's difficult -- perhaps impossible -- to convince them that they are wrong if you argue with them from outside of their matrix.
This characterisation is simultaneously super obvious and rather counterintuitive. For example, it's really easy to forget that some people simply don't care about liberty as much as I do.
You make the most progress in political debates when you appeal to the values that the other person holds, rather than defending the values that you hold.
To conclude
More than anything, the theory has made me more appreciative of the times when people do change their minds. And it makes me wonder: why does this happen? An obvious answer is that sometimes people just don't have a moral stake in the matter, making them more open to rational reasoning. I can certainly identify cases where I personally just don't care enough about an issue; I'm totally willing to have my mind changed by a good argument because I have no moral or instinctive position on the matter.
There are also situations where the moral lens through which you view an issue can shift. Say you care about the government assisting the most vulnerable, and you passionately defend a policy because you believe it will help them. But then you learn that it doesn't actually help the people you're most concerned about. So you change your policy position. Your moral foundation hasn't changed; you still care about the same group of people. But your view on this particular policy shifts.
Of course, the above only works if people are truly honest about their moral underpinnings and what they claim to care about.
So, to wrap this all up: if you haven't already read this book, go in with eyes wide open. If you think you can change the world simply by providing good evidence and a good argument, you'll probably find the book quite cynical. But if you already have an inkling that something else is driving political and moral views, something difficult to shift with evidence alone, then this is a good book to pick up.
Not to sound dramatic, but this book has kind of ruined my life, and I'm OK with that. That is, until I jump onto the next grand theory.