Troy Swanson: Dr. Mercier, welcome to Circulating Ideas. I am so happy to talk to you today. As you know, the past few years I’ve been doing interviews with librarians, journalists, and other writers about misinformation and disinformation, and I’d like to get librarians out there to be thinking about the roles that they may play in our healthy use of information and in combating misinformation and disinformation.
So, I love your books and I’m very happy to have you here to talk to you. To get us started, can I just ask you to tell us a little bit about your research and your background?
Hugo Mercier: Yes. Thank you for having me. It’s a pleasure to be here. So I am a cognitive psychologist, which means that I study how humans in particular process information, how we evaluate information, how we base our decisions on information that we have acquired. In particular, I have studied reasoning and argumentation, as well as how we evaluate communicated information. So when you read stuff, when you hear stuff, how do you decide what to believe and who to trust? As a cognitive psychologist, the main tool we use is the experimental method: we take a bunch of participants, we give them different stimuli, and we see how they react. But I’ve also been fortunate in being able to read more broadly in the literature on influence: propaganda and advertising, political campaigns, and all of these things that are at least partially treated by different disciplines.
Troy Swanson: You are the perfect person to have to talk about our current information environment, I think.
Hugo Mercier: Thank you.
Troy Swanson: Well, I want to ask you about something that librarians see almost every day. People visit our libraries and they access information, maybe in a book, maybe they watch a video, maybe they read an article, and this new information connects with their memories and their experiences, and ideally they learn something. Can you describe, from the perspective of a cognitive scientist, how you view this interaction?
Hugo Mercier: In a very short amount of time! Obviously the baggage, you know, our prior knowledge and our experiences, is going to play a huge role in how we acquire new information.
So one dimension of that is how we understand new information. Obviously, for every person there’s going to be a level of understanding that they have based on their prior knowledge, what they already know, and if it’s too complex or too simple they won’t find it enjoyable. They will lose interest pretty quickly, but that is not something that I am really an expert in. The thing that I have studied more is not how we understand information, but how we evaluate it. And so once you’ve understood it, or at least once you think you’ve understood the information, how do you decide how much weight to put on it compared to your priors?
Assuming the information is not completely in agreement with what you believed, how do you decide whether you should change your mind, and how much you should change your mind? So the first thing that you notice, as I kind of hinted at already, is that the main factor that will decide whether you accept something is whether it fits with your prior beliefs.
If you already think that somebody is a good person, and someone tells you something good that person has done, it’s quite plausible. If they tell you that they’ve done something horrible, you might be a bit more skeptical, and that’s all very rational; taking our priors into account makes sense. Usually most of our beliefs are sound, and so it makes sense to use them to evaluate information, but we have to be careful not to fall prey to biases such that we tend to mostly seek information that will confirm our priors. Then we run the risk of becoming kind of polarized and overconfident.
Troy Swanson: I love the point that you made about how we engage with information: if it’s too complicated, we’re going to let it go; if it’s too simple, it may not catch our attention. Even those first steps are things that I think we don’t often recognize when we’re dealing with this in our libraries.
And then the next step is how does this connect with what we already know? And I’m always interested in those. I think all of us in libraries have seen those kinds of aha moments. Like the light bulb goes off and sometimes that’s the most magical time when someone comes in thinking “A” and they read something new and then they go to “B” because they’ve interacted with something that they hadn’t thought about before. So it’s such a powerful step.
Hugo Mercier: I can imagine. It’s a very pleasant feeling for ourselves, and it’s nice to see it in others as well, that’s for sure, especially if we’ve played a role in it. One thing that makes this difficult, as you’re saying, is finding the right level at which to talk to people ourselves, or finding the right resource for them, in part because we tend to suffer from what is sometimes called the “curse of knowledge,” which means that it’s really hard to put ourselves in the shoes of someone who knows less than us. Once we’ve acquired knowledge, it’s really hard to pretend that we don’t know it, to pretend that we don’t know what we’ve now learned and ask, well, how would I process things then? Even if we had been in that position a few years ago, it’s really hard to remember how we got to this superior state of knowledge, and we tend to assume that people know too many things. That’s one of the reasons why teaching can be so difficult.
Troy Swanson: Yeah, absolutely. Let’s talk about The Enigma of Reason, which is a book that you did with Dan Sperber. It was really kind of eye-opening, a big step forward in recognizing that we’re not as rational as we think we are. You know, I think most people think of themselves as very rational, logical decision makers, but you really kind of turn reason and rationality upside down. So can you tell me how you view reason, especially through the lens of that book?
Hugo Mercier: Yes. So I will start with what you’re describing as the traditional view of reason, in which people tend to think of themselves as rational. And on the whole, I think we are kind of rational, but more specifically, many people, including many academics, think that by reasoning on their own, they can improve their decisions and their beliefs.
So, you know, before making a decision about buying a car, or deciding who to vote for, people like to think that if they really think very carefully about the decision, they will come to a better decision. And there’s a lot of evidence, whether experimental or historical observational evidence, suggesting that in many cases that doesn’t work, and that reasoning on our own, we mostly think of reasons why we were right. So let’s say you’re trying to think about who to vote for. If you already have a preference for one of the candidates, it’s likely that your solitary reasoning will just lead you to become even more confident in your choice, or even more polarized in your choice of candidate, which is not ideal.
So this is well known and well accepted in psychology, and what we try to do in the book is explain why that is the way people reason, and how they can reason better. Our hypothesis is that reason evolved to be used in social contexts, and in particular in order to argue: not to have a shouting match, but to exchange arguments with others and to justify our beliefs. In that context, it is okay to be biased. If I want to convince you of something, it makes sense that I should mostly give you arguments for my side or against your side; I’m not going to convince you otherwise. But in that context, if my arguments aren’t good enough, you will shoot them down and give me arguments for your point of view, and we can actually get to better beliefs and better decisions. So because reasoning evolved to be used in that kind of context, things work well in a social context, when you’re interacting with others, and it suffers from a number of flaws if you’re trying to reason on your own, or with people who agree with you, in which case you also do not have this kind of back and forth of a debate.
Troy Swanson: So as the argument is happening, as the reasoning is happening in a group context, it seems like some pieces are really important then: trusted people that you’re arguing with, stronger ties. You’re not thinking of, like, the Facebook argument.
Hugo Mercier: There are many features of the exchange of arguments that will make it more or less effective. Ideally it’s good if it’s in a smaller group, because if there are more than typically about five people at a dinner party, for instance, the conversation breaks down: either smaller groups form, or you have people taking turns telling stories, essentially. So ideally you want a small group of people. You also want people who have some common incentive. If you’re playing poker, you can’t convince someone else to fold or whatever, because they don’t have the same incentives at all. When you’re using an argument, you’re leveraging something that the other person already agrees with in order to get them to agree with something else that you think is right. And so if there is no common ground, then you can’t really find any efficient arguments.
And as I was mentioning, you need there to be some disagreement. If you just agree about everything, argumentation might actually be a bad thing. And this trust that you were mentioning: what matters is that either you trust each other or you trust the same sources, because many of the arguments…. If you’re talking about logic and mathematics, you don’t need trust, because you can just perfectly demonstrate the arguments; it’s just, is the argument good enough. But in everyday life, when you talk about politics or family or health, or most of these issues, trust will come into the picture.
So for instance, if I try to convince you that you should get vaccinated, my argument is not going to be a logical argument. It’s going to be essentially an argument from authority, saying, well, look, it’s going to be safe, there haven’t been any major side effects, and all that.
And if you don’t trust your authorities, if we don’t share the same background of trust, then my arguments, you will understand them, but you won’t accept them because they won’t be any good to you.
Troy Swanson: Can we dive a little bit deeper into this idea of intuitions? In The Enigma of Reason you described this interaction between the unconscious mind and conscious reasoning. And I think you’re pretty convincing that decision-making often comes from these intuitions, which you call “knowing without knowing how one knows.” And then there’s this interaction between these unconscious intuitions and the conscious mind. Can you help us understand a little bit more? What are these intuitions, and what does that interaction look like?
Hugo Mercier: So the vast majority of what our mind does is largely unconscious. Take vision, for example, which we study in cognitive psychology: you have photons that hit your retina, organized in a two-dimensional space, and then this is translated into the rich, colored, three-dimensional space that we have the impression of seeing. And we are absolutely not aware at all of how that happens. We open our eyes, magic happens, and we see things. And that is true for the vast majority of the things that happen in our brains.
So, you make a decision and you don’t think about why you make the decision. You don’t ponder things, you just do it either because it’s routine, because it seems obvious in the moment. And so that is how animals work. That is how humans work most of the time. Then in some cases, humans are able to reason about their decisions or about their beliefs, in which case they will consciously ponder reasons for making decision “A” or “B”, as I was talking about earlier.
So, if you want to decide which candidate to vote for, you may think, well, Candidate A has such and such strengths and such and such weaknesses, and the same for Candidate B. And in that case, once you’ve reached the decision, you can tell people he or she is the better candidate for such and such reason. But even then, it seems as if you, the individual, are making the decision and pondering the reasons, when in fact, if we dig a little bit deeper, that is also largely unconscious, for two reasons. One is that, as I was mentioning earlier, if you have a preference to start with, you will tend to find arguments for that preference while thinking that you’re objectively weighing the decision, but that’s not what’s happening.
And the other reason why it’s more unconscious than it feels is that we don’t know why we think some reasons are good. So you may think, well, I’m going to vote for that candidate because they are very pro-choice, but we can’t really articulate why being pro-choice is something that’s very important to us.
Even if we can articulate it to some degree, we can’t articulate why that’s a good reason. So there’s always some kind of unconscious inference going on in the background, but in some cases we’re aware of more of it than in others.
Troy Swanson: Sometimes reasoning becomes the excuse for the intuition that’s already there, right?
Hugo Mercier: Yeah, that’s typically what happens. In most cases when reasoning affects our decisions or our beliefs is when someone else gives us an argument that we find compelling and that leads us to change our minds. It’s much rarer that it happens really within our own minds completely on our own.
Troy Swanson: So, after reading The Enigma of Reason, you could walk away with a dispirited view of decision-making sometimes. It really does deflate this idea of a purely logical brain. But I was happy with your next book, Not Born Yesterday, because it actually presents a whole level of hope that we aren’t as easily fooled by other people as some of the research might suggest. And so I was hoping we could talk a little bit about that, and maybe a nice place to start would be the idea of plausibility checking.
Hugo Mercier: So I’m just going to backtrack a little bit. People are better at reasoning together than they often think, and if people reason together in the right context, typically things will work well and everybody will end up making better decisions or having better beliefs. And because we are led to reason with others a lot of the time in our professional and personal lives, that’s quite a good thing. So the message on the whole is, I would hope, positive, but it’s certainly not ego-boosting for many people.
And then moving on to the other book, Not Born Yesterday: the main argument I tried to make in that book, which is also based on an idea that other colleagues developed about 10 years ago, is that humans are endowed with cognitive mechanisms that allow them to evaluate communicated information in a way that is quite efficient. And the basic argument behind this is that if that were not the case, we would be in huge trouble. Over evolutionary time, if some of our ancestors had been perfectly gullible, anyone could have told them, you should give me all your stuff, you should go do what I want, and they would not be our ancestors. And so we had to evolve mechanisms to ward off attempts at manipulation and lying and all that.
I think one of the most basic and robust of these mechanisms is plausibility checking, which I alluded to earlier: a mechanism by which we evaluate what people tell us in the light of what we already think or what we already know. Our first reaction typically is to reject things that seem implausible, which makes us very conservative. And sometimes you might tell me something that’s implausible, implausible in light of my beliefs anyway, but that is actually true. That is why, fortunately, we are also equipped with mechanisms that allow us to overcome this initial rejection of information that we deem implausible.
And in particular, we use trust. So if you’re someone who I really trust, and I believe that you’re competent and you have good information or access to something, then I might revise my beliefs. Or we can use argumentation, as I mentioned before: if you give me a good argument, I’m liable to change my mind even if the conclusion was really implausible.
Troy Swanson: And that idea of trust gets at that social nature of knowing and related to that, you note that we’re equipped with mind reading mechanisms that help us understand and anticipate the minds of other people. Can you help us understand what that means?
Hugo Mercier: Yeah, so that’s one of the things that humans are amazingly good at: figuring out what other people are thinking. Obviously the main way we do it is with language. Most of what we know about what other people are thinking comes from them telling us, or from what they ostensibly express through very clear emotional or nonverbal communication. And that is already amazing, and fantastically complex computationally, but oftentimes mind reading refers to the ability to understand what people are thinking just based on their behavior.
And that is something that we can also do. So, for instance, if I see my wife going to the kitchen shortly before lunch time, I can infer that she’s probably thinking of having a snack or making lunch, and that is something that other animals, in particular nonhuman primates, are able to do to a limited extent as well.
So they can infer what other animals know or what other animals think, to some extent, based on their behavior or based on what they can see, for instance. And that is really one of the crucial building blocks of our social lives. That’s what allows us to anticipate what people might do so that we can help them, or stop them from doing something stupid, and without that, we’d be in big trouble.
Troy Swanson: On the flip side of that anticipation, or maybe related to that anticipation is the idea of open vigilance mechanisms, the idea of being vigilant to protect ourselves from dishonesty. So, maybe they’re connected, you can anticipate what people are doing. You can also anticipate when people are trying to fool you a little bit or take advantage of you. Can you unpack these ideas for us a little bit?
Hugo Mercier: Yes. So these vigilance mechanisms are the mechanisms I was referring to earlier that allow us to evaluate what people tell us. This kind of plausibility checking is one of them, and then the others are essentially mechanisms related to the source of the information, so you decide whom to trust and whom to believe, and mechanisms related to the content of the information. To relate that to mind reading: one of the major cues that these mechanisms take into account, in order to decide whether to believe someone or not, is what we think their incentives are.
So, that’s why, when you’re playing poker, the incentive of the other player is just to win and for you to lose. And that’s why you don’t believe them, even if they’re your best friend or your spouse or whomever, you won’t believe them in a game of poker, because you know what their goals are and you know how that is going to impact their behavior and what they might tell you.
And there’s now a lot of evidence that these mechanisms work remarkably well. We have the ability to do all of this extremely well, making fine-grained distinctions between people of different levels of expertise, or people who are more or less trustworthy, and even young children have these abilities. Even infants, to some extent, can turn to someone who is more expert or someone who is more trustworthy.
Troy Swanson: How well do these mechanisms translate over to information sources that aren’t people, like an article or a book? Is there evidence that they transfer over?
Hugo Mercier: So what’s interesting, first of all, is that we will use the same mechanisms when it comes to institutions. For instance, when you read an article in the New York Times, oftentimes you don’t really pay too much attention to which journalist wrote the article; you will just say, well, this is from the New York Times, ergo it’s probably mostly reliable, at least for the factual description of what’s going on.
And so, essentially, we have these mechanisms that evolved presumably mostly to deal with people, and to help us decide, based on our experience with these people, whom we could trust and in what domain. And we use the same mechanisms when it comes to institutions or groups.
And so we can learn through experience that the New York Times is reliable in such and such areas and maybe less reliable in others, that it has biases and whatnot. So we use the same mechanisms, and one of the potential shortcomings of this is that there are institutions that work in ways that jar with our intuitions about how things should work.
If we look at Wikipedia, most people’s intuition is that it’s going to be very unreliable, because anybody can intervene and it’s a free-for-all and all that. And as it turns out, Wikipedia is incredibly reliable. It’s far from perfect, but it’s very, very reliable.
And it’s something that our vigilance mechanisms have a bit of a problem dealing with, because there seems to be such a mismatch between the process and the outcome. And as a result, we tend to underestimate the value of Wikipedia.
Troy Swanson: It seems like awareness of the process behind the scenes really matters, right? Like when we get the New York Times in our hands, most people have a rough understanding of what journalists do. I think we could always learn more, but being handed a scientific journal, with methods you have no idea how to interpret, that becomes the bigger stretch, right?
Hugo Mercier: The methods in themselves can be kind of technical, but there are things that are much easier to grasp: what the incentives of scientists are, what kind of punishment they face if they engage in fraud, the hoops you have to go through in order to get a paper published, and all of that. I mean, the process is very far from perfect, there’s no question about that, but when you see that in some domains you have a consensus on something, you have like 97% agreement. I mean, getting 97% of scientists to agree on anything is quite challenging, and so once they do, you can pretty much put your money in the bank; that’s the best estimate you’re ever going to get.
Troy Swanson: Right, right. I think the big thing that we’re dealing with these days is the interaction between our identities and self-image and how we interact with the messages around us. It seems like, as we’re living in an increasingly polarized world, especially in the U.S., who we are dictates more and more what we believe. And so I really wanted to ask, as we’re getting to the end of our time here, how you see this relationship between identity and decision-making.
Hugo Mercier: Well, at least in terms of identity and information, communication, and what we decide to accept or to share, I think the issue is that when it comes to evaluating information, we still have some measure of objectivity, in that if you see a piece of information that seems like complete garbage, even if it favors your side, you might not believe it, and vice versa. The issue is more in terms of what information we decide to share. We tend to think, well, you know, I’m a Democrat, I’m a Republican, and that’s very important to me, and if I do anything that appears to betray that cause or that group, I might lose some friendships and some relationships. So we will be very careful not to share information that might make us appear to be, traitor is a strong word, but someone who’s not fully committed to the cause. And conversely, that might prompt us to share information that is congenial, in order to display our standing within the group, or things that show that we are a good person morally. So it’s complicated, because things like political parties are extremely useful, and they have to exist for politics to happen, in a way. So it’s not just black and white, and all of that is bad. Ironically, around the mid-20th century, the American Political Science Association called for more polarization, because they thought that too few people felt represented by the parties, and so they thought people should have stronger political identities. And to some extent it is a good thing. You want people to be interested, you want people to be invested, but ideally you want that without all the nastiness that, unfortunately, increasingly comes with it.
Troy Swanson: It’s like polarization could be healthy because it represents a wider range of views, as long as you have that ability to compromise and take action. Unfortunately, we’re hitting a point where we can’t even move forward on anything, because we live in different worlds.
Hugo Mercier: Well, the ideological polarization isn’t that large. Democrats and Republicans agree on many more things than they think, but their views of each other are deteriorating very quickly. And clearly there’s the political problem that politicians are themselves increasingly polarized in their voting decisions and their behavior.
Troy Swanson: I don’t know if there’s any final thoughts that you’d like to share with librarians across the United States and around the world.
Hugo Mercier: Oh, I’m just extremely thankful to you guys. I’ve always loved libraries, and when I was in the U.S. in particular, I had access to a fantastic library and to amazing librarians, and it was pretty awesome. I kind of miss that.
Troy Swanson: Well, thanks. So I hope that all the librarians listening to this then go out and pick up your books, The Enigma of Reason and Not Born Yesterday. Are there other books on the way, or other projects that you want to share with us?
Hugo Mercier: So actually, one project I’m in the early stages of is about the nerds in all of us, essentially: trying to understand why we get so interested in very esoteric, abstract stuff, science and history and all of these things that have no practical consequences for you as an individual, that don’t really affect your life usually. And some of us clearly do care about all this. We invest a lot, and we want to explain things and to argue about things and to know a lot of stuff, and it’s just kind of weird as a behavior. I mean, we’re all like this, we take it for granted, but especially from an evolutionary point of view, it’s a bizarre behavior. And I’m just trying to understand why we’re so fascinated by information and explanations and arguments and all that.
Troy Swanson: Well, you will have a big audience for the listeners of this podcast. I’ll tell you that. So that’s fantastic. I love it. If our listeners wanted to connect with you online where can they find you?
Hugo Mercier: I’m on Twitter @hugoreasoning and I have a webpage that you will find by Googling my name.
Troy Swanson: Okay. Well, thank you so much for your time and thank you so much for your research.
Hugo Mercier: Thank you, Troy, that was fantastic.