With the pandemic, climate change, and science in general, almost anything can be politicized in today's environment. People need to understand what misinformation is so that they can avoid fake news. Join Dr. Alaina Rajagopal and her guest, Dr. Sander van der Linden, in this discussion on how to fight misinformation and learn what is real and what isn't. Dr. van der Linden is a Professor of Social Psychology at the University of Cambridge. He is also the Director of the Cambridge Social Decision-Making Lab. Learn all about medical misinformation, fake news, bias, how social media helps inflate misinformation, and the role science has in decision-making and information. Discover what Dr. van der Linden is doing to help people spot misinformation. Tune in and learn what you can do to avoid getting duped by fake news and stop the spread of incorrect information.
Listen to the podcast here:
The Science Of Misinformation And How To Prevent It With Dr. Sander van der Linden
We're talking to Dr. Sander van der Linden who's an expert on misinformation. Over the last few years, it's become increasingly apparent how the spread of incorrect information can be damaging on a societal level. We'll talk a little bit about where misinformation comes from, what it is and how you can stop it. Welcome, Dr. Van der Linden.
It’s a pleasure to be on the show.
Dr. Van der Linden is a Professor of Social Psychology at the University of Cambridge and Director of the Cambridge Social Decision-Making Lab. Prior to Cambridge, Dr. Van der Linden was a Postdoctoral Research Associate and Lecturer in the Department of Psychology at Princeton University. Before that, he was a visiting Research Scholar at Yale University. He received his PhD from the London School of Economics and Political Science.
His research centers around the psychology of human judgment, communication, and decision-making. He's also interested in the psychology of fake news, media effects, and belief systems such as conspiracy theories, as well as the emergence of social norms and networks, attitudes and polarization, reasoning about evidence, and the public understanding of risk and uncertainty. Let's start with the basics. What is misinformation? How does misinformation differ from disinformation? Let's get some definitions.
There are a lot of debates about these definitions. I use one that I think is okay; it's pretty decent as far as definitions go. I view misinformation as anything that's incorrect. It's the broadest, overarching concept. It could be a simple error. Disinformation, on the other hand, is misinformation coupled with a psychological intention to harm, deceive, or manipulate other people.
Disinformation is misinformation plus some intent to deceive people. People use the term fake news for both misinformation and disinformation, and sometimes also for propaganda, which to me is disinformation plus a political agenda: you're deceiving people in the service of a political agenda. It can be hard to distinguish cleanly between those three layers because the boundaries aren't perfect.
One of my favorite examples is a headline from a reputable outlet like The Chicago Tribune, which ran with the story that a doctor died after receiving the COVID vaccine. These were two independent events, but the headline framed them as if there were some connection, which was unknown at the time. Is that disinformation? Is that misinformation?
If it's an error and they weren't thinking about it, maybe it's misinformation. Was it an explicit clickbait attempt to get people to click on the story? Maybe it's disinformation. Trying to discern the intent can be quite difficult. Just keeping that distinction in mind is potentially useful because I'm much more concerned about disinformation than I am about misinformation.
Especially when looking at vaccines, a lot of people will draw an association from an event that has happened one time to one person and say, "I got my COVID vaccine, then I got sick afterward. I must have gotten COVID from the vaccine." There's a lot of danger in drawing those associations. That newspaper article highlights that and how common it is.
It's a big issue. Scientific or medical misinformation is particularly tricky, especially when our understanding of the science is still evolving. Some things you can say are categorically true or untrue regardless of what stage in the scientific process we're in. Other things that are still emerging can be a bit trickier. Did the virus originate from a lab?
How does the study of misinformation relate to behavioral science? You've already touched on this a little bit.
There are two sides to it. One is we can try to understand the psychology of it. How does the brain process information more generally, but misinformation in particular? Why do we make errors when it comes to judging the veracity of news media content? What's going on there in terms of the psychological mechanisms of what makes us think that something is true versus false? That's a very interesting line of study.
There are also the consequences of individuals believing, endorsing and sharing misinformation. Both consequences to the individual as well as to society at large. Some of those consequences range from not supporting action on climate change to not vaccinating yourself or your children, which also has larger societal effects. If not enough people are vaccinated, it compromises herd immunity.
Misinformation has led to people ingesting harmful substances. Here in the UK, people have set phone masts on fire because they think that 5G is somehow connected to the spread of COVID-19. Viral rumors on WhatsApp led to mob lynchings in India. There are behavioral consequences to misinformation as well.
Misinformation spreads faster, further, and deeper on social media than factual information.
The pandemic has been an interesting example of how humans interact with one another, balance personal risk, and personal independence with societal needs. Do you have any insight into the choices people are making? You named some examples.
In social psychology, we call it a social dilemma. One of the key examples is the decision to vaccinate. The logic behind the social dilemma is that what seems to be in your personal interest, the seemingly rational thing for you to do, is to not take any risk and to maximize your own personal preference. But if everyone does that, then collectively we're all worse off. That's the case with vaccinations, if people say, "I don't want to get a jab. I don't want to take a small risk. I don't want to expose my child to potential risks."
When we take an aspirin, we also take risks. There's a risk with everything. The key point is that if everyone takes a tiny amount of risk, then we're all going to be protected. That's what's in the public interest. That's the nature of social dilemmas, and you see them everywhere: vaccination, climate change, recycling. Anything we do is a trade-off between, "Am I going to do what's in my interest, or am I going to do something that's also in the interest of the collective, the planet, my family, or my neighbors?" It's interesting to study how people make these trade-offs.
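For readers who want to see the structure of this trade-off spelled out, here is a minimal toy sketch of the vaccination social dilemma described above. Every number in it (the vaccine cost, the outbreak cost, the coverage threshold) and the payoff function itself are hypothetical illustrations, not figures from the interview or from real epidemiological data.

```python
# Toy illustration of the vaccination social dilemma described above.
# Every number here is hypothetical and chosen only to show the structure
# of the trade-off, not taken from the interview or from real data.

VACCINE_COST = 1           # small personal cost/risk of getting the jab
OUTBREAK_COST = 20         # cost everyone pays if coverage falls below the threshold
COVERAGE_THRESHOLD = 0.80  # hypothetical share of vaccinators needed for herd protection

def personal_payoff(i_vaccinate: bool, others_coverage: float) -> float:
    """Payoff to one individual, given the vaccination coverage among everyone else."""
    # Treat one person as roughly 1% of a group of 100.
    coverage = others_coverage + (0.01 if i_vaccinate else 0.0)
    protected = coverage >= COVERAGE_THRESHOLD
    payoff = 0.0
    if i_vaccinate:
        payoff -= VACCINE_COST   # you bear the small personal cost
    if not protected:
        payoff -= OUTBREAK_COST  # everyone bears the large collective cost
    return payoff

# If others are right at the threshold, free-riding looks individually attractive...
print(personal_payoff(False, 0.80))  # 0   (no cost, still protected by others)
print(personal_payoff(True, 0.80))   # -1  (paid the small cost)

# ...but if everyone reasons that way and coverage collapses, all are worse off.
print(personal_payoff(False, 0.50))  # -20 (outbreak cost hits everyone)
```

The point of the sketch is simply that the individually "rational" choice (free-riding) and the collectively best outcome (near-universal coverage) come apart, which is what makes vaccination a social dilemma.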
With vaccination, what we've seen is that people don't always spontaneously think about the pro-social angle, or what we call the benefits to other people. So we explained to people what the benefits of herd immunity are. If you take a tiny risk, you can protect people who are vulnerable and can't get a vaccine, maybe because of a medical condition, or elderly individuals. You can help other people by getting the vaccine and keeping them safe. We found that people find that framing quite persuasive.
Very few people intentionally want to be a bad person or not care about other people. When you frame the decision to vaccinate as something that has benefits to other people, you're being kind because you're vaccinating, maybe not for yourself, but to protect other people. That works, though not for everyone. Explaining the benefits of herd immunity and the pro-social nature of the decision works for a lot of people. That's been an interesting insight for me in terms of the research.
We found this for people who score higher on pro-sociality, a trait that ranges from doing pretty much only what's in your own interest to also caring about other people. People who are quite high towards the "I care about other people" end of the spectrum are much more likely to see vaccination in that context. Helping people shift their thinking around to the societal benefits can be quite useful.
It doesn't always work. When you're in a society that's highly polarized, with very strong expressions of individual freedom and counter-narratives that suggest it's all about exercising your individual rights, it gets more complicated. For the most part, that kind of message has been helpful. I'm based in the UK, and there's been a lot of pro-social framing throughout the pandemic. Why do you have to wear the mask? Why do you have to keep your distance? If you're young and healthy, why do you have to take on that burden? It's because you're helping vulnerable people. Most people are receptive to that.
Speaking of polarization, it does seem like society, at least in the United States and potentially also in the UK, is becoming more and more polarized about topics that haven't typically been that polarizing. Do you have any insight as to why this is happening? Why are we becoming so polarized?
At the beginning of the pandemic, I was wondering whether COVID-19 would become a politicized topic. In a way, you start out by thinking, "This is a virus. It's a pandemic. It's real. Nobody's going to deny that it's real, because it's so obvious. Everyone knows it's a threat. We're all going to get on board and have bipartisan support." A few months in, you start to see the politicization happening. People become more and more polarized. People are denying that it's real. They're denying the extent to which we should do something about it. It becomes another way for people to express their political identity by either endorsing or protesting against COVID-19 measures.
It's interesting that sometimes, even though we study it, you wouldn't expect it to happen, but in hindsight, it makes sense. There are a few factors that can help explain it. One is that the inherent uncertainty of science is used as a vehicle to politicize science. You see that with other issues like climate change, for example, where there have been concerted campaigns to cast doubt on the link between fossil fuels and global warming. The same technique was used by the tobacco industry to cast doubt on the link between smoking and cancer.
It's a strategy that's clever because they're not necessarily spreading fake news but they're saying, "We don't know enough yet. Science is emerging. It's evolving. Science is very uncertain. We should wait and see." That sounds much more plausible to an otherwise reasonable individual who might go and say, “Maybe we don't know everything yet. Maybe it's over-hyped. Maybe we should wait and see.” That is the more dangerous disinformation or what we call manipulation tactics because it's not as obvious. It’s much more subtle and it sounds much more reasonable.
The uncertainty of science is used as a weapon against science, and that's a well-known technique. That's definitely been happening with COVID-19. We've seen it with masks. Especially when the science is rapidly evolving, it's very hard for scientists to talk about uncertainty. Governments get very flustered about whether or not to admit uncertainty. You get the problem that if you don't communicate enough uncertainty, people are going to get upset because you've miscommunicated, or you said one thing and it turns out to be another. That's a real big issue.
The other thing, especially in the United States and also here in the UK, is political elites. They have a real influence on the formation of public opinion. If you looked at social media data, which some people have analyzed, you could see polarization happening in the tweets that political elites were sending out about COVID-19. In fact, Bernie Sanders also said some polarizing things. Early on, you can see the political elites debating it, which then trickles down into public opinion and polarizes people. The elites are sending cues.
The third factor is the spread of misinformation that feeds into existing polarization. Fox News was quite instrumental in disseminating misinformation at least at the beginning of the pandemic. Social media has been a huge source of misinformation about COVID-19 and the pandemic. That also plays into it. At the end of the day, you have a situation where people want to express their social identities in certain ways. The issues become less important.
It becomes more about expressing the group that you belong to. If you're a Conservative, you want to express the values that Conservatives are endorsing and that the elites are signaling. If you're a Liberal, you want to do the same. When you already have that environment in place and you're already polarized on so many issues, it becomes very easy for the next issue to fit into that category: "This is going to be another one of those issues that's a Liberal hoax or an alarmist thing. We're going to counter-campaign against that narrative." Now you get two opposing camps.
It's surprising because with your neighbors in Canada, there was bipartisan support for COVID-19 measures pretty quickly. It has a lot to do with the unique political context of the United States, although here in the UK, there has also been a lot of misinformation and polarization around COVID. It's definitely not unique to the United States; some of the Commonwealth countries like Australia and the UK are fairly similar to the US in terms of the emergence of polarization.
You've mentioned social media several times, the tweets of some political leaders and the role of WhatsApp in some violent crimes in India. Social media has made it so much easier for people to peddle misinformation. Can you talk a little bit more about the role of social media in some of these misinformation campaigns?
It's a difficult question. In a way, everyone assumes that social media is a huge vector for spreading misinformation. It is, but the causal question is not so easy. Are people more polarized because of social media, or were people already polarized and then handed access to social media, which amplified the existing polarization? It's the same with misinformation, although there's more evidence that some misinformation emerges on social media where it didn't exist before. There's been research that quite clearly shows that falsehoods spread at many times the rate of factual stories on social media.
There are a lot of good studies showing that misinformation spreads much faster, further, and deeper on social media than factual information. In fact, that Chicago Tribune story was one of the top stories on Facebook at the beginning of the quarter. Misinformation gets five or six times as much engagement on the platform as factual information. The companies contest these claims by saying that we don't have access to all of the data, which is true. That's part of the problem in trying to make these inferences.
We did a study where we looked at millions and millions of data points on both Twitter and Facebook, from US Congressional accounts as well as popular media accounts. One of the things that was already well-known is that emotional content tends to go viral more. If your language is highly emotional, it tends to rile people up, both positively and negatively.
In addition to that, there's what we call moral-emotional language: things that tap into moral transgressions and get people excited. It'd be a story like, "Old lady assaulted on the street." Even if it's fake, it gets people really worked up because it was an innocent old lady and there was a violent criminal. These types of stories, like "Baby died from a horrific side effect of the new formula," are fabricated to get people upset. That was pretty well-known.
One of the things that we added in the study is that we replicated these patterns in our data, and we coded whether the tweets and Facebook posts came from Liberals or Conservatives. What we found as the top predictor of engagement on these platforms is the amount of group derogation. The more you talk about the other group and the more negative things you say, the more likely you are to get engagement on your posts.
This is on top of using moral or emotional language. The number one predictor was trash talk about the other group. That gets the most likes and shares on these platforms. It illustrates that social media amplifies political tensions. When you have that, it becomes easier for misinformation to spread as well, because people use it as a way to reaffirm their political identity. If you're angry at the Liberals and there's a story that says something negative about Liberals, even if it's fake and you know it's fake, you're more likely to want to share it and forget about the accuracy or the motivations behind it, because you have an overriding social motivation, which is to join the bandwagon and express your identity. That's where social media comes in and harnesses some of these effects.
There's also filtering and things like that. Fact-checking and corrections are all useful, but on social media you have segregated information flows. What happens is that people who are like-minded tend to share content with each other. That creates a very biased flow of information: people who are similar to you are retweeting similar content. If you want to get the facts across, you're not penetrating those echo chambers. You can't get across to the audiences that you need to reach because the flow of information is biased towards like-minded others.
People who are spreading misinformation and engaging with that content are not going to share a factual story. You don't get traction with the facts in that way on these platforms because the groups are polarized and the flow of information is interrupted. It’s heavily biased towards a particular type of content. It makes it very hard to actually debunk and fact-check content because of the way that social media is structured. That partly explains why it goes viral and why there's so much engagement.
That leads to the question of whether we fundamentally need to redesign social media. I should say to the audience that I'm a consultant for Facebook, I work with WhatsApp, and I do a lot of work with Google. I try to help them counter misinformation within their institutional confines. At the same time, we do a lot of work that's critical of social media companies. It also gives me some insight into how these companies work and what the issues are.
What's most interesting to me is that they don't envision these platforms as a wonderful place where people just share accurate and factual content. That's not their mission. They're quite explicit about that. They say, "This is not a platform that's meant to promote accurate content. We want people to have whatever conversations they want to have. We have our policies. You can't violate the terms of service and use policies, with hate speech and things like that. As long as you don't do that, you can talk about anything."
People don't realize that this is a fundamental difference in thinking. I'm thinking we can redesign these platforms to be really useful tools to disseminate all kinds of important, factual, accurate, and constructive conversations. For the people running these platforms, that's not their goal. Even though they want to counter misinformation and they don't want bad things to happen on the platform, their goal is not necessarily to promote accuracy or facts. Their goal is to promote conversations of all kinds.
You can spot a conspiracy theory by how it uses emotions to manipulate people.
It seems like it's easier for people to have angrier conversations through these social media platforms compared to what they would do in real life. If you run into somebody in a grocery store line, you're probably not going to end up saying the same things that I see people say to one another on social media platforms. Do you think that played a role in this information bias, where people just latch on to ideas that they choose to support?
I think so. There are gradations of this. If you looked at the early days of the internet in the '90s and beyond, there were a number of studies that were quite interesting that had some insights about anonymity. People expressing themselves in different ways online because they could assume any identity. They weren't afraid of the consequences of saying something because it's not face-to-face. You could say anything. People wouldn't know where you live, who you are, and so on. That was very prevalent then.
On social media, unless you're a troll or a bot, you're not anonymous. The incentive there is a bit reduced in the sense that people do care about their reputation on social media. They are a bit more careful than if you were a completely anonymous blogger or netizen. It's still the case that it's not face-to-face. People still feel emboldened to say things on social media that they wouldn't say to somebody face-to-face. That still holds true. That leads people to be more emotional and aggressive. That's why you see more of these flame wars and things like that on social media than you would see face-to-face.
Some research shows what happens when people tune out from social media for a week. They found that people get less polarized when they're not online. That's causal evidence, which we didn't really have for a long time on social media. People are going to say, "Even though they've tuned out, they've had social media before." You can never do as clean an experiment with social media as you maybe can in medicine. There are all these confounders that are always going to be present. It's a social system. It's difficult to study.
What we can do is ask people to tune out of social media for a week, and people get less angry and less polarized. At least that says something about what might be going on. That should send a signal to these companies that they might want to rethink some of the ways in which their platforms work. They're notorious for putting up defensive press releases. They say, "Polarization existed before social media."
They say a lot of the evidence is correlational and not causal, that you can't know for sure because you don't have access to the platform, or that they have different data. You get this back and forth. Some of the points are worth considering. We don't have access to all the data. But from what we can see from the outside, with public data, there are lots of reasons to be concerned.
Just thinking about the issue, it makes sense that if you're seeking out somebody who agrees with what you're saying, it's much easier to connect with those people through online platforms rather than at your town’s grocery store or a restaurant or a bar where people may have connected in the past. You may run across more people that don't necessarily agree with you in real life. Whereas online, you can instantly connect with other people who have similar biases as you do. That affirms those biases as correct.
In all fairness, some of these counter-arguments are not completely bogus. They would say, "There are offline echo chambers too, which suggests that we are not the cause of echo chambers," which is a distortion of what's going on, but it's partly true. There are interesting studies that show that even in New York City, in areas close to Central Park, Democrats live closer to other Democrats than to Republicans. They might not even know it, but they segregate. Even in the same buildings, there will be segregation.
You look at an election map of the United States and you can see areas where people predominantly have a political leaning toward one side versus the other.
You can see that on the map very clearly. This is true even when you zoom in at the city level. It's prevalent. They use this as a counter-argument, but I don't think it necessarily works as one. It's true, but social media still makes it easier for people to do the same thing online. You end up with online echo chambers mapping onto geographical echo chambers.
You have a feedback system where it enhances polarization. It speeds things up and makes it worse. You can quibble about how much they make it worse. They like to say, "We don't have any counterfactual. If social media had not existed, would things have been any better?" It's very clear that they're not adding positive value to the conversation about polarization. It's true that we come to the platforms with our own biases, but we don't expect them to be amplified to such an extent that we end up in a space so polarizing and negative that we may not have anticipated or signed up for it. That's the difference.
I wanted to ask you about a study that you and your colleagues did regarding the Oregon Petition and climate change. This was an interesting bit of research.
In our field, people quibble about the effect of misinformation. They say, "How can we know that misinformation is the cause of it?" We did a very controlled experiment several years ago that was quite fundamental in establishing that misinformation has a very negative causal effect on people. What we did is we exposed one group of people to the facts about climate change. That was pretty easy. We just said, "Most scientists agree that climate change is happening and humans are the cause."
In another condition, we showed people the petition that you mentioned, the Oregon Petition. It's a very strange project. It started in the '90s. A bunch of politicians and pseudoscientists started a petition claiming that thousands and thousands of scientists had signed an agreement saying that global warming isn't real. When you examine this petition, there were all kinds of strange things going on. The signatories included the Spice Girls and Charles Darwin. People were fooling around, signing up for this bogus petition. There's no quality control on it.
There are some people on there with degrees, but not in climate science. Their expertise is totally irrelevant to making claims about climate science, but it's been very influential. It formed the basis of the most viral story on Facebook in 2016, saying, "Scientists have declared global warming a hoax." Even though it's from the '90s, it's being recycled all the time. It went viral on social media, and the petition is being kept up to date. Its purpose is to cast doubt on the scientific consensus by suggesting that there are thousands of scientists who say they don't agree with the science.
In this experiment, we first wanted to see what effect that has on people. The second question was whether we could inoculate, vaccinate, or immunize people against this content. What we found was that if you expose people to this petition, they become very confused about the scientific consensus on climate change. They were already a bit confused, but they downgrade their perception of whether or not climate change is a settled issue.
We know that matters because if people think the science isn't settled, they won't want to take action. That's the strategy this petition plays into. In fact, it's called the fake expert technique. You use fake credentials to try to convince people that you have expertise. The petition is modeled off of what the tobacco industry did when they trotted out fake doctors saying that their favorite brand of cigarettes was Camel or Lucky Strike. They would run campaigns. They called it the Whitecoat Project.
As a doctor, you probably know like no one else that they use the white coat to fake expertise and dupe people. From research, we know doctors are the most trusted experts, more so even than college professors. The issue here is what you can do about it. What we found is that the petition did have a very negative effect on people. Instead of fact-checking and debunking, which has been the main strategy used to try to counter misinformation, we wanted to see, "Could we preemptively vaccinate people against this technique?" It follows the vaccination analogy exactly. What we did is we preemptively injected people with a weakened dose of the virus.
We told people, "There are some petitions online. They're going to try to convince you that global warming isn't real. You should know that Charles Darwin signed one. It's totally bogus. The people on there don't actually have any expertise in climate science. Be warned." Later on in the experiment, we let people browse the website and look around. What we found was that if you preemptively inoculate people, it doesn't fully immunize them.
It's not that the misinformation then has no impact on people. It did still make them doubt the science a little bit, but the effect was tiny compared to when we didn't inoculate people. When we didn't inoculate people, they were massively confused. When we did inoculate them, they were still on board with the science, just a little bit less than before.
That was the idea: we can psychologically inoculate people against these techniques. It's not like a real vaccine, like Pfizer's, where you can say you get 95% efficacy; I can't give you an exact percentage for misinformation, and there's probably some range here, but it was still pretty useful. This was true whether you were Conservative or Liberal. It didn't really matter what your affiliation was. We found that to be useful. That's what the Oregon study was about.
By preparing people for the fact that there may be misinformation present, they were more aware of it and able to combat it like a vaccination.
There are a few important psychological mechanisms here. One of the things I've realized over the years is that the main response of debunking and fact-checking is based around the notion that people don't know enough, that they don't have enough facts, and that they don't understand the science. Education is important, and this is true to some extent, but the misperception is that somehow bolstering general knowledge is going to protect people from misinformation. It doesn't, because it's not specific enough. It's not targeted. It doesn't give people the right antibodies they need.
What is useful is simulating the attack for people with a weakened dose and giving them the specific refutations they need beforehand, so that they can retrieve them from memory and rehearse them in advance. When they're challenged on their beliefs, they can retrieve the right information. They've been warned before and had time to think about it. This is what we call resistance to persuasion. People can now resist attempts to persuade them of false information.
You don't get that when you try to retroactively debunk something or fact-check it. I'm not saying that's bad. The problem is that once you're exposed to a falsehood for a few years, like "The vaccines cause autism," it settles into your associative memory networks. It gets linked with other concepts. The longer it sits, the more strongly those links become embedded in our memories. It becomes very difficult to get rid of it.
I can tell you something is wrong and your brain will mark it as incorrect, but that doesn't do much in itself because it's still linked to all sorts of other things. People continue to make inferences based on false information, even when they acknowledge that they've seen the correction. One way of illustrating this is that if I told you, "I went to your favorite restaurant down the street. I got food poisoning. I had a terrible night. Don't go there." A month from now, I'll tell you, "It wasn't your street. I totally got that confused. It was somebody else's street."
Every time you walk past that restaurant, even though you know that was false, you're going to think of food poisoning. It works the same with misinformation. This is the brain's way of keeping track of things. That's why it's so difficult to get rid of it after the fact, even though corrections can be helpful. There are ways to do that. We found that there is so much value in the prevention metaphor, as with vaccines: prevention is better than cure. It does depend on the incubation period. If we follow the viral analogy of the misinformation pathogen, you can pre-bunk or inoculate. We call it pre-bunking because it's easier for people to understand. That works pretty well. There's a timeline.
Sometimes the inoculation is more therapeutic, like a therapeutic vaccine. People have already been exposed, but it hasn't settled into their brains yet. You've heard of it, but you're not infected to the degree that you're sharing the misinformation. There's therapeutic value in boosting your immune response with inoculation. It's better if it's totally prophylactic, in the sense that we get there before the misinformation arrives in the first place. If we come very late to the story, then in the end we're just debunking after the fact. That's the spectrum we deal with. What we found is that it's so much more effective to try to go the vaccination route.
For people to perceive a communicator as trustworthy, the communicator has to admit uncertainty, provided in ways people can understand.
One thing that I've been asked repeatedly is how people know that a source of information is valid or trustworthy. When I'm looking at an information source, I will look at the primary scientific literature. I'll look at the methods, evaluate the methods of the studies, and decide whether it's an article that's reached its conclusions appropriately. That’s a tough set of skills to learn. Usually, it takes at least two to four years of graduate education or in my case, eleven years of graduate education to learn to do properly.
I'd like to be able to tell people that they can trust physicians, but I've also seen a lot of physicians out there peddling misinformation which probably started in the era of Andrew Wakefield. Do you have any advice or good ways that the general public can validate information to determine if it comes from a reliable source and is trustworthy?
On one hand, there are useful tips: check the source, make sure there is context provided, ask whether the source is credible and whether you can identify it, and try to find other lines of evidence that support it. Those tips are useful to follow. What we found in all of our research, though, is that rather than trying to tell people what's true or false, it helps to train people to recognize the techniques of manipulation so they can spot information that may not be true. The reason for that is that a heuristic at the level of the publisher works pretty well, but not always.
Take the Chicago Tribune, for example. If you use the heuristic, "Is the source reliable?" Chicago Tribune has independent fact-checker ratings that are very high but then they publish a headline like "Doctor died after getting COVID vaccine." That's not working because even though it's a reliable source, the headline is misleading. What we tend to do is try to familiarize people with the most common techniques that are used to spread misinformation.
Some of these we've already talked about; that includes polarization. Is the headline polarizing? Then you should probably be suspicious. What we try to do is get people into the mindset of not asking, "Is this true or false?" but calibrating their judgment of how reliable something is. We can chat more about evolving science and how we as scientists are trained to constantly update our opinions about the weight of evidence on something.
It's useful for the public to not see something as strictly true or false. "Can you drink wine during pregnancy? A few years ago, you couldn't. Now, you can. It's one glass, two glasses. This doctor says no. This doctor says yes." It's all very confusing. A better model is to say, "There's evidence out there. I want to know what the weight of the evidence is to inform my judgment of how trustworthy or reliable this is, and I can adjust it as I grow and learn more things about the world." That's a different frame from the one most people are in, which is, "Is this yes or no? Is this true or false?"
We have games that entertain people but also have some educational value. We train people on polarization by going through lots of these headlines. We'll say, "Look at this headline: New studies show massive IQ difference between Liberals and Conservatives." We ask people, "Rather than, is this true or false, how reliable do you think this is?" At the end of the day, what people take away from it is that the headline was meant to polarize people. Regardless of whether there are some grains of truth to the study it's quoting, "That's a polarization technique. I should now be more suspicious of what they're trying to tell me."
We have one module on conspiracy theories: how to spot a conspiracy theory, the use of emotions to manipulate people. We've talked about that as well. Trolling, discrediting or denial, fake experts, impersonation. What we do is try to get people to pay attention to the techniques rather than only the source or the context or the numbers.
How do you do this practically? For example, with social media companies, we explain not just why something is false but what technique is being used to dupe people. We find that people react less negatively to that. Say you're a doctor talking to a vaccine-hesitant person. They say, "This individual with lots of credentials is telling me that the vaccines are dangerous or that they cause autism."
Instead of saying, "No, that's not true," which leaves them with little understanding of why it's not true, and instead of only explaining the science, which you might do anyway and isn't a bad thing, what we would do is say, "These people are trying to dupe you with something known as the fake expert technique. This person doesn't have any relevant credentials. They're being used to fake expertise so that people can be manipulated into believing so and so."
Vaccine-hesitant people and conspiracy-minded people are very averse to being manipulated. They like this idea of being in on the truth, of unveiling attempts at manipulation. You wouldn't necessarily say that they're wrong. You would say, "I'm unveiling the techniques of manipulation. It's up to you what you want to believe." We found that this gets them much more interested in finding out the truth than saying, "You're wrong. Here's the science. Go home and study it."
If you're in a professional setting, it's important to make sure that people have sound information about vaccinations. In addition, you can explain the technique, and that's what we found useful. People don't have to come back for every specific piece of information; they can now recognize the technique when it's used in a different context.
Is there anywhere that our audience can access your training or your games?
If you want to learn about these techniques in, hopefully, not a boring way, all of our interventions are free and publicly available. Our main one is called Bad News, which is a pun. It's about a fifteen-minute simulation of all the bad news that's out there. It's at GetBadNews.com, and it's free. You can play it and share it.
We have one on COVID-19 specifically, called Go Viral, at GoViralGame.com. It was developed with some support from the World Health Organization and others as part of the Stop The Spread campaign. Go Viral is much shorter, about 5 to 7 minutes. It goes through techniques that are prevalent when it comes to COVID misinformation: conspiracy theories, fake expertise, naturalistic fallacies, and the use of emotions to manipulate people.
You get lots of examples. You can rate things. People respond to you when you tweet things. It's interactive. We get you in the mode of tweeting out stuff that's false and then have people respond to you, all in a controlled environment, so you can learn what happens. You see how the sausage is made, in the hopes of maybe steering people towards vegetarianism once they know that the sausage is made out of lots of not-so-tasty things. That's the goal. Once you know how it's produced, you're not going to be duped by it again. That's the goal of these interventions.
Let’s talk a little bit about evidence-based medicine. We touched on this a little bit earlier. I often hear patients talk about how it doesn't seem like anybody knows what's going on regarding the pandemic because the recommendations change so frequently. I always try to explain this in the sense that the scientific community gets more information and collects data. We then update recommendations based on new data and the understanding that we get from that. Can you define for the audience and discuss how evidence-based medicine factors into many of the decisions that doctors, epidemiologists and agencies like the NIH make?
One of the most important things for medical and public health professionals is to use evidence-based communication when talking to patients. That can be useful in not only increasing trust but also generally helping people get a better grasp of science and how the process works. We can talk about vaccine hesitancy specifically because we do a lot of work with doctors on how to talk to people who are vaccine-hesitant.
More generally, we've developed some principles. We published a short piece in Nature on five rules of evidence communication, based on insights from the behavioral sciences on how to communicate science and evidence. Pre-bunking is one of them. You pre-bunk myths for people. That is a very proactive approach that a lot of doctors and journalists are not necessarily used to.
The whole point of the pre-bunk is that you do it preemptively. When somebody comes into the practice, you might want to inoculate them against myths about vaccinations even when they don't bring vaccinations up. That's maybe a slightly odd way for the conversation to go or to steer, because you're usually going off what the patient comes in for, and doctors are busy and have limited time. But when you have a moment, pre-bunking influential myths is the evidence-based approach, because otherwise what you're doing is debunking.
They come in with a concern. They've already been exposed. Now they're concerned and they want to talk to you about the issues. It's not too late, but we're further into the "infection stage" than we want to be. The evidence-based thing would be to try to preempt that by giving people the tools they need to withstand these kinds of misinformation attempts when they encounter them.
It's like when weather forecasters start talking about climate change and making the link for people between extreme weather events and climate change. It's tricky because not every single weather event is related to climate change, and the science is complex, but they're professionals, so they can do it in a way that's scientifically justified. Doctors can do the same thing. I'm not saying they should randomly bring up vaccines or hygiene measures when it comes to COVID-19, but these things can be done.
The other important thing is to talk about uncertainty. A lot of professionals are averse to talking about uncertainty. The baseline belief is that people don't understand uncertainty and will get more hesitant if you give it to them. The most difficult type of uncertainty for people to deal with is medical uncertainty. Let's say you have cancer or a serious disease and the prognosis is unsure or uncertain. People struggle with that, which is understandable. They want to reduce uncertainty.
Most people are uncertainty minimizers. We want to reduce the uncertainty. That's the whole point of communication. It's to reduce uncertainty about what other people are thinking and doing in a way.
Doctors are often put in a position where people demand certainty, just as they demand certainty from scientists in terms of what the answer is.
What we found in our studies is that when you can provide quantitative, precise estimates of likelihoods and probabilities, people are fine with the uncertainty that way. When it's verbal and vague, people don't like it. It doesn't help them. If you said to someone, "You can come out of this alive or not. I'm not sure," that's not the kind of uncertainty people appreciate or know how to deal with.
That might not be the best way to communicate that type of uncertainty. Instead, you can say to people, "Out of 100 people who undergo this surgery, 90 of them don't have any complications. Two of them have major complications." If there's evidence available on the incidence rate and you can say, "We're not totally sure, but out of 100 people, 90 of them do pretty well with this type of intervention," you allow people to make an informed decision about whether that's something they want to do.
The framing here is interesting. As a doctor, you could say ten people don't do so well, or you could say that on average, 90 people come out fine, or you could say that on average, ten people come out with some pretty bad complications. What people take away from that depends on why you chose a particular frame, even though it's the same statistical information. If you say, "Ten don't come out so fine," people may see it as a cue discouraging them from the surgery: "They're focusing on the negative. Maybe I shouldn't do this." Whereas statistically, it's the exact same information.
You motivate people towards a decision by respecting their values and operating within their frame of reference.
We need to be aware of how framing techniques influence people and the way they think about a decision. This is where I'm getting to my point. One way to handle it is to provide balanced information. When you use one frame, also use the other frame. That's a more holistic way of explaining it to people: 90 people come out fine, and 10 might have some complications. Don't use only one frame or the other, because that influences people in a particular direction.
That gives people a balanced overview of both the benefits and harms, which most doctors are aware of, but it's not always communicated. With COVID-19, there's been a huge discussion about the harms and benefits of social distancing and mask-wearing, and about whether only the benefits are being communicated. There are harms as well, whether justified or not. Some of these harms are restrictions on freedom that people are factoring in: "I find it inconvenient to wear a mask." We have to talk about both the benefits and the risks.
The quality of evidence is an interesting one. Most communication, even in health care settings, doesn't talk about the quality of the underlying evidence. We had a baby a few months ago, and we were talking to our obstetrician about some complicated studies. My wife had some liver values that were slightly off, and we were trying to figure out the risk factors for when exactly to have the baby. It turns out a lot of these studies were of low quality. They were done with ten people. They weren't double-blind. There was no placebo group. Even the doctor was telling us, "I don't know what we can conclude from these studies."
Usually, they have some intuition. They say, "This is the rule. If the values are above this threshold, then you have an intervention. Otherwise, we're just going to wait it out." We asked, "What is that based on?" You look at the underlying studies, and the rule is from the 1990s, when the quality of the studies was pretty poor in causal terms. There's a lack of studies, so we're none the wiser.
That was a reminder that often we don't know much about the quality of the underlying studies. Is it the highest standard of evidence, or is this just an association? Eating burnt toast might increase your risk of cancer; there's some association, but how does that work? What's the quality of the underlying evidence? It's interesting that this is often not communicated. With COVID, the issue is that the quality of evidence was low because we didn't have a lot of studies or randomized trials yet, and that wasn't being communicated.
Wrapping this all up, what we found is that for people to perceive a communicator as trustworthy, the communicator has to admit uncertainty and provide it in ways people can understand. We don't have to unnecessarily concern people; when we express uncertainty in useful ways, people actually don't mind, and they handle it fine in their decision-making. We want to pre-bunk myths for people. We want to be clear about the quality of the evidence. We want to be balanced about the way we frame things. Those are some key principles that we've derived for evidence-based communication.
The key thing is not necessarily to get people to trust you. We've noticed with a lot of parties we work with that people think these are tricks you can use to increase trust. Our point is that if you use these things, you demonstrate trustworthiness as a communicator, which is different from using them to gain people's trust. What we want, in the end, is to demonstrate that we're trustworthy by giving people everything they need to make informed and evidence-based decisions.
I like how you have laid out all of that information. Intuitively, it makes sense. Intuitively, you always want to present a balanced approach. You want to use statistics. You want to use evidence. A lot of times, I don't think we've necessarily thought through what those specific steps or criteria are for trustworthiness. I appreciate your approach to that. Those are great tips for physicians. Are there any tips that you have for the general public or anyone who's not a scientist or clinician on how they can communicate better and correct misinformation?
If you want to correct misinformation, one of the biggest services you can do is to try to protect your friends and family from misinformation. Inoculate them when you can. When you see it in your WhatsApp group, proactively go out and tell your family that there's some nefarious information that they shouldn't buy into.
That's actually quite useful. Even though it takes some effort on your part, there's a huge benefit to other people. It prevents it from spreading further. We need to break the virality of this content, and that only happens when people stop sharing it. The best way to get people to stop sharing it is to make sure they're not receptive to it in the first place.
The other is debunking. If you're talking to somebody who's already bought into a narrative, one of the interesting things we've found is that debunking doesn't work very well when you don't have an alternative for people. What happens in our memories is that if you say something is wrong, there's a gap. If that gap isn't filled, people are going to continue to retrieve what they thought was true. You need to provide people with a plausible alternative explanation. If the virus wasn't leaked from a lab, then what's going on?
Telling people that something isn't true isn't going to effectively debunk the myth on its own. It's going to just sit there, and people are going to continue to think that it's true. You need an alternative. When there's no alternative, there's little you can do. For example, we'd say, "The World Health Organization has concluded that this is the most plausible explanation at the moment for how and where the virus originated." That gives people an alternative explanation that they can consider, put in their memories, and use to update their beliefs. Give people an alternative.
The third is more about talking to people who are conspiracy theorists or vaccine-hesitant, the people far down the rabbit hole. Think of it in steps: pre-bunk when you can, when nobody has been exposed yet. You can debunk when you have conversations with regular people. Then there's talking with the people who have really bought in. We all know maybe one or two individuals who have gone off the deep end here.
The best way that we've found in our research to do that is to use a gateway. Some people call this pre-suasion. Pre-suasion is the idea that people need to be ready to be persuaded by you. If they're not in the right mindset, they're not going to listen to anything you have to say. To get that pre-suasion going, you need some gateway. It's quite interesting: because of the degree of polarization, people become less and less willing to understand each other and step into the other person's shoes. This is why it's often not implemented.
What we ideally do is find some common ground, some way in with somebody, and then pivot from there. With conspiracy theorists, we use a technique called worldview validation. For conspiracy theorists, it's important that their worldview is being validated. A lot of people don't want to do that because they think it's ridiculous. But if you seriously want to have a conversation with somebody, you have to operate within their frame of reference first and not just sit in some separate silo, because that isn't going to work.
What you might say is, "Some conspiracies were real. Julius Caesar was stabbed in the middle of the theater. They plotted it. It was a big conspiracy. I totally get that some conspiracies have happened in the past. However, COVID-19 is very real. That's not a conspiracy." They feel that you validated part of who they are and part of their beliefs. Now they can have a more reasonable conversation with you about whether or not COVID-19 is a conspiracy theory. When it comes to vaccine hesitancy, motivational interviewing is an evidence-based technique that a lot of healthcare practitioners use. I like it because it's very non-confrontational. It respects people's values. It's the opposite of saying, "You're an idiot. You need to get vaccinated."
Conversations that start that way never go well.
"You don't know anything. You don't understand the science." Instead, start by saying, "I'm interested to hear your viewpoint. Why are you concerned about vaccines?" Sometimes it turns out people have real concerns; they just don't understand how something works. Once they hear a different perspective, they might change their mind. Sometimes people have been duped by misinformation. From there, you can determine how to pivot.
For example, somebody might have a legitimate concern, legitimate in the sense that they just don't know how it works. They say, "I don't know about the COVID vaccine. I'm religious, and I hear that there are animal proteins in it." You say, "That's very important to consider. That's obviously very important to you because it's part of your religion." Then you ask for permission to say something: "Would you mind if I express my views? My doctor assures me that there's no animal product in vaccines. In fact, there are official resources you can go to that show you that there are no animal products in vaccines."
You don't push people. The goal of motivational interviewing is to move people toward an action. You would say, "Would you be open to having a look at this pamphlet? Would you be open to visiting this website?" If you're talking about getting the vaccine, you might say, "Would you be open to making an appointment with your doctor to discuss it? It's your decision. We're not forcing anything on you, but maybe you want to listen to what the doctor has to say about it and then make an informed decision."
You motivate people towards a decision by respecting their values and operating within their frame of reference, and by not coming on too strong with debunking. Even when you debunk something, instead of "It's wrong," you would say, "Have you thought about the people who float these kinds of ideas? Do you think they could make money off of you sharing it? Could they profit off of you? Maybe this is a technique that's being used." Unveil the techniques. Let people figure out on their own whether or not they're being duped. It's a little bit like the Socratic method.
I found that those approaches work better. They're more indirect. They're slower and they take more time. You might have to do it repeatedly. Unfortunately, in this polarized environment, everyone's running out of patience. Very few people are implementing this approach. Long-term, it's the better path towards getting everyone on board.
That's all fascinating. I feel like you've shared so many incredible and very valuable techniques to help curb the spread of misinformation, and how to have more productive discussions rather than these emotional online shouting matches. Thank you so much. I feel like I have kept you over time though. Do you have any final thoughts or advice that you'd like to share?
We've pretty much covered it.
I appreciate your time.
It’s my pleasure. Hopefully, the audience finds some useful nuggets in there. That was great.
Thank you, Dr. Van der Linden. That's it for this episode. Thank you for reading. If you like what you read, please like, subscribe or connect with us on Instagram, @TheEmergencyDocs or on our website at www.TheEmergencyDocs.com. This episode was supported by the National Geographic Society’s Emergency Fund for Journalists. Until next time.
About Professor Sander van der Linden
Sander van der Linden is Professor of Social Psychology in Society in the Department of Psychology at the University of Cambridge and Director of the Cambridge Social Decision-Making Lab. Before coming to Cambridge, he held posts at Princeton and Yale University. His research interests center around the psychology of human judgment, communication, and decision-making. In particular, he is interested in the influence and persuasion process and how people gain resistance to persuasion (by misinformation) through psychological inoculation. He is also interested in the psychology of fake news, media effects, and belief systems (e.g., conspiracy theories), as well as the emergence of social norms and networks, attitudes and polarization, reasoning about evidence, and the public understanding of risk and uncertainty. In all of this work, he looks at how these factors shape human cooperation and conflict in real-world collective action problems such as climate change and sustainability, public health, and the spread of misinformation. His research spans from social psychology to cognitive science using a variety of techniques, from virtual reality to survey and lab studies to computational social science and large-scale (online) interventions.
His research is regularly featured in the popular media, including outlets such as the New York Times, the BBC, CNN, The Economist, NPR, the Washington Post and Time Magazine as well as TV, including BBC Newsnight, the Today Show, CBS Inside Edition, and NBC Nightly News.
His research has been profiled by PNAS and the BPS, as well as in Rolling Stone, Discover Magazine, PBS Newshour, and ScienceNews. He is currently writing two books: THE TRUTH VACCINE (WW Norton/4th Estate/HarperCollins) and The Psychology of Misinformation (with Jon Roozenbeek, Cambridge University Press). A TED-Ed video centers around his research on how to spot disinformation.
Dr van der Linden has given many keynote lectures and gives talks and consults regularly about his research for the public, industry, and government, including venues such as The Hay Festival, Behavioural Insights Team, the BBC, Microsoft, Bank of England, Edelman, Festival of Ideas, United Nations, WhatsApp/Facebook, TEDx, Google, UK Cabinet/Foreign Office, EU Commission, and the US State/Defense Department. You can read more about his partnerships with Facebook, Google Jigsaw, and the 'Infodemic' Coalition IRIS.