Jumping to Superstitious Conclusions

Stuart Vyse

By this past fall, vaccines had been widely available for months, yet a sizable group of Americans were still expressing doubts and not getting their shots. Just in time for Halloween, a number of “I Did My Own Research” memes surfaced on the internet. In Vancouver, one homeowner drew media attention for hanging a skeleton on the porch with a sign around its neck, “I DID MY OWN RESEARCH.” Of course, the only research data anyone should have needed was the percentage of COVID-19 deaths that were among unvaccinated people (see Figure 1), but the vaccine rejecters seemed to be focused on other things.

Figure 1. Deaths from COVID-19 in Switzerland per 100,000 population by the patient’s vaccine status. (Our World in Data)

Most of us understand that, when it comes to medical questions, it is best to leave the research to people who’ve spent several years in graduate training on the topic, but as Thomas Nichols suggested in his 2017 book The Death of Expertise: The Campaign Against Established Knowledge and Why It Matters, we have entered an era when everyone and no one is an expert. At least when it comes to vaccines, many people who’ve never opened a medical textbook think they know more than their doctors.

Doing Your Own Research—But How Much?

As it turns out, there is substantial evidence that people who endorse weird beliefs are not very good at research. For example, a recent publication in the Nature journal Scientific Reports showed that, when asked to test a simple theory, pseudoscience believers required less evidence than nonbelievers before reaching a conclusion (Rodriguez-Ferreiro and Barberia 2021). In the first of two experiments, fifty-nine online participants were asked to determine which of two jars of beads they were drawing from—one with sixty red beads and forty blue or a second with forty red beads and sixty blue (see Figure 2). Participants were told that the contents of one of the two jars had been dumped into an imaginary box out of sight, and their job was to draw beads one at a time from the box until they were ready to guess which of the two jars the beads had come from. Each time they drew a bead, they were given the choice of either answering the jar question and ending the task or continuing on to draw another bead. After giving an answer and stopping, each participant filled out two questionnaires: a pseudoscientific beliefs scale and a sheep-goat scale that measured paranormal beliefs. 

Figure 2. The two jars in Study 1 by Rodriguez-Ferreiro and Barberia (2021). Online (Spanish-speaking) participants were told that the contents of one of these jars had been placed in a box out of sight, and they were asked to draw beads out of the box until they were ready to guess which jar the beads had come from (Creative Commons license).

As they drew virtual beads out of the virtual box, all the participants encountered the same fixed, randomly ordered sequence of red and blue beads. The sequence allowed up to fifty draws and included a total of thirty blue and twenty red beads. Although this procedure involved at most only half the number of beads in the jars—and as a result, either jar remained a possible correct answer at the end of the sequence—only one participant finished the sequence and drew all fifty beads. The results are presented in the scatterplot in Figure 3. The relationships were far from perfect, but in general, participants with higher belief in pseudoscience and the paranormal quit the task sooner than their more skeptical colleagues, and the correlations were statistically significant.1

Figure 3. Scatterplot of the results of Study 1 by Rodriguez-Ferreiro and Barberia (2021). Each participant is plotted twice on the graph, with an “x” for their score on the Pseudoscience Endorsement Scale (PES) and a dot for their score on the Sheep-Goat Scale. In general, participants with stronger pseudoscience and paranormal beliefs stopped drawing beads sooner than those who ranked lower on those beliefs (Creative Commons license).
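Each additional bead a participant drew sharpened the evidence about which jar was in the box, and that is exactly what the early quitters gave up. Here is a minimal sketch in Python (my own illustration, not code from the study) that applies Bayes’ rule to the two jars described above; for simplicity it treats the draws as if they were made with replacement.

```python
# A minimal Bayesian sketch (not from the study): how confidence in the
# "mostly red" jar changes with each bead drawn. Jar A holds 60 red / 40 blue
# beads; Jar B holds 40 red / 60 blue. Draws are treated as with replacement.

def posterior_jar_a(draws: str) -> float:
    """Return P(Jar A | draws), where draws is a string such as 'RRBR'."""
    p_a, p_b = 0.5, 0.5                        # equal prior belief in either jar
    for bead in draws:
        like_a = 0.6 if bead == "R" else 0.4   # P(this bead | Jar A)
        like_b = 0.4 if bead == "R" else 0.6   # P(this bead | Jar B)
        p_a, p_b = p_a * like_a, p_b * like_b
        total = p_a + p_b
        p_a, p_b = p_a / total, p_b / total    # renormalize after each draw
    return p_a

print(round(posterior_jar_a("R"), 2))      # 0.6  after a single red bead
print(round(posterior_jar_a("RRBRR"), 2))  # 0.77 after five beads (4 red, 1 blue)
```

A guess after one bead is barely better than a coin flip, while a guess after a longer run of draws can be made with much more confidence; stopping early means settling for weaker evidence.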

In a second experiment, psychology students played a mouse trap computer game (see Figure 4). Students could move the mouse through the maze with the arrow keys, and when they visited the trap box, sometimes the computer reported “The mouse got the cheese!” and on other trials the computer said, “The mouse got trapped!” The students were told that their job was to figure out the rule that determined how the mouse could avoid being trapped and get the cheese. Importantly, the participants were told that they could play the game for as many trials as they wished—up to one hundred trials—before stating the rule. 

Only seven of the sixty-two students in the study discovered the actual rule: if the mouse entered the trap square at least four seconds after the trial started, it got the cheese; entering that square any sooner got the mouse trapped. So, in general, people had trouble figuring out the game, but one of the strongest findings of the study was that the number of trials attempted was negatively correlated with paranormal and pseudoscientific beliefs: the more people believed in weird and unsupported ideas, the sooner they stopped playing. 
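For concreteness, the winning rule can be written in a few lines. The sketch below is my own illustration of the rule as described above; the function name and the exact handling of the four-second threshold are assumptions, not the study’s code.

```python
# Toy sketch of the time-based rule described above (not the study's code):
# entering the trap square at least four seconds into the trial earns the
# cheese; entering any sooner springs the trap.

def trial_outcome(seconds_until_entry: float) -> str:
    if seconds_until_entry >= 4.0:
        return "The mouse got the cheese!"
    return "The mouse got trapped!"

print(trial_outcome(2.5))  # trapped: the mouse reached the square too quickly
print(trial_outcome(5.1))  # cheese: the mouse waited long enough
```

The rule depends only on timing, not on the path taken through the maze, which may be part of why so few students discovered it.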

Figure 4. Mouse trap game used by Rodriguez-Ferreiro and Barberia (2021). Participants could move the mouse through the maze using the arrow keys. (Creative Commons license)

It is important to remember that these findings are correlational. We can’t say what causes either the shorter game play or the belief in unsupported ideas, but if you think of continuing to draw beads or playing more trials of the mouse trap game as “doing your own research,” the findings suggest that believers in pseudoscience and the paranormal are less motivated to find the truth. They do their own research but not very much of it. Some researchers have referred to this as a jump-to-conclusions bias, a “tendency to draw an inference on the basis of very limited information” (Irwin et al. 2014, 70).

Jumping to Conclusions and Causal Illusions

This jump-to-conclusions bias is also related to seeing false cause-and-effect relationships. In another recent study, experimenters asked participants to complete the same jars-and-beads task used by Rodriguez-Ferreiro and Barberia (2021) and also a test of whether a fictional medicine was effective in treating a fictional disease (Moreno-Fernández et al. 2021). In the medicine task, participants looked at a series of “patient files” in which the patient either took the new medicine or did not and either got better or did not (see Figure 5). The participants could look at as many as forty-five patient files, but they were also given the option to quit and express their judgement about whether the medicine worked. Unknown to the participants, the experimenters had arranged a fixed series of files in which the patient often took the medicine and got better but that actually showed no correlation between taking the medicine and patient outcomes. As shown in Figure 6, for each group of nine trials, the chances of getting well were the same whether the patient took the medicine (four out of six, or 67 percent) or didn’t take the medicine (two out of three, or 67 percent). 

Figure 5. An example of a trial sequence in the contingency judgement test. In this case, the “patient file” shows that the patient did not take the medicine but got better anyway. After each trial (up to forty-five in all), participants were given a choice between answering the question about whether the medicine worked and continuing on to look at more patient files (Moreno-Fernández et al. 2021, Study 2). (Creative Commons license 4.0)

Figure 6. The distribution of results for the fictional patient files in Moreno-Fernández et al. (2021, Study 2). Every sequence of nine trials produced this overall pattern of results, which reflects no causal relationship between taking the medicine and getting better: the probability of getting better is the same whether the patient took the medicine or not.
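To see that this pattern carries no causal signal, one can compute the ΔP contingency index, a standard measure in this literature, from the nine-trial breakdown in Figure 6. The sketch below is my own illustration, not code from the paper.

```python
# Minimal sketch (not from the paper): the Delta-P contingency index for the
# nine-trial pattern above. Six patients took the medicine (four recovered);
# three did not take it (two recovered).

recovered_with_medicine, total_with_medicine = 4, 6
recovered_without_medicine, total_without_medicine = 2, 3

p_recover_with = recovered_with_medicine / total_with_medicine           # about 0.67
p_recover_without = recovered_without_medicine / total_without_medicine  # about 0.67

delta_p = p_recover_with - p_recover_without
print(f"Delta-P = {delta_p:.2f}")  # 0.00: the medicine makes no difference
```

A ΔP of zero means that taking the medicine did not change the odds of recovery at all, so any impression that the medicine worked is a causal illusion.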

Interestingly, although these were two separate tests, the participants who bailed out early and jumped to a conclusion on the bead-drawing problem reported a stronger effect of the medicine in the second task. Of course, there was no effect of the fictional medicine on the fictional patients, so any detected relationship between taking the medicine and getting better was an illusion. Again, the results were not overpowering, but they were statistically significant. Based on this research, people who jump to conclusions are more likely to see causal relationships that don’t exist. If these results generalize to situations outside the laboratory, they suggest that people who are not very thorough with their “research” are likely to see things that aren’t there, be that a positive effect of pepper in your soup on COVID-19 (World Health Organization n.d.) or of a lucky sweatshirt on your score at Minecraft.

Doing Your Own Research

So, people who jump to conclusions are more likely to believe in weird things and to see cause-and-effect relationships that aren’t there. Of course, we, too, must be cautious about what causes these relationships because we can’t really know. It seems unlikely that being a paranormal believer makes you someone who is easily convinced; the opposite relationship seems more plausible. But it is also possible that some third thing not measured by the researchers causes people to have both a jump-to-conclusions bias and superstitious or pseudoscientific beliefs. No matter what the true causal relationships may be, the pattern of results in these studies does not bode well for all the people out there “doing their own research” on important medical questions—or for those who come into contact with them. For this and many other reasons, unless you are trained in the science, it’s best to leave the research to those who are. 

Note:

1. To check whether the extreme score of the one person who went through all fifty trials had thrown off the results, the authors removed that person’s scores and recalculated the correlations. The results were similar and still statistically significant.

References

Irwin, H.J., K. Drinkwater, and N. Dagnall. 2014. Are believers in the paranormal inclined to jump to conclusions? Australian Journal of Parapsychology 14: 69–82.

Moreno-Fernández, María Manuela, Fernando Blanco, and Helena Matute. 2021. The tendency to stop collecting information is linked to illusions of causality. Scientific Reports 11(1): 1–15. Available online at https://doi.org/10.1038/s41598-021-82075-w.

Nichols, T. 2017. The Death of Expertise: The Campaign Against Established Knowledge and Why It Matters. New York: Oxford University Press.

Rodríguez-Ferreiro, Javier, and Itxaso Barberia. 2021. Believers in pseudoscience present lower evidential criteria. Scientific Reports 11(1): 1–7. Available online at https://doi.org/10.1038/s41598-021-03816-5.

World Health Organization. N.d. COVID-19 mythbusters. Available online at https://www.who.int/emergencies/diseases/novel-coronavirus-2019/advice-for-public/myth-busters. 

Stuart Vyse

Stuart Vyse is a psychologist and author of Believing in Magic: The Psychology of Superstition, which won the William James Book Award of the American Psychological Association. He is also author of Going Broke: Why Americans Can’t Hold on to Their Money. As an expert on irrational behavior, he is frequently quoted in the press and has made appearances on CNN International, the PBS NewsHour, and NPR’s Science Friday. He can be found on Twitter at @stuartvyse.