Social Psych’s Replication Problem Just Got Thornier

Much ado lately about the challenges of replicating key findings in social psychology. Almost everyone agrees it’s a good idea to test these key findings to see how they hold up; foundation stones should be solid.

Cambridge University social psychologist Simone Schnall, for instance, thought it was a good idea, and cooperated when a high-profile replication project set out to replicate the findings in one of her papers. Then things got interesting. She wrote up her experience in detail at her department's website.

Recently I was invited to be part of a “registered replication” project of my work. It was an interesting experience, which was described in part in an article in Science. Although the replication paper was published on 19 May 2014, my commentary describing my concerns is still unavailable, but it can be downloaded here.

Some people have asked me for further details. Here are my answers to specific questions.

Question 1: “Are you against replications?”

I am a firm believer in replication efforts and was flattered that my paper (Schnall, Benton & Harvey, 2008) was considered important enough to be included in the special issue on “Replications of Important Findings in Social Psychology.” I therefore gladly cooperated with the registered replication project on every possible level: First, following their request I promptly shared all my experimental materials with David Johnson, Felix Cheung and Brent Donnellan and gave detailed instructions on the experimental protocol. Second, I reviewed the replication proposal when requested to do so by special issue editor Daniel Lakens. Third, when the replication authors requested my SPSS files, I sent them the following day. Fourth, when they told me about their failure to replicate my findings, I offered to analyze their data within two weeks’ time; this offer was declined because the manuscript had already been submitted. Fifth, when I discovered the ceiling effect in the replication data, I shared this concern with the special issue editors and offered to help the replication authors correct the paper before it went into print. This offer was rejected, as was my request for a published commentary describing the ceiling effect.

I was told that the ceiling effect does not change the conclusion of the paper, namely that it was a failure to replicate my original findings. The special issue editors Lakens and Nosek suggested that if I had concerns about the replication, I should write a blog post; there was no need to inform the journal’s readers about my additional analyses. Fortunately, Editor-in-Chief Unkelbach overruled this decision and granted published commentaries to all original authors whose work was included for replication in the special issue.

Of course, replications are much needed, and as a field we need to make sure that our findings are reliable. But we need to keep in mind that there are human beings involved, which is what Danny Kahneman’s commentary emphasizes. Authors of the original work should be allowed to participate in the process of having their work replicated. For the Replication Special Issue this did not happen: authors were asked to review the replication proposal (this was called “pre-data peer review”), but were not allowed to review the full manuscripts with findings and conclusions. Further, there was no plan for published commentaries; these were implemented only after I appealed to the Editor-in-Chief.

Various errors in several of the replications (e.g., in the “Many Labs” paper) became apparent only once the original authors were allowed to give feedback. Errors were uncovered even for successfully replicated findings. But because the “Many Labs” findings were heavily publicized several months before the paper went into print, the reputational damage to some of the people behind the original findings began well before they had any chance to review the results. “Many Labs” covered studies on 15 different topics, but there was no independent peer review of the findings by experts in those topics.

For all the papers in the special issue, the replication authors were allowed the “last word” in the form of a rejoinder to the commentaries; these rejoinders were also not peer-reviewed. Some errors identified by the original authors were never appropriately addressed, so they remain part of the published record.

Get the rest at Schnall’s post at the Cambridge University Department of Psychology blog.
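A quick aside on the statistical crux of the dispute: a ceiling effect occurs when most responses pile up at the top of a scale, leaving a manipulation almost no room to move the scores. A null result can then reflect scale compression rather than a missing effect. Here’s a minimal sketch of how one might screen for it, in Python; the 0–9 severity scale, the data, and the 50-percent rule of thumb are all illustrative assumptions of mine, not Schnall’s or the replicators’ actual numbers or method:

```python
import numpy as np

def ceiling_check(ratings, scale_max, near=1, threshold=0.5):
    """Crude ceiling-effect screen: what share of responses sit at,
    or within `near` points of, the top of the scale?"""
    r = np.asarray(ratings, dtype=float)
    prop_at_max = float(np.mean(r == scale_max))
    prop_near_max = float(np.mean(r >= scale_max - near))
    return {
        "prop_at_max": prop_at_max,
        "prop_near_max": prop_near_max,
        # Rule-of-thumb flag, not a formal test
        "possible_ceiling": prop_near_max >= threshold,
    }

# Hypothetical moral-severity ratings on a 0-9 scale, bunched at the top
sample = [9, 9, 8, 9, 7, 9, 9, 8, 9, 9, 6, 9]
print(ceiling_check(sample, scale_max=9))
# With most responses at or near 9, there is little headroom left
# for any between-condition difference to show up.
```

The point of a screen like this is simply to ask whether the dependent measure had room to vary before interpreting a failure to find a difference.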

Hat tip to the great Vaughan Bell channeling the magnificent Sarcastic_F. Founts of great leads, the both of them.
