9.401 | Fall 2022 | Graduate

Tools for Robust Science

Week 4 Challenge: The Literature Is Biased

The Challenge

A defining feature of scientific knowledge is that it clearly and explicitly describes both an estimate of an effect and our confidence in that estimate. Yet the published literature is full of inflated and overconfident estimates. One reason is that experimenters can choose which data points and analyses to report based on whichever produces the “best result” (so-called experimenter degrees of freedom). The problem is exacerbated by publication biases that make null results harder to publish (in later weeks we will discuss the incentives that lead to this). Today we focus on the challenge of decreasing experimenter degrees of freedom and reducing inflated effect sizes in the literature.
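To make the inflation concrete, here is a minimal simulation sketch (not part of the readings) of one form of experimenter degrees of freedom: measuring several candidate outcomes and reporting only the most favorable one. The sample size, number of outcomes, and “best-of” reporting rule are illustrative assumptions.

```python
# Illustrative sketch (assumed setup, not from the readings): even when the
# true effect is zero, reporting whichever of several outcomes looks best
# inflates the average published effect size.
import numpy as np

rng = np.random.default_rng(0)
n, n_outcomes, n_studies = 30, 5, 2000  # assumed: 30 per group, 5 candidate outcomes

honest, selective = [], []
for _ in range(n_studies):
    # Simulate control and treatment groups with no true difference.
    control = rng.normal(0, 1, size=(n_outcomes, n))
    treatment = rng.normal(0, 1, size=(n_outcomes, n))
    # Cohen's d for each candidate outcome measure.
    d = (treatment.mean(axis=1) - control.mean(axis=1)) / np.sqrt(
        (treatment.var(axis=1, ddof=1) + control.var(axis=1, ddof=1)) / 2
    )
    honest.append(d[0])        # preregistered: report the prespecified outcome
    selective.append(d.max())  # flexible: report the most favorable outcome

print(f"Mean reported d, prespecified outcome: {np.mean(honest):+.3f}")
print(f"Mean reported d, best of {n_outcomes} outcomes: {np.mean(selective):+.3f}")
```

The prespecified-outcome average stays near zero, while the best-of-five average is noticeably positive. Preregistration (the topic of the first reading) addresses exactly this flexibility by fixing the outcome and analysis before the data are seen.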

Readings

  1. Nosek, B. A., Beck, E. D., Campbell, L., et al. (2019). “Preregistration Is Hard, and Worthwhile.” Trends in Cognitive Sciences, 23(10), 815–818.
  2. Watch this talk or read this paper: Breznau, N., Rinke, E., Wuttke, A., et al. (2022). “Observing Many Researchers Using the Same Data and Hypothesis Reveals a Hidden Universe of Uncertainty.” PNAS.
  3. Scheel, A. M., Schijen, M. R., & Lakens, D. (2021). “An Excess of Positive Results: Comparing the Standard Psychology Literature with Registered Reports.” Advances in Methods and Practices in Psychological Science, 4(2), 25152459211007467, or watch this talk.

When experimenters can choose which data points and analyses to report based on whichever produces the “best result” (experimenter degrees of freedom), the literature fills with inflated and overconfident effect sizes. In part 1 of your response paper, describe your experience with and perspective on this challenge. Have you encountered an example of this problem? What makes it particularly hard in your area of science?
