Sunday, January 15, 2012

Illustrating the importance of multiple comparisons corrections ...

... with a dead fish.

For readers not up on the latest in applied statistics, the multiple comparisons problem arises when a researcher performs a large number of statistical tests, say 100, using some conventional p-value cutoff for "statistical significance." With 100 tests and the traditional cutoff of 0.05, we would expect five statistically significant findings out of 100 tests even in a world in which the null hypothesis of no effect is true in all 100 cases. These five findings would then be reported in the New York Times and all heck would break loose. Multiple comparisons corrections adjust the statistical procedure to reduce the number of false positive findings and thereby make the New York Times less interesting but more accurate.
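A quick simulation makes the point concrete. Below is a minimal sketch in Python (using numpy and scipy, with a made-up group size and a fixed seed for reproducibility): run 100 two-sample t-tests in a world where the null hypothesis is true in every one, count how many p-values slip under 0.05 by chance, then apply a Bonferroni correction, one standard multiple comparisons fix that divides the cutoff by the number of tests.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)  # fixed seed so the run is reproducible

n_tests = 100      # number of independent tests
n_per_group = 30   # observations per group in each test (assumed value)
alpha = 0.05       # conventional significance cutoff

# Simulate a world where the null hypothesis is true in every test:
# both groups are drawn from the same standard normal distribution.
p_values = np.empty(n_tests)
for i in range(n_tests):
    a = rng.standard_normal(n_per_group)
    b = rng.standard_normal(n_per_group)
    p_values[i] = stats.ttest_ind(a, b).pvalue

# Uncorrected: expect roughly alpha * n_tests = 5 "significant" results
# even though there is no real effect anywhere.
print("Uncorrected false positives:", np.sum(p_values < alpha))

# Bonferroni correction: compare each p-value against alpha / n_tests,
# which keeps the chance of even one false positive at about alpha.
print("Bonferroni false positives: ", np.sum(p_values < alpha / n_tests))
```

On a typical run the uncorrected count lands near five while the Bonferroni-corrected count is usually zero, which is exactly the trade the post describes: fewer headlines, fewer false positives.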

Hat tip: Brian Kovak
