Gelman, Andrew, and Hal Stern. 2006. "The Difference Between 'Significant' and 'Not Significant' is not Itself Statistically Significant." The American Statistician 60(4): 328–331.
It is common to summarize statistical comparisons by declarations of statistical significance or insignificance. Here we discuss one problem with such declarations, namely that changes in statistical significance are often not themselves statistically significant. By this, we are not merely making the commonplace observation that any particular threshold is arbitrary—for example, only a small change is required to move an estimate from a 5.1% significance level to 4.9%, thus moving it into statistical significance. Rather, we are pointing out that even large changes in significance levels can correspond to small, nonsignificant changes in the underlying quantities.
The error we describe is conceptually different from other oft-cited problems—that statistical significance is not the same as practical importance, that dichotomization into significant and nonsignificant results encourages the dismissal of observed differences in favor of the usually less interesting null hypothesis of no difference, and that any particular threshold for declaring significance is arbitrary. We are troubled by all of these concerns and do not intend to minimize their importance. Rather, our goal is to bring attention to this additional error of interpretation. We illustrate with a theoretical example and two applied examples. The ubiquity of this statistical error leads us to suggest that students and practitioners be made more aware that the difference between "significant" and "not significant" is not itself statistically significant.

This article is a few years old, but I just ran across it. It is a quick read, and yet one more illustration of the many conundrums that arise when one takes classical statistics too literally. I am still working on a way that I am really happy with to teach undergraduates to have a sophisticated understanding of classical significance tests.
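To make the point concrete, here is a minimal sketch in Python (using scipy's normal distribution; the numbers are invented for illustration and are not taken from the paper's examples). One estimate clears the conventional 5% threshold, a second does not, yet the difference between the two is nowhere near significant.

import math
from scipy.stats import norm

def two_sided_p(estimate, se):
    """Two-sided p-value for a normal test of the null hypothesis of zero effect."""
    z = estimate / se
    return 2 * (1 - norm.cdf(abs(z)))

# Two independent estimates with the same standard error (illustrative numbers only).
est_a, se_a = 25.0, 10.0   # z = 2.5, p ~= 0.012: "statistically significant"
est_b, se_b = 10.0, 10.0   # z = 1.0, p ~= 0.32:  "not statistically significant"

print(f"A: {est_a} (se {se_a}), p = {two_sided_p(est_a, se_a):.3f}")
print(f"B: {est_b} (se {se_b}), p = {two_sided_p(est_b, se_b):.3f}")

# The comparison that actually matters: the difference between the two estimates.
# Assuming independence, the standard error of the difference is sqrt(se_a^2 + se_b^2).
diff = est_a - est_b
se_diff = math.sqrt(se_a**2 + se_b**2)
print(f"A - B: {diff} (se {se_diff:.1f}), p = {two_sided_p(diff, se_diff):.3f}")

With these made-up numbers, the difference of 15 has a standard error of about 14.1, giving p around 0.29: one estimate is "significant," the other is not, and yet the two are statistically indistinguishable from each other.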