Publication Exclusive: Breaking down the sham of statistical significance

What does P < .05 actually mean? I am asking you, the reader, right now. This is not a rhetorical question. Take just a moment, and really try to formulate an answer.

If you said, “P < .05 means that there is less than a 5% chance that a study’s results are due to randomness alone,” then you are wrong. And not just wrong in a semantic, pedantic or quibbling sort of way. You are spectacularly, blindingly and egregiously wrong; wrong in a way that reflects a complete misunderstanding of the statistical concept.

You are also wrong if you said, “There is a 95% chance that the study’s findings are true/the null hypothesis is false/the observed results will be replicated.” In fact, given how the average experiment is powered, fewer than half of all studies with P < .05 would have their results replicated if run a second time.

All of this should concern you: despite years of accumulated encounters with this mathematical index and its ubiquitous presence in nearly every major ophthalmology journal, it remains totally unclear, to basically everyone, what this metric actually means.
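To see why the replication claim is plausible, here is a minimal back-of-the-envelope sketch. The specific numbers (a 5% significance threshold, 50% statistical power, and 10% of tested hypotheses being genuinely true) are assumptions chosen only for illustration; they are not figures from any particular study.

```python
# Illustrative sketch: why P < .05 does not imply a high chance of replication.
# All inputs below are assumed values, not data from the article.

alpha = 0.05   # significance threshold
power = 0.50   # assumed power of a typical study
prior = 0.10   # assumed share of tested hypotheses that are actually true

# Probability that a significant result reflects a true effect (positive
# predictive value), via Bayes' rule over true and false hypotheses.
p_sig_true = power * prior          # true effect and significant
p_sig_false = alpha * (1 - prior)   # no effect but significant anyway
ppv = p_sig_true / (p_sig_true + p_sig_false)

# Probability that an exact repeat of the study is again significant:
# real effects replicate at the power rate, false positives at the alpha rate.
p_replicate = ppv * power + (1 - ppv) * alpha

print(f"P(effect is real | P < .05)      = {ppv:.2f}")      # about 0.53
print(f"P(second run also gives P < .05) = {p_replicate:.2f}")  # about 0.29
```

Under these assumptions, a second run of a “significant” study comes back significant less than a third of the time, which is exactly the sort of gap between intuition and arithmetic that makes the usual reading of P < .05 so misleading.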