My students repeatedly ask how to set critical values or interpret p-values in statistical hypothesis testing. My stock answer is that they should do their tests at the 5% level, since this is the most common and accepted practice in the biomedical community (my translation: it's what all the KooL KiDz do).
But now some upstart Bayesian Aggie (who's only published 122 papers) has taken a closer look at p-values and significance levels, and claims the standard critical values are too loose and need tightening. Good-bye 5%, hello 0.5% (for slackers) or 0.1% (for "real" researchers). I suspect this would eliminate entire forests of bullshit journal articles with p-values of 0.05 minus epsilon, and otherwise wreak havoc in academia.
My only grumble is that I'd need bigger samples for many of my teaching examples. I just wrote up a neat demo of the Breusch-Pagan test for heteroskedasticity (the null hypothesis being homoskedasticity), which rejected with a p-value of 0.0308. That ain't gonna cut it in the new Your-Evidence-Ain't-Good-Enough World Order. #@$*&++@#!, twice.
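If you want to poke at the same phenomenon yourself, here's a minimal sketch in Python using statsmodels. This is not my original demo: the data are simulated, and the sample size n = 200 and the coefficients are my own arbitrary choices. It runs a Breusch-Pagan test and checks the p-value against the old cutoff and the two proposed ones.

```python
# A minimal sketch of a Breusch-Pagan demo on simulated data
# (hypothetical numbers throughout; not the original teaching example).
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(42)
n = 200                                   # assumed sample size
x = rng.uniform(0, 10, n)
# Error variance grows with x, so the null of homoskedasticity is false.
y = 2.0 + 0.5 * x + rng.normal(0, 0.5 + 0.2 * x, n)

X = sm.add_constant(x)
fit = sm.OLS(y, X).fit()

# Breusch-Pagan: regress the squared OLS residuals on the regressors;
# under H0 (homoskedasticity) the LM statistic is asymptotically chi-squared.
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(fit.resid, X)

for alpha in (0.05, 0.005, 0.001):
    verdict = "reject" if lm_pvalue < alpha else "fail to reject"
    print(f"alpha = {alpha}: p = {lm_pvalue:.4f} -> {verdict} H0")
```

The point of the loop at the end is the whole complaint in miniature: the same p-value flips from "significant" to "not good enough" as the cutoff moves from 0.05 to 0.005, so a demo that rejects comfortably today may need a bigger sample (or a stronger effect) tomorrow.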
Tip from Briggsy, the Bayesian Bomb-Thrower.