Andrew Gelman - Solve All Your Statistics Problems Using P-Values

By Andrew Gelman

Abstract: There's been a lot of hype in recent years about Bayes, machine learning, etc., using statistics to solve problems from protein folding to survey weighting, from reading CAT scans to recognizing cat pictures, and from prediction to causal inference. But can we really trust any of these claims? Only if p is less than 0.05. In this series of slides, we present a method for determining statistical significance for any problem in statistics or machine learning, and we discuss how the so-called replication crisis in science could be resolved, if people would just treat all statistically significant results as real, and all non-significant results as zero.

Bio: Andrew Gelman is a professor of statistics and political science and director of the Applied Statistics Center at Columbia University. He has received the Outstanding Statistical Application award from the American Statistical Association, the award for best article published in the American Political Science Review, and the Council of Presidents of Statistical Societies award for outstanding contributions by a person under the age of 40. His books include Bayesian Data Analysis (with John Carlin, Hal Stern, David Dunson, Aki Vehtari, and Don Rubin), Teaching Statistics: A Bag of Tricks (with Deb Nolan), Data Analysis Using Regression and Multilevel/Hierarchical Models (with Jennifer Hill), Red State, Blue State, Rich State, Poor State: Why Americans Vote the Way They Do (with David Park, Boris Shor, and Jeronimo Cortina), and A Quantitative Tour of the Social Sciences (co-edited with Jeronimo Cortina).

Twitter: @StatModeling

Presented at the 2019 New York Conference (May 10th, 2019)
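For readers wondering what the tongue-in-cheek "method" in the abstract amounts to, here is a minimal Python sketch (the function names and numbers are illustrative assumptions, not material from the talk): it computes a two-sided p-value for an estimated effect and then applies the rule the abstract satirizes, reporting the estimate at face value if p < 0.05 and as exactly zero otherwise.

```python
import math

def two_sided_p_value(estimate, std_error):
    """Two-sided p-value for a normal test statistic z = estimate / std_error."""
    z = abs(estimate / std_error)
    # Standard normal CDF via the error function: Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def significance_filter(estimate, std_error, alpha=0.05):
    """The rule the abstract satirizes: treat significant results as real,
    and report non-significant results as exactly zero."""
    p = two_sided_p_value(estimate, std_error)
    return estimate if p < alpha else 0.0

# Illustrative numbers (not from the talk): an estimate of 2.1 with SE 1.0 is
# declared "real" (p ~ 0.036), while 1.5 with SE 1.0 becomes zero (p ~ 0.134).
print(significance_filter(2.1, 1.0))  # 2.1
print(significance_filter(1.5, 1.0))  # 0.0
```

The hard 0.05 cutoff here is exactly the dichotomization the talk pokes fun at: two estimates with nearly identical evidence can end up reported as "real" and "zero".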

May 10, 2019