Jonathan Hersh - Does Interpretable Machine Learning *Really* Matter? (aka How I Learned to Stop Worrying and Love the Bayes with rstanarm)

Visit https://rstats.ai/nyr/ to learn more.

Abstract: After decades spent trying to teach computers to think, we now face the problem that AI and ML models often know more than they can communicate to us about why they make certain predictions. Interpretable machine learning, such as LIME or Shapley values, tries to shift that balance by offering a view into the inner workings of our complex models. My collaborator Selina Carter built some machine learning models, I did some interpretable AI, and I somehow convinced her to let me run a randomized controlled trial with 685 employees at a large firm, half of whom received an interpretable AI treatment. Now before you go all Andrew Gelman on me, I want to say that YES, of course I used Bayes to analyze the data. I hadn't used Bayes since the JAGS days, and I want to say that rstanarm is fantastic and the people who created it should be showered with praise. Why am I the one talking here? They're the ones who should be celebrated.

Bio: Jonathan Hersh is a recovering Stata user, a failed professional surfer and lead guitarist, and a firm believer that every model should be regularized. He is an assistant professor of economics and management science at the Chapman University Argyros School of Business and has been an advising scientist at the intersection of machine learning and economics for the World Bank and the Inter-American Development Bank. His work has appeared in MIS Quarterly, Proceedings of the National Academy of Sciences, and The World Bank Economic Review, among other fine journals.

Twitter: https://twitter.com/DogmaticPrior

Presented at the 2021 New York R Conference (September 10, 2021)
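The abstract mentions LIME and Shapley values as ways to peek inside a complex model. For readers curious what that looks like in R, here is a minimal sketch using the iml package on a toy random forest; the model and data are stand-ins, not the models from the talk.

  # Toy example only: a random forest on iris stands in for the talk's models.
  library(iml)
  library(randomForest)

  rf <- randomForest(Species ~ ., data = iris)

  # Wrap the model so iml can query it, then compute Shapley values showing
  # how much each feature pushed one prediction away from the average prediction.
  pred <- Predictor$new(rf, data = iris[, -5], y = iris$Species)
  shap <- Shapley$new(pred, x.interest = iris[1, -5])
  shap$results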
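And since the trial itself was analyzed with Bayes via rstanarm, here is a minimal sketch of the kind of treatment-effect regression the abstract describes. The data are simulated (the trial data are not public), and the variable names and the 0.4 effect size are invented assumptions, not results from the talk.

  library(rstanarm)

  # Simulated stand-in for the trial: 685 employees, roughly half assigned to
  # see interpretable-AI explanations. The effect size here is made up.
  set.seed(2021)
  n <- 685
  treated <- rbinom(n, 1, 0.5)
  outcome <- rbinom(n, 1, plogis(-0.2 + 0.4 * treated))
  trial <- data.frame(outcome, treated)

  # Bayesian logistic regression of the outcome on treatment assignment,
  # with a weakly informative normal prior on the coefficient.
  fit <- stan_glm(outcome ~ treated, data = trial,
                  family = binomial(link = "logit"),
                  prior = normal(0, 2.5), seed = 2021)

  print(fit, digits = 2)
  posterior_interval(fit, prob = 0.95, pars = "treated")

The posterior interval on the treated coefficient is the Bayesian analogue of the treatment-effect estimate a frequentist RCT analysis would report.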