Statistics, Probability, Machine Learning, Data Science
1. Multiple comparisons
Finally had a look into multiple comparisons beyond the Bonferroni correction. Didn't yet get around to reading Gelman's Why We (Usually) Don't Have to Worry About Multiple Comparisons (pdf).
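As a note-to-self, a minimal sketch of what "beyond Bonferroni" can look like in code, using statsmodels' `multipletests`. Holm (a uniformly more powerful control of the family-wise error rate) and Benjamini–Hochberg (which controls the false discovery rate instead) are just two common choices here, and the p-values are made up for illustration:

```python
from statsmodels.stats.multitest import multipletests

# Made-up p-values from a hypothetical batch of eight tests.
p_values = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]

# bonferroni and holm control the family-wise error rate;
# fdr_bh (Benjamini-Hochberg) controls the false discovery rate.
for method in ["bonferroni", "holm", "fdr_bh"]:
    reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method=method)
    print(f"{method}: {reject.sum()} rejections, adjusted p = {p_adjusted.round(3)}")
```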
2. Robust regression, quantile regression
Do you know Warren Buffett's adage "You get what you incentivize for"? Well, in machine learning:
You get what you optimize for.
After weeks of arguing for MAE instead of RMSE for model evaluation in a project, I finally had to eat my own dog food: not just evaluate a model on MAE, but actually optimize for it. This opened up a new world for me: robust versions of regression (e.g. the Huber loss) and quantile regression.
There is a ton to learn here; I'm looking forward to it.
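To see what "optimize for MAE" actually changes, here is a minimal sketch in scikit-learn (my choice of library for illustration, not something the project prescribed): OLS minimizes squared error, `HuberRegressor` minimizes the Huber loss, and `QuantileRegressor` at quantile 0.5 minimizes the pinball loss, whose minimizer is the conditional median, i.e. exactly what MAE rewards. The toy data and outliers are made up:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, HuberRegressor, QuantileRegressor
from sklearn.metrics import mean_absolute_error

# Made-up toy data: a clean linear trend plus a handful of gross outliers.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
y = 2.0 * X.ravel() + rng.normal(0, 1, 200)
y[:10] += 30  # outliers drag a squared-error fit much further than an MAE fit

models = {
    "OLS (optimizes squared error)": LinearRegression(),
    "Huber (robust compromise)": HuberRegressor(epsilon=1.35),
    "Quantile 0.5 (optimizes MAE)": QuantileRegressor(quantile=0.5, alpha=0),
}
for name, model in models.items():
    model.fit(X, y)
    print(f"{name}: train MAE = {mean_absolute_error(y, model.predict(X)):.2f}")
```

A nice bonus: the same `QuantileRegressor` machinery with quantiles other than 0.5 yields prediction intervals, which hints at why quantile regression opens up more than just robust point estimates.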
3. Sorting through my thoughts on Knightian uncertainty
Here. Still some work left to do on extensions of expectation maximization.
- Good formulation – something to live by if I ever return to academia.
- Oh, the sweet, sweet naivety. I'd enjoy working like this, but one should read up on suboptimization.
Productivity, Life Advice
- Not sure if I'll read the book, but the summaries have good formulations of some well-known thoughts.
- Habit-building strategies are a dime a dozen, but this one I hadn't heard before. It sounds cute, and I can absolutely imagine it working for ugh-minefields.
- Pretty good overview of multiple-comparison corrections.
Videos / Lectures
- No, I've just got a mote in my eye.
- Cowen did a good job with the questions and steering the conversation. Highly recommended.