I’ve been reading Daniel Kahneman’s Thinking, Fast and Slow recently, and came across this passage in chapter 21, Intuitions vs. Formulas:

The surprising success of equal-weighting schemes has an important practical implication: it is possible to develop useful algorithms without any prior statistical research. Simple equally weighted formulas based on existing statistics or on common sense are often very good predictors of significant outcomes.

He cites Robyn Dawes on marital stability and Virginia Apgar’s score for assessing the health of newborns, among other examples. The context is the fallibility of intuitive judgment relative to math. Specifically, predictive intuition “works” in the hands of an experienced practitioner in a regular environment. It fails when the judge lacks the necessary experience, or when the environment being predicted is too chaotic to benefit from it. In fact, it fails twice over, because from the inside both circumstances feel the same.
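The Apgar score is a nice illustration of how little machinery an equal-weighting scheme needs. A sketch of the idea (the five signs and the 0–2 scoring are the commonly cited version of the score; the function name and threshold comment are mine):

```python
def apgar(heart_rate, respiration, reflexes, muscle_tone, color):
    """Each sign is rated 0, 1, or 2 by quick inspection at the bedside."""
    signs = [heart_rate, respiration, reflexes, muscle_tone, color]
    assert all(s in (0, 1, 2) for s in signs)
    # Equal weights: no regression, no fitted coefficients -- just add.
    return sum(signs)

total = apgar(2, 2, 1, 2, 1)
print(total)  # 8 -- scores of 7 and above are conventionally reassuring
```

The entire “algorithm” is a sum, which is exactly the point of the passage: the predictive power comes from picking a few sensible variables, not from tuning their weights.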

If the “experience” clause seems obvious, the non-obvious part is figuring out what experience is actually relevant. The example Kahneman gives here is clinical psychology, where shrinks are very good at making intuitive predictions about patients in the clinical setting, but poor at predicting their long-term outcomes. Both are psychological questions. The difference comes from feedback, or the lack thereof. The psychologist has lots of interactions with patients that carry immediate feedback, but only sparse, extremely slow feedback on long-term results. Hence you can have someone with a lifetime of experience who still sucks at predicting things that are superficially within their field. Experience has a less than stellar functional range. This suggests I should think twice about making technical predictions outside my immediate job or hobby work.

For environments too chaotic to predict, look at punditry or the stock market. The success of index funds (the simple formula) over active managers (the experienced pickers) is well documented.

Which brings me back to the point: Developing simple empirical formulas to inform otherwise-difficult decisions, rather than trusting my own intuition, is a Good Idea. I’m noting the technique here because, while I’m reading the book out of simple interest in the subject matter, this strikes me as something that I could directly apply. Past experience suggests that I will probably forget it when I need it; I’m not very good at recalling things I don’t have in cache. This is an attempt to put it in cache.
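To make the technique concrete for myself: Kahneman reports Dawes’s marital-stability formula as, roughly, frequency of lovemaking minus frequency of quarrels. The general recipe is to standardize each predictor so scale doesn’t matter, fix the signs from common sense, and sum with equal weights. A minimal sketch (the function, data, and variable names are all mine, for illustration):

```python
from statistics import mean, stdev

def equal_weight_score(rows, signs):
    """rows: tuples of raw predictor values, one tuple per case.
    signs: +1 if more of the predictor is good, -1 if more is bad."""
    cols = list(zip(*rows))
    # Standardize each column so no predictor dominates just by its units.
    zcols = [[(x - mean(c)) / stdev(c) for x in c] for c in cols]
    # Equal weights: sum the signed z-scores, nothing fitted.
    return [sum(s * z for s, z in zip(signs, case)) for case in zip(*zcols)]

# Hypothetical weekly counts per couple: (lovemaking, quarrels).
couples = [(5, 1), (2, 4), (3, 3), (6, 0)]
scores = equal_weight_score(couples, signs=(+1, -1))
# The (6, 0) couple ranks highest, the (2, 4) couple lowest.
```

The whole thing is a dozen lines, which is the practical appeal: no training data or statistical research required before the formula starts being useful, just “existing statistics or common sense” for choosing the variables and their signs.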