“What made these economists love the efficient market theory is the math was so elegant. And after all, math was what they’d learned to do. To the man with a hammer, every problem tends to look pretty much like a nail. The alternative truth was a little messy, and they’d forgotten the great economist Keynes, who I think said, ‘Better to be roughly right than precisely wrong.’” –Charlie Munger
As the world’s professions and businesses get ever more specialized, there is still considerable value in a broad, multi-disciplinary frame of mind. It is especially helpful in avoiding man with a hammer syndrome.
“Just as many smart people fail in the investment business as stupid ones. Intellectually active people are particularly attracted to elegant concepts, which can have the effect of distracting them from the simpler, more fundamental, truths.” – Peter Cundill
False precision is an especially dangerous kind of error, as we’ll see in a couple of cases below. Just think of all the nonsense spewed in models, forecasts, and target prices, always carried out to the last degree or the second decimal place. Not only does it convey confidence that shouldn’t be there and build up overconfidence internally; it also destroys credibility with anyone on the receiving end who happens to know better.
Hardly a week goes by that I don’t marvel at some case of man with a hammer syndrome. It is especially prevalent in the investment business. What do analysts know how to do? Create models, run DCFs, and write memos. That is the kind of rote behavior – and what Howard Marks would call “first-level thinking” – that is and should be replaced in large part by algorithms.
Weather and climate are an interesting area in this regard. The long-term climate models have not, by and large, held up well over the past 20 or 30 years. They’ve improved a lot and they’re directionally right, but because they called for exactly X degrees of warming by some specific date, and the result was far less than X, the whole framework is being cast into doubt. Had the scientists made more of an effort – some may not have wanted to, while others’ efforts may have been thwarted by noise in the world – to convey reasonable uncertainty and avoid false precision, the result would have been more credibility and a better outcome.
Think of meteorology on just the local scale. Contrary to popular belief, weather forecasting (as distinct from long-term climate modeling) has improved by leaps and bounds in the past few decades. With plenty of rigor, a three-day forecast is now as accurate as a 24-hour forecast was a decade or two ago. Hurricane tracks can be reliably forecast to within a few dozen miles up to a week in advance; more than 24 hours of useful lead time was a miracle 30 years ago. But these facts are lost on the public for many reasons, and a big one in my opinion is the communication of forecasts. Meteorologists have math/science backgrounds and now have access to supercomputers that can produce pinpoint forecasts for any location down to the zip code. And so what do they do? They issue precise, pinpoint forecasts even though such precision is totally useless. The average forecast reader or listener is clueless – they often don’t know the first thing about what the forecast is even trying to say – but they’re not that dumb. The forecast is communicated with certainty when no such certainty exists, and so most people lose faith in the forecaster. Why on earth does almost no one issue a forecast with a range (even a small range) of temperatures throughout the day, along with a range of probabilities for storms and precipitation?
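As a toy illustration of the point above – this mirrors no real weather service’s format, and all the field names are invented for the sketch – the kind of forecast being asked for could be expressed as ranges and probabilities rather than a single point:

```python
from dataclasses import dataclass

@dataclass
class Forecast:
    """A forecast expressed as a range plus a probability, not a point estimate.

    All names here are illustrative, not any real weather API.
    """
    temp_low_f: int      # lower bound of the expected high, degrees F
    temp_high_f: int     # upper bound of the expected high, degrees F
    precip_prob: float   # chance of measurable precipitation, 0.0-1.0

    def describe(self) -> str:
        # Communicate honest uncertainty instead of a false pinpoint number.
        return (f"High between {self.temp_low_f} and {self.temp_high_f} F, "
                f"{round(self.precip_prob * 100)}% chance of rain")

today = Forecast(temp_low_f=68, temp_high_f=74, precip_prob=0.3)
print(today.describe())
# prints: High between 68 and 74 F, 30% chance of rain
```

The point is not the code but the interface: a reader who sees "68–74, 30% chance of rain" can never catch the forecaster being "wrong" in the way a pinpoint "72 and sunny" invites.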
In 1997 Buffett wrote a reinsurance policy for the California Earthquake Authority, risking about $600 million of book value and ~1.5% of Berkshire’s market value. (Buffett notes that Berkshire’s securities portfolio has subjected the company to far greater volatility.) The deal involved $1 billion of risk after $5 billion of aggregate insured losses had been incurred, and it was shopped at a price of 5 times the estimated actuarial value, but it found no takers. That is because the quantitative analysis at other firms ran into a dead end upon realizing the true odds could not be precisely estimated. The 1994 Northridge quake also loomed large in many people’s memories: it had “laid homeowners’ losses on insurers that greatly exceeded what computer models had told them to expect. Yet the intensity of that quake was mild compared to the ‘worst-case’ possibility for California.” Unsurprisingly, Buffett stepped in. “So what are the true odds of our having to make a payout during the policy’s term? We don’t know – nor do we think computer models will help us, since we believe the precision they project is a chimera. In fact, such models can lull decision-makers into a false sense of security and thereby increase their chances of making a really huge mistake… Even if perfection in assessing risks is unattainable, insurers can underwrite sensibly. After all, you need not know a man’s precise age to know that he is old enough to vote nor know his exact weight to recognize his need to diet.”
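A back-of-the-envelope sketch shows why the 5x pricing leaves so much margin for error. The $1 billion limit and the 5x-actuarial price come from the letter; the 1% modeled payout probability below is purely a hypothetical placeholder, since Buffett’s whole point is that nobody knows the true number:

```python
# The $1B limit and 5x-actuarial pricing come from the 1996 letter;
# the 1% modeled payout probability is a hypothetical placeholder.
limit = 1_000_000_000            # policy limit: $1 billion of risk
p_modeled = 0.01                 # hypothetical modeled annual payout probability
premium = 5 * p_modeled * limit  # priced at 5x the estimated actuarial value

# Expected profit if the *true* payout probability differs from the model.
# Break-even sits at 5x the modeled probability, so precision is unnecessary:
# the underwriter only needs rough confidence that the true odds are well
# below that line ("roughly right" beats "precisely wrong").
for p_true in (0.01, 0.03, 0.05, 0.08):
    expected_profit = premium - p_true * limit
    print(f"true p = {p_true:.0%} -> expected profit ${expected_profit / 1e6:+,.0f}M")
```

Under these assumed numbers, the modeled probability could be off by a factor of nearly five and the policy would still have positive expected value, which is exactly the "old enough to vote" logic in the quote.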
Extrapolation is a derivative of this tendency too. As Kahneman and Tversky found, people use heuristics to help them make decisions, but a special problem develops when people extrapolate heuristics from situations where they are appropriate to situations where they simply do not work. As always, there must be a robust, multi-disciplinary latticework of mental models – to use Munger’s preferred phrasing – if there is going to be any success in the real world. It doesn’t take much searching to find business leaders who attach themselves to one idea or framework and try to apply it in the wrong area.
There may not be a single trait more effective in countering man with a hammer syndrome than simple intellectual curiosity. And “Curiosity Tendency” is its own tendency in Munger’s revised framework. The curiosity trait may be partially or mostly innate, but it can also be cultivated. I remember going to the library in college and, to the detriment of my grades, spending four, six, sometimes eight hours completely lost in something that had absolutely nothing to do with my courses. Ted Weschler recently said, “I spend the vast majority of my day reading. I try to make about half of that reading random.” I don’t know how to quantify the benefit of such an approach, but I think we’ve all experienced the opposite when a strict, narrow, stifling atmosphere acts as a constraint.
“The secret to doing good research is always to be a little underemployed. You waste years by not being able to waste hours.” – Amos Tversky
There’s Always Something to Do: The Peter Cundill Investment Approach by Christopher Risso-Gill: https://goo.gl/aMK2A0
https://www.hks.harvard.edu/fs/rzeckhau/InvestinginUnknownandUnknowable.pdf and http://www.berkshirehathaway.com/letters/1996.html
The Undoing Project by Michael Lewis.