Imagine two groups of investors. The first makes its investments on an ad-hoc basis: whenever someone has an idea for a new investment, they hold a meeting and decide whether to act on it. The second makes decisions by using formal models to regularly assess the value of its current and potential investments.
Who would you trust with your money?
You would probably go with the second group, trusting their disciplined approach. But while the second group would undoubtedly have a better day-to-day grasp of its investments, its consistent approach to investing may lead it to be overconfident.
This is the implication of a forthcoming article in the Journal of Personality and Social Psychology, “The hobgoblin of consistency: Algorithmic judgment strategies underlie inflated self-assessments of performance.” The paper investigates how taking a consistent, algorithmic approach to problem solving can lead to overconfidence (abstract).
The authors studied participants solving problems in areas like financial investing and logical reasoning. Afterward, the participants rated their satisfaction with their performance and their confidence in their solutions. The authors found that the less ad-hoc and the more standardized a participant's approach to solving a problem, the higher that participant rated their performance, regardless of whether they solved the problem correctly. The study concludes that formalized approaches to solving problems seem to lead people to be overconfident in their solutions and performance.
How does this play out in the real world?
A major, disastrous illustration of the principle is the Vietnam War. Secretary of Defense Robert McNamara, who believed in the value of statistical analysis, used metrics like body counts to make judgments about America's ability to win the war. While data-driven analysis can be very valuable, McNamara's formal analysis of flawed metrics made him overconfident in his predictions of progress and victory.
A more recent example comes from the use of formulas and models on Wall Street. In an article for Wired, financial journalist Felix Salmon examined a Gaussian copula function created by David Li, a quantitative analyst who worked in finance until 2008. Described by Salmon as "the formula that killed Wall Street," it is credited as a significant cause of the financial crisis.
While Salmon breaks down the problems with the formula at great length, the basic premise is that the formula oversimplified the process of calculating the risk of pools of bonds. It ignored the nearly impossible work of analyzing the risk of each bond and the correlations between those risks (in effect, the likelihood that if one bond fails, another will as well - due to, say, a massive financial crisis). Instead, it looked at the price of each bond's corresponding credit default swap (essentially a bet on whether the bond would default or not), and took price changes on the credit default swap market as a sign of whether risk was increasing or decreasing.
Among other simplifications, this rested on the assumption that markets could price the risk of default correctly. As Salmon notes in his article, although people recognized the limitations of Li's formula, Wall Street embraced it to create a massive new market in pools of bonds, seduced by its simplicity. Just like the participants in the professors' study, they were overconfident because of the standardized and formulaic nature of their analysis of risk. As a result, they amassed enormous risks in the bond markets, completely unprepared for the day when their model would fail.
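To make the correlation assumption at stake concrete, here is a minimal sketch of a one-factor Gaussian copula default model in Python. It is an illustration of the general technique, not Li's exact formulation; the function name, parameters, and Monte Carlo approach are my own assumptions. Two bonds default when a latent "asset value" variable falls below a threshold set by each bond's marginal default probability, and a single shared factor (weighted by the correlation parameter `rho`) ties the two together:

```python
import math
import random
from statistics import NormalDist

def joint_default_prob(p1, p2, rho, n=100_000, seed=1):
    """Monte Carlo estimate of the probability that two bonds both
    default, under an illustrative one-factor Gaussian copula.

    p1, p2 -- marginal default probabilities of each bond
    rho    -- correlation driven by a shared market factor (0 to 1)
    """
    nd = NormalDist()
    # Thresholds: bond i defaults when its latent variable < t_i,
    # chosen so the marginal default probability is exactly p_i.
    t1, t2 = nd.inv_cdf(p1), nd.inv_cdf(p2)
    a, b = math.sqrt(rho), math.sqrt(1 - rho)
    rng = random.Random(seed)
    joint = 0
    for _ in range(n):
        z = rng.gauss(0, 1)               # shared market factor
        x1 = a * z + b * rng.gauss(0, 1)  # latent variable, bond 1
        x2 = a * z + b * rng.gauss(0, 1)  # latent variable, bond 2
        joint += (x1 < t1) and (x2 < t2)
    return joint / n

# With rho = 0 the bonds default independently, so the joint
# probability is roughly p1 * p2; with high rho, a bad draw of the
# shared factor drags both bonds down at once, so it is far larger.
independent = joint_default_prob(0.05, 0.05, rho=0.0)
correlated = joint_default_prob(0.05, 0.05, rho=0.9)
```

The danger the article describes lives in that single `rho`: collapsing the messy, shifting relationships between bonds into one number (inferred from credit default swap prices) made the model tractable, and its tidiness made its users overconfident that tail risks were under control.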
Standard processes for problem solving, like formal models and A/B testing, are useful tools, but they always have limitations. Solving hard problems always involves an element of uncertainty. It is easy to escape the worry of whether you made the right decision by trusting a model or SOP. But if you outsource your decision making to SOPs, formulas, and models, your tools' limitations will become your own.