Limits of information for trading and investing

Bad advice to avoid

“It is consistently taught in business schools across the world that the more information we absorb, the better investment decisions we will make.”

It makes sense, right?

But it’s wrong.

The quote above is a paraphrase of Adam Robinson's answer to the question: “What is bad advice to avoid?”

Enter Paul Slovic and Bernard Corrigan, circa 1974

Robinson mentions a paper written by Paul Slovic in 1974. However, I had trouble finding it and only encountered references to an unpublished paper in which Slovic and Bernard Corrigan conducted an experiment with very intriguing and bloggable implications for investors and traders. And heck, for business managers in general.

I also found a series of notes written by Slovic himself titled “Toward Understanding and Improving Decisions” that contain a plethora of thought-provoking findings. Link here.

But let’s focus on the question of bad advice. The pair went to an arena of culture, cult, apogee and ruin: the racetrack.

Improving your odds at the racetrack

Slovic and Corrigan chose eight expert horserace gamblers (they call them “handicappers”) and asked them to predict the results of 45 races.

Their task: to forecast the order of the top five horses in each race.

Their tools: information and their own judgment.

How much information? They were presented with a set of 88 variables taken from the horses’ past performance. They were asked to pick the five variables that would guide their predictions. Five only.

Then they were asked to select 10 variables, then 20, and finally 40.

The results

Here is the awesome part.

The researchers found that the accuracy of the experts’ predictions was the same regardless of the number of variables they used.

What’s more fascinating is that, when presented with the five-variable set, the experts’ confidence level was in line with their prediction accuracy*. In other words, they were well calibrated: if, say, a gambler’s predictions were accurate 20% of the time, their confidence level was also around 20%.

(This alignment between prediction accuracy and confidence level serves as a risk management mechanism that keeps the gambler alive. If your calibration is way off the mark, you’ll inevitably end up placing bets that are too large, and your chances of survival are zero.)
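
To make the survival point concrete, here is a minimal Python sketch using the Kelly criterion for bet sizing. This is my illustration, not anything from Slovic's paper, and the odds and probabilities are invented; the point is simply that the stake is computed from the believed win probability, so overconfidence translates directly into oversized bets.

```python
# Kelly criterion: fraction of bankroll to stake on a bet paying `net_odds`
# to 1, given the bettor's *believed* probability of winning.
def kelly_fraction(p_believed: float, net_odds: float) -> float:
    return (p_believed * net_odds - (1.0 - p_believed)) / net_odds

net_odds = 4.0  # a 4-to-1 payout, roughly fair for a true 20% win rate

calibrated = kelly_fraction(0.20, net_odds)     # believes 20%, is right 20% of the time
overconfident = kelly_fraction(0.35, net_odds)  # believes 35%, is still right only 20%

print(f"calibrated stake:    {calibrated:.0%} of bankroll")     # 0%: no edge, no bet
print(f"overconfident stake: {overconfident:.0%} of bankroll")  # ~19%: oversized, repeatedly
```

The calibrated bettor correctly sees no edge and stakes nothing; the overconfident one keeps risking roughly a fifth of the bankroll on bets with no edge at all, which in the long run grinds the bankroll down to nothing.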

Caffeinating confidence

When the same experts used the 10-, 20-, and 40-variable sets, their confidence level rose progressively and significantly, yet their prediction accuracy remained unchanged.

WTF?
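
One loose statistical analogy (mine, not the study's) for why this can happen: when only a handful of variables carry real signal, piling more of them into a model keeps improving the in-sample fit (a rough stand-in for felt confidence), while genuine out-of-sample accuracy gets no better. The sketch below uses invented numbers; only the shape of the result matters.

```python
import numpy as np

# Illustrative only: 40 candidate variables, but just the first 5 carry signal.
rng = np.random.default_rng(0)
n_train, n_test, n_vars = 100, 2000, 40

X_train = rng.normal(size=(n_train, n_vars))
X_test = rng.normal(size=(n_test, n_vars))
true_w = np.zeros(n_vars)
true_w[:5] = 1.0  # only five variables actually matter
y_train = X_train @ true_w + rng.normal(scale=2.0, size=n_train)
y_test = X_test @ true_w + rng.normal(scale=2.0, size=n_test)

def r_squared(y, y_hat):
    return 1.0 - np.mean((y - y_hat) ** 2) / np.var(y)

for k in (5, 10, 20, 40):
    w, *_ = np.linalg.lstsq(X_train[:, :k], y_train, rcond=None)
    in_fit = r_squared(y_train, X_train[:, :k] @ w)   # never drops as k grows
    out_fit = r_squared(y_test, X_test[:, :k] @ w)    # no better (often worse) past k=5
    print(f"{k:2d} variables: in-sample R^2 {in_fit:.2f}, out-of-sample R^2 {out_fit:.2f}")
```

The in-sample number can only climb as variables are added, which is roughly what unchecked confidence feels like; the out-of-sample number is the one that pays.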

Extrapolating to global capital markets

This phenomenon is not confined to the world of horse racing: it is (likely) very much alive in the financial markets, and in any area of expertise where large quantities of data must be processed by skilled human judgment.

Slovic concludes that:

“These results should give pause to those who believe they are better off getting as much information as possible prior to making a decision.”

The questions for us are: What is our five-variable set? And how do we know when we have passed the point beyond which more information no longer improves our predictions?

If these questions are unanswerable, perhaps our focus should lie instead on risk management.

(*according to Robinson, as I didn’t find this information explicitly in Slovic’s papers)