Businesses rely on numbers. Some use more than others, but at the heart of every company is a steady stream of numbers. There are two types of numbers (a.k.a. metrics) we rely on in business: 1) those that measure past or current performance, and 2) those that forecast future performance. The past and current metrics are often difficult to gather and precisely measure. Those of us specializing in customer consulting and analyzing customer feedback have to measure, and craft strategies based on, customer perceptions and attitudes, which are notoriously difficult to measure and track.
While companies rely heavily on current metrics to run and evaluate their business, we have become extremely reliant on forecasted metrics for our strategic planning and decision-making. And if it's difficult to get accurate current metrics, how much harder is it to get accurate future metrics? However, executives, business leaders, and investors everywhere want, even demand, forecasts of key operating metrics with little regard for the certain inaccuracies of the forecast.
The extremely intriguing book The Black Swan: The Impact of the Highly Improbable by Nassim Nicholas Taleb, which took me all last summer to read, argues that humans are awful at understanding the causal mechanisms that produced current or past events. We are, therefore, incapable of accurately judging the likelihood of future events, except in a few small and somewhat meaningless domains.
He also discusses the impact of a forecasted number on human decision-making. Citing experimental results from a pair of psychologists, he illustrates a mental mechanism called anchoring. In one experiment, subjects spun a wheel of numbers and focused on the result, which they knew was completely random. They were then asked to estimate the number of African countries in the UN. Those who had spun a low number estimated significantly fewer countries than those who had spun a high number. In another experiment, subjects were asked for the last four digits of their Social Security number and then asked to estimate the number of dentists in Manhattan. Their estimates were strongly correlated with those four digits.
In other words, giving someone a number, no matter how heavily caveated, immediately changes how they think. People lower their anxiety about future uncertainty by anchoring to a number that purports to "predict" the future. If people's thinking is so easily swayed by numbers they know to be random, imagine the impact of a number produced by fancy algorithms that decision-makers have some reason to believe in.
Obviously, a lot more can, and has been, written on this topic, but I want to provide a few ideas that can help us approach the process of forecasting, and the communication of those forecasts, in a more honest way.
- Acknowledge the potential inaccuracy of forecasts. They say acceptance is the first step to recovery.
- Forecast ranges instead of single numbers. Acknowledge the variability of the outcome. I'm currently preparing for a trip to Sonoma, CA. If I considered only the daily high or the average daily temperature when packing, I would take nothing but shorts and t-shirts. But I know to consider the daily range of temperatures and to pack a light coat for the cool evenings. (In my home state of Indiana, the weather currently ranges from incredibly hot and humid to just mildly hot and humid, so I don't even know where my light coat is!) All this to say: give your audience a range of likely outcomes based on the variability of the metric, not just one number.
- Actually look at the accuracy of past forecasts of the metric. It's amazing how seldom people look back at the accuracy of past forecasts before running the forecast for the next year - often using the same forecasting method even though the last forecast was horribly inaccurate. Why? Sometimes the last forecast has already been forgotten. It was used to craft a strategy and then never looked at again. Other times it is because we dismiss the inaccuracy as the result of unexpected events or causes that are "outside of the model" (exogenous variables for you quants). Well, if those events happened last time, why do we think other "unexpected" events won't happen again?
- Always be mindful of the potential for a large, random event that completely changes the conditions of the forecast. Even if a metric has been relatively stable in the past, don't assume it is immune to the effect of a highly improbable event.
- Don't produce a forecasted number that you know to be inaccurate just because "it's your job." This is at best unethical, and at worst criminal. There is ample recent evidence that unthinking, or unethical, forecasters can cause substantially more damage to society than criminals.
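Two of the suggestions above, auditing past forecast accuracy and replacing a point forecast with a range, can be sketched in a few lines of Python. The numbers, the MAPE accuracy check, and the plus-or-minus two standard deviation band are all illustrative assumptions, not a recommended forecasting model:

```python
import statistics

# Hypothetical history: each year's forecast vs. what actually happened
# (made-up numbers, for illustration only).
forecasts = [100.0, 110.0, 125.0, 130.0]
actuals   = [ 95.0, 118.0, 112.0, 141.0]

# 1) Audit past accuracy: mean absolute percentage error (MAPE).
errors = [a - f for f, a in zip(forecasts, actuals)]
mape = statistics.mean(abs(e) / a for e, a in zip(errors, actuals))

# 2) Turn a new point forecast into a range by widening it with the
#    observed spread of past errors (a rough +/- 2 standard deviation band).
point = 150.0
spread = 2 * statistics.stdev(errors)
low, high = point - spread, point + spread

print(f"MAPE of past forecasts: {mape:.1%}")
print(f"Forecast range: {low:.0f} to {high:.0f}")
```

The point of the sketch is the habit, not the math: before issuing next year's number, compute how wrong last year's was, and let that error history set the width of the range you communicate.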
So be responsible when you forecast. Remember that other people's livelihood often relies on the forecasts you produce and the impact those forecasts have on decision-making.
As a parting side note, you'll be interested to know that weather forecasters are far more accurate than economic forecasters. Think of that the next time you're watching the rain out your window with a fully packed picnic basket sitting next to you.

Troy Powell, Ph.D.
Vice President, Statistical Solutions