“THE only function of economic forecasting is to make astrology look respectable,” John Kenneth Galbraith, an irreverent economist, once said. Since economic output represents the aggregated activity of billions of people, influenced by forces seen and unseen, it is a wonder forecasters ever get it right. Yet economists cannot resist trying. As predictions for 2016 are unveiled, it is worth assessing the soothsayers’ records.
Forecasters usually rely on two different predictive approaches. One is theory-based, shaped by how economists believe economies behave. The other is data-based, shaped by how economies have behaved in the past. The simplest of the theoretical bunch is the Solow growth model, named after Robert Solow, a Nobel-prize-winning economist. It posits that poorer countries should generally invest more and grow faster than rich ones. Central banks and other big economic institutions use far more complicated formulas, often grouped under the bewildering label of “dynamic stochastic general equilibrium” (DSGE) models. These try to anticipate the ups and downs of big economies by modelling the behaviour of individual households and firms.
The empirical approach is older; indeed, it was the workhorse of government forecasting in the 1940s and 1950s. Data-based models analyse the relationship between hundreds or thousands of economic variables, from the price of potatoes to snowfall in January. They then work out how zinc sales, for example, affect investment and growth in the years that follow.
Both strategies have faced withering criticism. DSGE models, for all their complexity, are typically built around oversimplifications of how markets function and people behave. Data-based models suffer from their own shortcomings. In a paper* published in 1995 Greg Mankiw of Harvard University argued that they face insurmountable statistical problems. Too many things tend to happen at once to isolate cause and effect: liberalised trade might boost growth, or liberalisation might be the sort of thing that governments do when growth is rising, or both liberalisation and growth might follow from some third factor. And there are too many potential influences on growth for economists to know whether a seemingly strong relationship between variables is real or would disappear if they factored in some other relevant titbit, such as the wages of Canadian lumberjacks.
In practice, most forecasters combine the two approaches and inject, when necessary, a dose of common sense. The IMF, for instance, relies on a global model, built in part on economic theory and in part on data analysis. The global projections generated by that hybrid model are combined with country-specific details to produce country-level forecasts. The country forecasts are then checked for consistency against the global projections and adjusted when necessary—to make sure, for example, that most countries do not show strong trade growth when the global projection heralds a decline in trade. A recent analysis of the IMF’s forecasts by the organisation’s Independent Evaluation Office concluded that their accuracy was “comparable to that of private-sector forecasts”. But how accurate is that?
Not very, Lant Pritchett and Larry Summers of Harvard University argued in 2014. Forecasters, they found, overestimate the extent to which the future will look like the recent past: fast-growing countries are assumed to keep speeding along while the economic tortoises continue crawling. The IMF, for instance, reckons that China’s GDP growth will decline gently to 6% a year by around 2017, and then accelerate slightly. That is highly unlikely, say Messrs Pritchett and Summers: “Regression to the mean is perhaps the single most robust and empirically relevant fact about cross-national growth rates.” In other words, booming countries slow down and slumping ones speed up.
The IMF publishes forecasts for 189 countries twice a year, in April and October, for the year in question and the following one. The Economist has conducted an analysis of them from 1999 to 2014, and compared their accuracy with several slightly less sophisticated forecasting methods: predicting that a country will grow at the same pace as the year before, guessing 4% (which is the average growth rate across all countries during the period) and picking a random number from -2% to 10%. For each method, the absolute difference between the actual and predicted growth rates is calculated and then averaged. The lowest average is taken to be the best performance.
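The horse race described above can be sketched in a few lines of code. The growth figures below are made up for illustration; the actual exercise used IMF forecasts for 189 countries from 1999 to 2014.

```python
import random

random.seed(1)

# Hypothetical growth outcomes in percentage points; the real exercise
# used IMF forecasts for 189 countries from 1999 to 2014.
actual = [3.1, 4.5, -1.2, 2.8, 6.0, 3.3, 0.9, 5.2]

def mean_abs_error(predictions, outcomes):
    """Average absolute gap, in percentage points, between forecast and fact."""
    return sum(abs(p - a) for p, a in zip(predictions, outcomes)) / len(outcomes)

outcomes = actual[1:]                 # the years being forecast
prev_year = actual[:-1]               # guess last year's rate again
always_four = [4.0] * len(outcomes)   # guess the 4% cross-country average
random_guess = [random.uniform(-2, 10) for _ in outcomes]  # guess at random

for name, preds in [("previous year", prev_year),
                    ("constant 4%", always_four),
                    ("random -2% to 10%", random_guess)]:
    print(f"{name:>17}: off by {mean_abs_error(preds, outcomes):.2f} points on average")
```

Whichever method produces the smallest average error wins the race.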
Encouragingly, the guesses produced by our random-number generator performed worst (see chart); it yielded predictions that were off by 4.4 percentage points on average. Predicting the previous year’s growth rate came last-but-one, as Messrs Pritchett and Summers might have foreseen. The projections the IMF made in October of the year being forecast, which were off by an average of 1.5 percentage points, unsurprisingly did best; by that point plenty of actual economic data are available. Yet the quality of the IMF’s forecasts deteriorates surprisingly quickly the further from the end of the year in question they are made. Those from April of the preceding year are only slightly more accurate than those generated using the average growth rate.
No one expects the Spanish recession
An important caveat is in order. Forecasts of all sorts are especially bad at predicting downturns. Over the period, there were 220 instances in which an economy grew in one year before shrinking in the next. In its April forecasts the IMF never once foresaw the contraction looming in the next year. Even in October of the year in question, the IMF’s forecasts registered a recession already under way only half the time. To be fair, an average-growth prediction also misses 100% of recessions. One model does better, though. Our random-number generator correctly forecast the start of a recession 18% of the time.
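The random guesser’s knack for calling recessions is no mystery: a draw from a uniform range of -2% to 10% lands below zero in the two-point slice of a 12-point span, or one-sixth of the time, which squares with the 18% observed. A quick simulation, assuming a uniform draw as in our exercise, bears this out.

```python
import random

random.seed(0)

# A guess drawn uniformly from -2% to 10% is negative whenever it falls
# in the 2-point slice below zero out of a 12-point range: 2/12 ~ 17%.
draws = [random.uniform(-2, 10) for _ in range(100_000)]
share_negative = sum(d < 0 for d in draws) / len(draws)
print(f"share of random guesses predicting a contraction: {share_negative:.1%}")
```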
* “”, Gregory Mankiw, Brookings Papers on Economic Activity, 1995.
“”, Lant Pritchett and Lawrence Summers, NBER Working Paper 20573, 2014.