The Soothsayers of Macroeconometrics
Monday, September 19, 2011
Applying macroeconometric models to questions of fiscal policy is the equivalent of using pre-Copernican astronomy to launch a satellite.
How many jobs will the latest stimulus package create? What will it do to GDP?
For answers to questions like these, the press always turns to the usual suspects: the proprietors of macroeconometric models, which are maintained by some economic consulting firms and by the Congressional Budget Office. (The Federal Reserve Board also maintains a model, but the Fed tries to refrain from injecting its model into fiscal policy debates.)
I think that if the press were aware of the intellectual history and lack of scientific standing of the models, it would cease rounding up these usual suspects. Macroeconometrics stands discredited among mainstream academic economists. Applying macroeconometric models to questions of fiscal policy is the equivalent of using pre-Copernican astronomy to launch a satellite or using bleeding to treat an infection.
The History of Macroeconometrics
Macroeconometrics is the fitting of equations suggested by economic theory to historical data. For example, John Maynard Keynes famously suggested that consumer spending would follow a “consumption function,” in which spending would rise with income in a predictable fashion. Macroeconometricians will fit a consumption function to historical data in order to try to forecast the effect of, say, a tax cut on consumer spending.
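To make the idea concrete, here is a minimal sketch of the exercise the author describes: fit a linear consumption function to data and read off a projected spending effect. The data, the $100 billion tax cut, and every number below are invented for illustration; this is not any actual model's specification.

```python
# A toy consumption-function fit on invented data (not a real model).
import numpy as np

rng = np.random.default_rng(0)
income = np.linspace(5_000, 10_000, 40)                    # disposable income, $bn
consumption = 300 + 0.9 * income + rng.normal(0, 50, 40)   # simulated "history"

# Ordinary least squares fit: consumption = a + b * income
b, a = np.polyfit(income, consumption, 1)

# Project the spending boost from a hypothetical $100bn tax cut
tax_cut = 100
boost = b * tax_cut
print(f"estimated MPC = {b:.2f}, projected boost = ${boost:.0f}bn")
```

The fitted slope b is the estimated marginal propensity to consume; the forecast simply assumes the historical relationship carries over to the policy change — which is exactly the assumption the rest of this article calls into question.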
Macroeconometrics was once at the cutting edge of economic research. The very first Nobel Prizes in economics, awarded in 1969, went to Jan Tinbergen and Ragnar Frisch, two pioneers of macroeconometrics. Macroeconometrics was honored again in 1975 and 1980, when Tjalling Koopmans and Lawrence Klein were awarded Nobels.
These and other pioneers overcame a number of technical hurdles facing macroeconometricians. However, in subsequent decades new problems emerged, and some of these problems proved to be insoluble.
In 1976, Robert Lucas suggested that economic behavior could respond to policy changes in ways that would cause macroeconometric models to make systematic errors. Lucas was awarded a Nobel in 1995. It has since become standard in economic research that empirical work must withstand “the Lucas critique.” Because traditional macroeconometric models fail to do so, they have disappeared from peer-reviewed journals in economics.
I personally am among a minority of economists who did not take the Lucas critique as definitive. I continued to treat macroeconometric modeling as valid when I was in graduate school in the late 1970s and when I worked with macroeconometric models at the Fed in the early 1980s. However, over time I became aware of other problems that I came to believe are even more devastating for the macroeconometric project.
One problem affects a broad spectrum of econometric research. This is the problem of “specification searches,” emphasized by Edward Leamer in 1983 in a famous article and a subsequent book.
Econometric modeling is supposed to serve the same function as a controlled experiment. In macroeconometrics, this means that the fourth quarter of 1988 can be made equivalent to the first quarter of 2009, once the irrelevant differences have been taken into account. The process of telling the computer how to identify and remove irrelevant differences is called specification.
The problem is that when an econometrician feeds data into a computer in order to fit an equation, there are a large number of plausible specifications. This means that each researcher can, in effect, “coach” or “prompt” the computer in order to get that researcher's desired results. Leamer charged that rather than being driven by the data, as might be supposed, results were being driven by the investigator's interventions.
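Leamer's point can be illustrated with a toy simulation (entirely invented data, not any published study): the estimated effect of a “policy” variable swings widely depending on which of a handful of candidate control variables the investigator chooses to include, so a motivated researcher can shop among specifications for a congenial answer.

```python
# A toy specification search: one outcome, one policy variable, four
# candidate controls, and every possible choice of which controls to include.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(1)
n = 60
controls = rng.normal(size=(n, 4))
policy = controls @ [0.5, -0.5, 0.3, 0.0] + rng.normal(size=n)
outcome = 1.0 * policy + controls @ [2.0, -2.0, 1.0, 0.5] + rng.normal(size=n)

estimates = []
for k in range(5):
    for subset in combinations(range(4), k):    # one regression per specification
        X = np.column_stack([np.ones(n), policy] + [controls[:, j] for j in subset])
        beta = np.linalg.lstsq(X, outcome, rcond=None)[0]
        estimates.append(beta[1])               # coefficient on the policy variable

print(f"{len(estimates)} specifications; policy effect ranges "
      f"from {min(estimates):.2f} to {max(estimates):.2f}")
```

The true effect in the simulation is 1.0, yet the sixteen specifications produce a wide band of estimates. Reporting only the specification that yields the desired number is the “coaching” Leamer described.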
As the import of Leamer's criticisms sank in, the practice of empirical economics changed. Instead of trying to use specifications to control for unwanted confounding factors, researchers look for “natural experiments” that do not require subjective interventions on the part of the investigator. The new approach has been described by Joshua D. Angrist and Jörn-Steffen Pischke as “the credibility revolution in econometrics.” Steven Levitt (of Freakonomics fame) and a host of other young researchers have used these newer techniques to produce more reliable results.
But not in macroeconometrics. As Angrist and Pischke point out, macroeconomic data does not lend itself to the same opportunities to avoid specification searches. Sadly, the credibility revolution has passed macroeconometrics by.
Another devastating insight into macroeconomic data was provided by Charles R. Nelson and Charles R. Plosser in 1982. One of the challenges with trying to use the fourth quarter of 1988 as a controlled experiment for the first quarter of 2009 is that the size of the economy changes over time. What Nelson and Plosser showed is that the then-standard method of adjusting for this, known as “de-trending” the data, was improper. Instead, the data had to be “differenced,” meaning that the researcher should look at the changes from one quarter to the next. Unfortunately, once the data are “differenced,” very little signal remains, and the relationships that macroeconometricians wish to quantify can no longer be found.
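The Nelson-Plosser problem can be seen in a small simulation (purely artificial series, no real macro data): two independent random walks, regressed on each other in levels, routinely show a seemingly strong relationship; once the series are differenced, the apparent signal vanishes.

```python
# Spurious regression in levels vs. differences, averaged over 50 simulations.
import numpy as np

def r_squared(x, y):
    """R^2 from an OLS regression of y on a constant and x."""
    X = np.column_stack([np.ones_like(x), x])
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return 1 - resid.var() / y.var()

levels_r2, diff_r2 = [], []
for seed in range(50):
    rng = np.random.default_rng(seed)
    walk1 = np.cumsum(rng.normal(size=400))     # two *independent* random walks
    walk2 = np.cumsum(rng.normal(size=400))
    levels_r2.append(r_squared(walk1, walk2))
    diff_r2.append(r_squared(np.diff(walk1), np.diff(walk2)))

print(f"mean R^2 in levels:      {np.mean(levels_r2):.2f}")
print(f"mean R^2 in differences: {np.mean(diff_r2):.3f}")
```

The levels regressions report substantial explanatory power even though the two series are unrelated by construction, while the differenced regressions correctly find almost none — the "very little signal remains" problem in miniature.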
The proprietors of macroeconometric models have not taken the Lucas critique into account. They have done nothing about the influence of specification searches. And they have not ended the improper use of “de-trending” instead of differencing. They persist in using methods that are vulnerable to all of these major criticisms, and more. That is why the models cannot be taken seriously by academic researchers.
Good enough for government work?
Given that the models have no credibility among researchers, why is it that they are used by policy makers, such as the president's Council of Economic Advisers and the Congressional Budget Office? Greg Mankiw, who has been both a researcher and a policy maker, says that it is because the researchers have failed to come up with a better alternative.
Imagine if somehow we knew how to launch satellites but still believed in pre-Copernican astronomy. We would have no choice but to send satellites into space using calculations that assumed that the earth is the center of the universe.
In the case of fiscal policy, I think that there is a better alternative than to pretend that macroeconometric models work. Instead, economists must be more forthcoming about what they can and cannot estimate. For example, the Congressional Budget Office can reasonably estimate the effect of economic and policy scenarios on components of the government's budget, including taxes and spending. However, it cannot reasonably estimate the effect of tax and spending changes on the overall economy. The CBO adds value to policy makers by “scoring” the impact of policies on the budget. However, the “scoring” of policies in terms of GDP growth or jobs saved is of no value. The CBO should simply refuse to do it, and the consulting firms that purport to provide such estimates should be regarded as the charlatans they are.
Until the press, the public, and policy makers understand the utter unreliability of macroeconometric estimates of the impact of policies on employment and growth, the answers provided by the usual suspects are worse than nothing. They give policy makers the illusion of precise control over the economy, based on methods that are no more reliable than soothsaying or entrail-reading.
Arnold Kling is a member of the financial markets working group at the Mercatus Center at George Mason University. This article is adapted from “Macroeconometrics: The Science of Hubris,” published in Critical Review, Volume 23 (1-2).
FURTHER READING: Kling also writes “Prosperity, Depression, and Progress,” “Putting Mr. Market on the Couch,” and “What’s Stalling the Next Economic Revolution?” John H. Makin discusses “The Limits of Monetary and Fiscal Policy.”
Image by Rob Green | Bergman Group