Predicting a Fiscal Crisis, Part 1

Consider Figure 1. It compares two outlooks on the federal budget, both published in 2019:

Figure 1

Sources of raw data: OMB, CBO

The OMB forecast at the time did not stretch beyond 2024; the extension is a simple extrapolation based on “backward engineering” of the forecast.

At the time, the OMB had its own staff of economists with all their technical resources for quantitative analysis at their disposal. They were also heavily backed up by the forecasters at the Heritage Foundation, who have comparably generous access to computerized rigor in their work.

The Heritage Foundation relied in part on its contacts with the OMB when it marketed itself as one of the most influential think tanks in the world. More than that, though, the combination of the OMB and Heritage resources should have been enough to produce a substantially higher level of accuracy in budget forecasting than what the CBO mustered. After all, the CBO can reasonably be said to come up short in terms of technical sophistication. For one, they do not rely on econometrics in their forecasting.

Yet as Figure 1 shows, the two forecasts could hardly differ more, despite analyzing the same variable.

How is this possible? Before we approach this question, it is worth noting that in other outlooks or forecasts, the CBO and OMB agree on key variables pertaining to the federal budget. However, this only reinforces the question: how can they differ so dramatically as in Figure 1?

We are not going to answer this question from a technical viewpoint; it would not be possible to do anything of the kind without full access to the resources that all the outfits involved have at their disposal. Instead, we are going to approach the question from a more fundamental angle, namely the very nature of forecasting itself.

In a 2016 paper published by the Tax Policy Center, Rudolph G Penner explains:

Forecasting is a perilous activity. Forecasters often make big mistakes, whether they are forecasting the economy, the weather, or the outcome of the World Series. Budget forecasts are no better than any other and revenue forecasts are particularly difficult. Errors occur even when forecasting with a one-year time horizon.

He then asks what the consequences are of this for long-term forecasts: are those that run over a couple of decades “totally worthless”?

Plainly speaking: yes, they are. Penner, however, notes that long-term forecasters have an advantage over those doing short-term predictions:

In the short term, the economy is buffeted by the turbulence of business cycles, oil shocks, political upheavals, droughts, etc. Over the longer term, however, more fundamental forces exert themselves and there is some tendency for variables to return to long-term trends. Regression to the mean is the long-term forecaster’s best friend and policy makers often react to surprises with policy responses that keep variables within bounds.

He concedes that long-term forecasting is not more accurate, but he also offers an example of why the long term should be easier to predict: the ratio of tax revenue to GDP, which is characterized by “remarkable constancy”. However, it is actually not a good example – on the contrary. It is an arithmetic truth that tax revenue varies with GDP, for the simple reason that taxes are proportionate to economic activity. In other words, the ratio is an institutional constant and therefore essentially pointless to forecast.

It is important to note this, because it helps explain the perils of economic forecasting in general and of fiscal forecasting in particular. While the revenue-GDP ratio is close to trivial, the actual numerical values of the two variables are close to impossible to predict with any reasonable level of accuracy, especially when that ratio is used in the forecasting of the government budget. The counterpart variable, spending, is defined independently of revenue and, at least to some degree, independently of GDP.
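The near-triviality of the revenue-GDP ratio can be illustrated with a short sketch. All numbers below are hypothetical: with a fixed effective tax rate, the ratio is constant by construction no matter how erratically GDP itself moves – it is the levels, not the ratio, that are hard to forecast.

```python
import random

random.seed(42)

# Hypothetical effective tax rate -- the "institutional constant"
TAX_RATE = 0.17

# Simulate an unpredictable GDP path: 4% trend growth plus random shocks
gdp = [20_000.0]  # billions, illustrative starting level
for _ in range(10):
    shock = random.uniform(-0.03, 0.03)
    gdp.append(gdp[-1] * (1 + 0.04 + shock))

# Revenue is proportionate to economic activity by assumption
revenue = [TAX_RATE * y for y in gdp]

# The ratio is constant by construction -- pointless to "forecast"...
ratios = [r / y for r, y in zip(revenue, gdp)]
assert all(abs(x - TAX_RATE) < 1e-12 for x in ratios)

# ...while the levels themselves drift with every shock
print(f"GDP in year 10: {gdp[-1]:,.0f}; revenue: {revenue[-1]:,.0f}")
```

The constancy Penner observes, in other words, tells us nothing about where either variable will actually land.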

This is not the place to delve more deeply into the details of how these two variables are defined. The point, instead, is one of the perils of forecasting for the purposes of policy making. It is often expected among legislators, governors and presidential staff with fiscal-policy responsibilities, that highly sophisticated forecasting methods – the processing of large amounts of statistics through highly complex models – will yield highly accurate predictions.*

This is not true, in part for the reasons that Penner alludes to. There are two challenges with forecasting fiscal policy, the first of which is to forecast the interaction between the economy and – yes – fiscal policy. We know a great deal about the reaction patterns in the economy from any given change in taxes or government spending; the challenge is to take account of a series of changes over a longer period of time. Fiscal policy (and its de-facto appendix, monetary policy) often changes as a direct or indirect result of the effects of previous policy changes.

The second challenge is to take into account the non-economic incentives for fiscal-policy changes. The foremost among those incentives is known as “ideology”. Economists avoid this one like the plague, which limits their ability to understand and explain – a sine qua non of forecasting – the government budget.

At the heart of the forecasting problem is the pursuit of high numerical accuracy. For reasons mentioned above, that pursuit is a self-defeating endeavor, but this tail-chasing problem in forecasting is not limited to the government budget. It is in fact built into the very models that forecasters use. They can account for it to some degree, but only after the fact; updating models to account for self-generating errors is costly and time consuming, with the cost increasing exponentially with the level of sophistication.

This is one reason why highly complex, costly econometric models do not exhibit a higher level of accuracy than simple forecasting based on methodology from traditional political economy. The trick, instead, is to be content with a certain level of forecasting approximation. Or, as John Maynard Keynes once said:

It is better to be approximately right than exactly wrong.

He made this comment in response to a question as to why he was no fan of econometrics, at the time an emerging branch of economics. While the foundations of econometrics were laid in the 1920s, it did not gain much interest in economics until after the Great Depression. The sudden onset of a catastrophic economic crisis made many politicians criticize economists for not having warned them about the crisis; econometricians promised an exactness that would insure political leaders against the perils of the unforeseen.

Keynes was among the economists who refused to fall for the lure of econometrics. He was not alien to forecasting himself – on the contrary, being the father of macroeconomics, he explained how multipliers and accelerators, essential mechanics of the economy, actually worked. To do so he needed to rely heavily on statistics. He just wasn’t a fan of the kind of closed-system rigor that econometrics forced upon the dismal science.

There were many occasions on which Keynes was proven right in his skepticism of econometrics. During a conference in Cambridge, UK, he was approached by Jan Tinbergen, one of the world’s foremost econometricians at the time. Tinbergen, who later won the first Nobel Memorial Prize in economics, commented on Keynes’s estimate of the import elasticity in private consumption in the British economy. Explaining that he had matched Keynes’s number using his own methodology, Tinbergen congratulated Keynes on his estimate. Keynes’s response, delivered with a classic British smile, was: “I am glad to hear you found the right number.”

Keynes’s forecasting method was founded in traditional political economy, which takes into account economic institutions and the purposes of those institutions. There is no way to account for those purposes in regression-based forecasting. Therefore, it is also next to impossible for the econometrician to explain the variables, trends and economic activity he is forecasting. To bring us back to the federal budget, this is why Penner tells us that:

The most important single force driving the debt-GDP ratio upward in essentially all long-term projections is the rapid growth of the elderly population. It propels the growth of three large spending programs—Social Security, Medicare and Medicaid—to the point that their spending growth exceeds that of the rest of the budget and the GDP. 

The growth of the elderly population, while visible in demographic data, does not obviously explain the growth in Social Security, Medicare and Medicaid costs. To begin with, the share of total federal spending that these three programs account for does not correlate with the deficit. In 1970, when we were just at the beginning of our permanent budget deficit, Social Security, Medicare and Medicaid accounted for 23.1 percent of total federal spending. In 1980 that share had risen to 30.9 percent, where it remained for the next decade.

By 2000 the share had risen to 36.9 percent, but that correlates in large part with the SCHIP expansion of Medicaid under Clinton. By 2010, Social Security, Medicare and Medicaid consumed 37.2 percent of the federal budget.

The next leap in budget share came with Medicaid Expansion, which helped push this program trio above 40 percent of the federal budget.

In other words, it is not the aging population that causes the budget deficit. To further demonstrate this, let us do an experiment. Suppose we adjust Social Security spending per eligible capita – every person 65 or older – for inflation, but we do it in the “opposite direction”: we allow Social Security benefits to increase strictly along the lines of the consumer price index. After calculating Social Security spending per eligible person, we replace its growth rate since 1981 (the year from which we have reliable, appropriate demographic data) with a growth rate identical to annual CPI.

The outcome is striking: in 2020, when Social Security paid out $1,142.4 billion, the CPI-based method would have landed the cost of the program at $874 billion. This is a reduction of 23.5 percent.

This means, bluntly, that either the standard of living on Social Security has risen, or the entitlement base has expanded. Given the direction of Social Security reforms in the past 40 years, the latter explanation is unlikely. This leaves us with the former, namely that retirees cash out bigger checks, adjusted for inflation, than Social Security retirees did 40 years ago.

There is a technical explanation for this, which I have accounted for in one of the chapters in my book about how to reform away the welfare state. The chapter, which started its life as a paper I presented at a couple of seminars for Republican members of Congress and staffers, as well as retirement investors, back in 2006 and 2007, demonstrates why Social Security is inherently insolvent. I present both technical and analytical evidence of this, with the short story being that individual benefits grow faster than the tax base supposed to pay for them.

In other words, the reason why the program is bound for insolvency is to be found inside the program. Even if we had a child-birth explosion, it would not change the fact that the program is insolvent by design.

As for Medicare, its costs have increased on par with the rising cost of medical technology. Figure 2 shows actual total Medicare costs since 1981 and the same costs if they had grown strictly on par with the cost increases for medical technology:

Figure 2

Sources of raw data:
Office of Management and Budget (Medicare); Department of Health and Human Services (NHE expenditures)

Assuming that the same is true for Medicaid, it is clear that the costs of these two federal health-insurance programs are driven by the costs of the services that eligible citizens are entitled to. In other words, the cost problem is the entitlement program, not the demographics of the eligible population.

The same holds true for Social Security: eligible individuals earn the entitlement to benefits based on a formula written into the program itself. So long as that formula remains intact, the costs of the program will outpace its tax base regardless of what the American population pyramid looks like.
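As a toy illustration of this point – emphatically not the author’s actual model – consider two index series, one for benefit outlays and one for the payroll tax base, with hypothetical growth rates chosen so that benefits grow slightly faster. The divergence is mechanical and independent of the population pyramid:

```python
# Toy sketch of "insolvent by design": if outlays grow at g_b while the
# payroll tax base grows at a slower g_t, the outlay-to-base ratio diverges
# regardless of demographics. Both growth rates are hypothetical.
G_BENEFITS = 0.045   # hypothetical annual growth of total benefit outlays
G_TAXBASE  = 0.035   # hypothetical annual growth of the payroll tax base

outlays, tax_base = 100.0, 100.0   # index both to 100 in the base year
ratio_path = []
for year in range(75):             # a typical long-term projection horizon
    ratio_path.append(outlays / tax_base)
    outlays  *= 1 + G_BENEFITS
    tax_base *= 1 + G_TAXBASE

# The ratio rises monotonically: benefits outpace the base every single year
assert all(a < b for a, b in zip(ratio_path, ratio_path[1:]))
print(f"Outlays relative to tax base after 75 years: {ratio_path[-1]:.2f}x")
```

No demographic windfall changes the sign of the gap; it only changes how fast the ratio climbs.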

Which brings us back to the forecasting problem. If we understand demographics to be the driving force behind the budget deficit, we will predict its path based on that premise. If we understand entitlements to be the driving force, we will predict the trajectory of spending – and therefore of the deficit – based on the cost-driving mechanisms in those entitlement programs.

In conclusion: the difference in forecasting that we saw in Figure 1 is an example of how institutional variables and policy preferences can make all the difference in the world to how a forecast predicts even the not-so-distant future. Coming articles will address this in more detail.

*) One of my professors in grad school, a former Bank of England chief economic forecaster, explained that the laptop he had in front of him allowed him to process as much data as his 20 employees had done in the 1960s, and much faster. But the forecasts that he made with the laptop were not more accurate.