On Tuesday, the Bureau of Labor Statistics will report the CPI (along with endless other data) for March. Currently, the consensus estimate calls for +0.1%, and +0.1% ex-food-and-energy. This release will generate the usual irritation among conspiracy theorists who believe the government is monkeying with the inflation numbers for its own nefarious ends. I have previously explained why it is that inflation tends to feel faster than it actually is, and I have regularly debunked the claim by certain conspiracy-minded individuals that inflation has been running about 5% faster than the “official” mark since the early 1980s. However, today I want to point out another reason that, right now, we will tend to notice that inflation is rising faster than 0.1% per month, and that reason involves the issue of seasonal adjustment.
The point of seasonal adjustment is to remove regular, cyclical influences so that we can see if the underlying trend is doing anything interesting. Consider temperature. Is it particularly helpful for you as a meteorologist to know that the average temperature in April has been higher than the average temperature in January? Of course not, because we know that April is always warmer than January. Hence, with temperature we ask whether April was warmer than a typical April.
Closer to the point, consider gasoline. The national average gasoline price has risen in 61 of the last 66 days, as the chart below (Source: Bloomberg) illustrates.
Yes, if you’re noticing that gasoline prices have been rising you are not alone, and it is not an illusion! But should we worry about this rapid acceleration in gasoline? Does this necessarily presage spiraling inflation? Bloomberg offers an easy way to look at the seasonality question (we formerly had to do this by hand). The following chart shows the change in gasoline prices (in cents) since December 31st for each of the last four years, for the 5-year average (the heavy, yellow line) and for this year (the white line).
You can see that the rise from late January into April is not only normal, but the scale of the increase is just about the same this year as for the prior four years – what was unusual was that prices didn’t start rising until February.
Now, this particular seasonal pattern is important to inflation-watchers and TIPS traders because the volatility of gasoline prices is an important part of volatility in the overall price dynamic. In fact, it is important enough that if I take the average line from the gasoline chart above and overlay it with the official CPI seasonal adjustment factors from the BLS, you can see the ghost of the former in the latter (see chart, source Enduring Investments).
Now, the seasonal adjustment factors for the CPI as a whole are less dramatic (closer to 1, in the chart above, if you look at the right-hand scale compared to the left-hand scale) than are the factors for gasoline, but that makes sense since gasoline is only a small part – albeit a really important part – of the consumption basket of the average consumer. And the BLS methodology is a lot more sophisticated than the simple average-of-the-last-x-years approach I have taken here. But this should be good enough for you to grasp the intuition.
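The simple average-of-the-last-x-years approach I just described can be sketched in a few lines. This is a toy illustration with made-up monthly gasoline price changes, purely to show the intuition; the BLS actually uses a far more sophisticated ARIMA-based seasonal adjustment procedure.

```python
# Naive seasonal factor: for each calendar month, average the price change
# observed in that month over the past few years. (Hypothetical data below.)
history = {
    1: [0.5, 0.2, 0.8, 0.1],   # January % changes, one per year
    2: [2.0, 1.5, 2.5, 1.8],   # February
    3: [3.0, 2.8, 3.5, 3.2],   # March: prices typically rise into spring
    # ...remaining months omitted for brevity
}

def naive_seasonal_factor(month):
    """Average historical change for this calendar month."""
    changes = history[month]
    return sum(changes) / len(changes)

def seasonally_adjust(raw_change, month):
    """Report the raw change net of the typical seasonal move."""
    return raw_change - naive_seasonal_factor(month)

# A raw +3.1% March rise is unremarkable: the typical March rise here is
# about +3.1%, so the seasonally adjusted change is roughly zero.
print(seasonally_adjust(3.1, 3))
```

The point is simply that a big raw springtime rise nets out to nothing once the typical springtime rise is subtracted.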
What this means is that when the BLS reports tomorrow that gasoline prices didn’t add anything to overall inflation in March, you should recognize that that does not mean that gasoline prices didn’t rise in March. It means that they didn’t rise significantly more or less than the average factor the BLS is assuming. Most of all, it doesn’t mean that the BLS is monkeying with the data to make it seem lower. The product of the seasonal adjustment factors is (approximately) 1.0, which means that what the BLS takes away in the springtime, to report inflation numbers lower than would be anticipated given a raw sampling of store prices, it will give back in the late fall and winter, reporting inflation numbers higher than would be anticipated from a cursory glance at store shelves. What is left, hopefully, is a less biased view of what is happening with the price level generally.
Where you can see this effect most clearly is in the difference between the seasonally-adjusted number that is reported and the rise in the NSA figure that is used to adjust inflation-indexed bonds like TIPS. While the consensus calls for a +0.1% rise in headline CPI, forecasters expect the NSA CPI (the price level) to rise from 234.781 to 236.017, which is a rise of +0.5%. So yes – if it feels like inflation is suddenly rising at a 6% annualized pace, that is because it is. But fear not, because that will slow down later in the year. Probably.
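The arithmetic behind that gap, using the consensus figures just cited, is straightforward:

```python
# Consensus figures cited above: the NSA CPI level is forecast to rise from
# 234.781 to 236.017, while the seasonally adjusted print is only ~+0.1%.
nsa_old, nsa_new = 234.781, 236.017

monthly_nsa = nsa_new / nsa_old - 1          # raw (unadjusted) monthly change
annualized = (1 + monthly_nsa) ** 12 - 1     # compounded to an annual pace

print(f"NSA monthly change: {monthly_nsa:+.2%}")   # roughly +0.5%
print(f"Annualized pace:    {annualized:+.1%}")    # roughly +6.5%
```

That raw half-percent month is what consumers actually see at the register, even while the adjusted headline reads +0.1%.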
 The summary of that argument: we know that wages have increased roughly 142% since the early 1980s – average hourly earnings was $8.45 in April 1984 and is $20.47 now, and this “feels about right” to most people. Against this, the CPI has risen 128%, meaning that our standard of living “should” have improved a little bit since then, but not much (although any individual may be doing somewhat better or worse). But if prices instead of rising at 2.8%/year had risen at 7.8%/year, prices in aggregate would have risen 851% versus a 142% increase in wages, and we would all be living in absolute squalor compared to our parents. This is offensively and obviously wrong.
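The compounding in that argument is easy to check; the sketch below uses the figures cited above and assumes roughly 30 years between April 1984 and the time of writing.

```python
# Verify the back-of-the-envelope compounding from the argument above,
# assuming ~30 years since April 1984.
years = 30

wage_growth = 20.47 / 8.45 - 1   # average hourly earnings, now vs. April 1984
print(f"Wage growth: {wage_growth:.0%}")   # ~142%

# A ~2.8%/yr pace compounds to roughly the official ~128% rise in CPI...
print(f"2.8%/yr over {years} yrs: {1.028 ** years - 1:.0%}")
# ...while adding the claimed extra 5%/yr (7.8% total) implies roughly +851%,
# which would put price growth wildly above the 142% growth in wages.
print(f"7.8%/yr over {years} yrs: {1.078 ** years - 1:.0%}")
```

An extra 5% per year compounded over three decades is not a subtle difference; it is nearly a factor of ten in the price level, which is why the claim fails the smell test.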
Whether the evaporation of popular Bitcoin marketplace Mt. Gox (which may have nothing to do with the Gox in Dr. Seuss’s beloved One Fish, Two Fish, Red Fish, Blue Fish) is due to fraud, hacking, incompetence, or some combination of all three – it appears it may have been hacked three years ago, and have been insolvent since then before vanishing from the Internet last night – doesn’t really matter. Either way, investors/speculators with money at Mt. Gox got MFGlobaled. The money wasn’t segregated (if it was money at all, and if it can be segregated at all), there was no audit (if there can be an audit trail for something that doesn’t have a known origin or destination), and the firm was not overseen in any fashion (if it is even possible to oversee something that exists mainly because it is difficult to oversee).
Like Schrödinger’s cat, it was kinda there, until someone actually looked and discovered it was dead.
I have carefully eschewed writing about Bitcoin in the past, though people have asked me to do so. I chose not to write about it because I had no wish to be filleted by one side or the other in the argument. But what I would have said would have been a series of simple observations that have nothing to do with how Bitcoin is mined, managed, or mishandled:
- This is hardly the first currency that has been outside of government control. Currencies existed outside of government control before they existed under government fiat.
- Historically speaking, there is a reason that government-sponsored currencies won, and it wasn’t because they were backed with gold. It was because people trusted the government when it said the currency was backed with gold.
- Trusted banks were issuers of currency for a long time. The coin of the realm has always been trust – and even if a currency is limited, or backed by limited metal, or whatever, you still need trusted institutions through which the coin flows, or it doesn’t work. Where is the trusted institution in Bitcoin’s case?
- So what’s the big deal?
This isn’t schadenfreude. I don’t care if Bitcoin succeeds or not; I don’t think its success or failure has anything to do with whether fiat currencies succeed or blow up. I don’t think Bitcoin is a “safe haven” any more than gold is a safe haven.
But at least I can touch gold. At least I know that gold will have some value in exchange, whereas I don’t know that Bitcoin will, tomorrow. And now, indeed it may not. Surely no institutional investor can now invest in Bitcoin deposits without answering the following question to the satisfaction of its board: “How can we be sure that our money won’t go the way of Mt. Gox?” And institutional acceptance is a huge hurdle for the future success of this substitute currency. Ditto firms using Bitcoin for transactions – a daylight overdraft that can go to zero overnight is a big risk for a bank.
And so, what I think was always the not-so-subtle problem for Bitcoin or any crypto-currency remains: for it to succeed, a trusted institution needs to be involved. Trust can’t be distributed across a network. And if an institution is involved, then the idea of a “people’s currency” loses weight. Bitcoin wasn’t the first of these attempts, and it won’t be the last, but in my mind that is the challenge. You can’t make money that only is used by the credulous and the gullible. It must be used by the incredulous and the suspicious. It is adoption by those people which defines the success or failure of a currency.
(Unfortunately, this puts certain elements at my alma mater in the former category. In our January 2014 alumni magazine was an article on Bitcoin. In the information bar “Bitcoin Dos and Don’ts”, the first point was “Do your research first! More information is available on Bitcoin.it, a wiki maintained by the bitcoin community. For Americans, the most popular and trustworthy place to buy and sell Bitcoins has historically been mtgox.com.” Whoops! Do your research first – popular does not imply trustworthy unless the thing is popular with people whose trust is hard to win!)
 “I like to box. How I like to box! So, every day, I box a Gox. In yellow socks I box my Gox. I box in yellow Gox box socks.”
Here is a post from Sober Look that has some really good charts on the changing asset mix at US banks. I was a little surprised that they didn’t point out the obvious connection in the charts, although they do make some key points in a previous post.
To summarize: the charts show that the loan-to-deposit ratio in the banking system recently hit a 35-year low, and that the proportion of cash on the balance sheet of banks has gone from maybe 5% to around 20% (eyeballing it) in the last ten years.
Obviously, these two facts are not unconnected, since loans and cash are both assets to banks. The reason for the shift from loans to cash is very simple: QE. Banks don’t want to hold as much cash (reserves) as they are carrying, but the alternative is to lend it to people in sub-optimal loans – that is, where the interest rate charged does not compensate for the risk that the loan will not be paid back, so that the lending has a negative NPV. Moreover, the cash itself has a positive return because the Fed is paying interest on excess reserves, so that the lending has a higher hurdle to achieve than it would if this were just “normal” cash or reserves.
Understanding this dynamic is really important. So here’s how this works: if interest rates rise, but reserves have the same yield, then lending becomes more profitable and loans will increase – that is, the money multiplier will rise, with less money in the vault and more money in transactional accounts. If, on the other hand, the Fed raises the interest on excess reserves while lending rates stay unchanged, then even fewer loans will be made and banks will hold more cash relative to loans. This is one mechanism by which higher interest rates initially encourage higher inflation.
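The lending decision described above can be reduced to a stylized one-liner. All of the rates below are hypothetical, chosen only to illustrate the mechanism, not to estimate actual bank economics.

```python
# Stylized bank decision: make the marginal loan, or leave reserves at the Fed?
# All rates are hypothetical illustrations of the mechanism described above.

def lend_or_park(loan_rate, expected_loss, ioer):
    """Lend only if the risk-adjusted loan return beats interest on excess reserves."""
    return "lend" if loan_rate - expected_loss > ioer else "park at Fed"

# Low lending rates plus a 25bp IOER: the marginal loan doesn't clear the hurdle
print(lend_or_park(loan_rate=0.030, expected_loss=0.029, ioer=0.0025))

# Market lending rates rise while IOER stays fixed: the same loan now gets made
print(lend_or_park(loan_rate=0.045, expected_loss=0.029, ioer=0.0025))
```

The gap between the lending rate and the IOER is the whole game: widen it and the multiplier rises, narrow it and banks sit on cash.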
(And yes, while the total amount of reserves in the system is fixed, the total amount of loans is not, so while the Fed controls the former, it does not control the latter except indirectly.)
So, consider the “exit” strategy. As interest rates rise, the multiplier will increase unless the Fed hikes interest on excess reserves. But since market interest rates move more flexibly, more rapidly, and often further than do policy rates, this probably means the multiplier will be determined mostly by the market (I wonder whether things would change if the Fed declared the IOER to be “10-year yields minus 250bps”). The gap is the thing. And, if Yellen actually cuts the IOER to zero, as she has intimated is possible, then the multiplier would rise…and we don’t know by how much.
On the flip side, if the Fed tapers QE to zero, and lending rates fall, then the multiplier would tend to fall further because that gap narrows. In that case, you really could get a disinflationary scenario…though I am skeptical that long rates can fall very much when public debt is so high and the Fed is withdrawing its support for the bond market. Still, a crisis could do it. To be clear: you’d need the Fed to stop adding reserves, to neglect the IOER – or increase it – and long rates to decline substantially (at least 100bps, say). So if you are a deflationist, there are your signposts. I don’t anticipate any of that happening, except that I imagine they will screw up the IOER strategy – and they could screw that up in either direction.
And by the way, I don’t think any of that would affect inflation much in 2014, since higher housing prices are already going to be pressing core inflation higher. But it could affect 2015.
However, I digress from the other point I wanted to make that was suggested by the Sober Look article, and that is this: it continues to amaze me how well bank stocks are trading. I’ve been saying this for years – which helps to illustrate that I am a strategic investor, not a twitchy tactical guy. Return on equity equals profit margin (profit/revenue), times asset turnover (revenue/assets), times leverage (assets/equity), and for banks all three of these components are under pressure. Profit margin is under pressure from the movement of more products to electronic trading and from increasing legal bills at banks (the FX trading scandal is the latest threat of multibillion-dollar fines, adding to the LIBOR scandal and probes of the gold and silver price-fixing system as sources of legal headaches for banks). Banks have been forced via the crisis to shed leverage, as a chart I recently ran illustrated. And low interest rates combined with large amounts of cash compared to loans on the balance sheet pressure the asset turnover statistic. So it isn’t surprising that bank ROEs are low (see chart of the NASDAQ bank index ROEs, source Bloomberg). What is surprising is that they even got this high, and market pricing seems to anticipate that they’ll keep rising. Bank stocks have actually been outperforming the S&P since late 2011, and their P/E ratios are essentially where they have always been, excluding the spike when earnings collapsed in the crisis, causing P/Es to skyrocket (see chart, source Bloomberg).
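That three-factor decomposition is the classic DuPont identity, and it is worth seeing how pressure on all three terms compounds. The figures below are hypothetical, chosen only to illustrate the mechanics, not actual bank data.

```python
# DuPont identity: ROE = (profit/revenue) * (revenue/assets) * (assets/equity)
# Hypothetical figures chosen only to show how pressure on each term compounds.

def roe(profit, revenue, assets, equity):
    margin   = profit / revenue    # profitability
    turnover = revenue / assets    # asset efficiency
    leverage = assets / equity     # balance-sheet gearing
    return margin * turnover * leverage   # algebraically, just profit/equity

# A fat-margin, high-leverage bank of the pre-crisis sort
print(f"{roe(profit=20, revenue=100, assets=2000, equity=80):.1%}")

# Thinner margin (legal bills), more idle cash (lower turnover), deleveraged
print(f"{roe(profit=12, revenue=100, assets=2500, equity=200):.1%}")
```

Even modest erosion in each term multiplies into a dramatically lower ROE, which is why simultaneous pressure on all three is so potent.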
The biggest surprise of the day on Tuesday did not come from new Fed Chairman Janet Yellen, nor from the fact that she didn’t offer dovish surprises. Many observers had expected that after a mildly weak recent equity market and slightly soft Employment data, Yellen (who has historically been, admittedly, quite a dove) would hold out the chance that the “taper” may be delayed. But actually, she seemed to suggest that nothing has changed about the plan to incrementally taper Fed purchases of Treasuries and mortgages. I had thought that would be the likely outcome, and said so yesterday when I supposed “she will be reluctant to be a dove right out of the gate.”
The surprise came in the market reaction. Since there had been no other major (equity) bullish influences over the last week, I assumed that the stock market rally had been predicated on the presumption that Yellen would give some solace to the bulls. When she did not, I thought stocks would have difficulty – and on that, I was utterly wrong. Now, whether that means the market thinks Yellen is lying, or whether there is some other reason stocks are rallying, or whether they are rallying for no reason whatsoever, I haven’t a clue.
I do know though that the DJ-UBS commodity index reached its highest closing level in five months, and that commodities are still comfortably ahead of stocks in 2014 even with this latest equity rally. This rally has been driven by energy and livestock, with some precious metals improvements thrown in. So, lest we be tempted to say that the rally in commodities is confirming some underlying economic strength, reflect that industrial metals remain near 5-year lows (see chart, source Bloomberg, of the DJUBS Industrial Metals Subindex).
One of the reasons I write these articles is to get feedback from readers, who forward me all sorts of articles and observations related to inflation. Even though I have access to many of these same sources, I don’t always see every article, so it’s helpful to get a heads up this way. A case in point is the article that was on Business Insider yesterday, detailing another quirky inflation-related report from Goldman Sachs: http://www.businessinsider.com/goldman-fed-should-target-wage-growth-2014-2
Now, I really like much of what Jan Hatzius does, but on inflation the economics team at Goldman is basically adrift. It may be that the author of this article doesn’t have the correct story, but if he does then here is the basic argument from Goldman: the Fed shouldn’t target inflation or employment, but rather wage growth, because wage growth is a better measure of the “employment gap” and will tie unemployment and inflation together better.
The reason the economists need to make this argument is that “price inflation is not very responsive to the employment gap at low levels of inflation,” which is a point I have made often and most recently in my December “re-blog” series.
But, as has happened so often with Goldman’s economists when it comes to inflation, they take a perfectly reasonable observation and draw a nonsensical conclusion from it. The obvious conclusion, given the absolute failure of the “employment gap” to forecast core price inflation over the last five years, is that the employment gap and price inflation are not particularly related. The experimental evidence of that period makes the argument that they are related – itself a perversion of Phillips’ original argument, which related wages and unemployment – extremely difficult to support. Hatzius et al. clearly now recognize this, but they draw the wrong conclusion.
There is no need to tie unemployment and inflation together …unless you are a member of the bow-tied set, and really need to calibrate parameters for the Taylor Rule. So it isn’t at all a concern that they aren’t, unless you really want your employment gap models to spit out useful forecasts. Okay, so if you can’t forecast prices, then use the same models and call it a wage forecast!
But the absurdity goes a bit farther. By suggesting that the Fed set policy on the basis of wage inflation, these economists are proposing a truly abhorrent policy of raising interest rates simply because people are making more money. Wage inflation is a good thing; end product price inflation is a bad thing. Under the Goldman rule, if wages were rising smartly but price inflation was subdued, then the Fed should tighten. But why tighten just because real wages are increasing at a solid pace? That is, after all, one of society’s goals! If the real wage increase came about because of an increase in productivity, or because of a decrease in labor supply, then it does not call for a tightening of monetary policy. In such cases, it is eminently reasonable that laborers take home a larger share of the real gains from manufacture and trade.
On the other hand, if low nominal wage growth was coupled with high price inflation, the Goldman rule would call for an easing of monetary policy…even though that would tend to increase price inflation while doing nothing for wages. In short, the Goldman rule should probably be called the Marie Antoinette rule. It will tend to beat down wage earners.
Whether or not the Goldman rule is an improvement over the Taylor Rule is not necessarily the right question either, because the Taylor Rule is not the right policy rule to begin with. Returning to the prior point: the employment gap has not demonstrated any useful predictive ability regarding inflation. Moreover, monetary policy has demonstrated almost no ability to make any impact on the unemployment rate. The correct conclusion here is that a policy rule should not have an employment gap term at all. The Federal Reserve should be driven by prospective changes in the aggregate price level, which are in turn driven in the long run almost entirely by changes in the supply of money. So it isn’t surprising that the Goldman rule can improve on the Taylor Rule – there are a huge number of rules that would do so.
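For readers who want the reference point: the standard Taylor (1993) rule, with its usual one-half coefficients, looks like this. The sketch below uses the textbook parameterization (a 2% neutral real rate and 2% inflation target), which is an assumption on my part, not Goldman’s or the Fed’s calibration.

```python
# The standard Taylor (1993) rule with its textbook coefficients:
#   i = r* + pi + 0.5*(pi - pi_target) + 0.5*gap
# The argument above is that the gap term carries no predictive freight.

def taylor_rate(inflation, gap, r_star=0.02, pi_target=0.02):
    return r_star + inflation + 0.5 * (inflation - pi_target) + 0.5 * gap

# Inflation at the 2% target with no gap: the rule returns the 4% neutral rate
print(f"{taylor_rate(inflation=0.02, gap=0.0):.1%}")
# Inflation at 3%: the rule responds more than one-for-one (the Taylor principle)
print(f"{taylor_rate(inflation=0.03, gap=0.0):.1%}")
```

Dropping the gap term (setting its coefficient to zero) leaves a pure inflation-reaction rule, which is closer to what the argument above implies a central bank should run.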
Since I wrote a blog post in early December on “The Effect of the Affordable Care Act on Medical Care Inflation,” in which I lamented that “I haven’t seen anything of note written about the probable effect of the implementation of the Affordable Care Act on Medical Care CPI,” several things have come to my attention. This is a great example of one reason that I write these articles: to scare up other viewpoints to compare and contrast with my own views.
In this case, the question is not a trivial one. Personally, I approach the issue from the perspective of an inflation wonk, but the ham-handed rollout of the ACA has recently spawned greater introspection on the question for purely political reasons. This is awkward territory, because articles like that by Administration hack Jason Furman in Monday’s Wall Street Journal do not further the search for actual truth about the topic. And this is a topic on which we should really care about a number of questions: how the ACA is affecting prices, how it is affecting health care utilization and availability, how it is affecting long-term economic growth, and so on. I will point out that none of these are questions that can be answered definitively today. My piece mentioned above speculated on possible effects, but we simply will not know for sure for a long time.
So, when Furman makes statements like “The 7.9 million private jobs added since the ACA became law are themselves enough to disprove claims that the ACA would cause the sky to fall,” we should immediately be skeptical. It should be considered laughably implausible to suggest that Obamacare had a huge and distinguishable effect before it was even implemented. Not to mention that it is very bad science to take a few near-term data points, stretching only for a couple of years in a huge and ponderous part of the economy, and extrapolate trends (this is the error that Greenspan made in the 1990s when he heralded the rise in productivity growth that was eventually all revised away when the real data was in). Furman also conflates declines in the rate of increase of spending with decelerating inflation – but changes in health care spending include price changes (inflation) as well as changes in utilization. I will talk more about that in a minute, but suffice it to say that the Furman piece is pure politics. (A good analysis of similar logical fallacies made by a well-known health care economist whom Furman cites is available here, from Forbes.)
I want to point you to another piece (which also has flaws and biases but is much more subtle about it), but before I do let’s look at a long-term chart of medical care inflation and the spread of medical care inflation to headline inflation. One year is far too short a period to compare these two things, not least because one-time effects like pharmaceuticals losing patent protection or sequester-induced spending restraints can muddy the waters in the short run. The chart below (source: Enduring Investments) shows the rolling ten-year rise in medical care inflation and, in red, the difference between that and rolling ten-year headline inflation.
You can see from this picture that the decline in medical care inflation, and the tightening of the spread between medical care inflation and headline inflation, is nothing particularly new. Averaging through all of the year-to-year wiggles, the spread of medical care has been pretty stable since the turn of the century (which, since this is a 10-year average, means it has been pretty stable for a couple of decades). Maybe what we are seeing is actually the anticipation of HillaryCare? (Note: that is sarcasm.)
Now, the tightening relative to overall inflation is a little exaggerated in that picture, because for the last decade or so headline inflation has been somewhat above core inflation due to the persistent rise in energy prices throughout the ‘00s. So the chart below (source: Enduring Investments) shows the spread of medical care inflation over core inflation, which demonstrates even more stability and even less reason to think that something big and long-term has really changed. At least, not that we would already know about.
The other piece I mentioned, which is more worth reading (hat tip Dr. L) is “Health Care Spending – A Giant Slain or Sleeping?” in the New England Journal of Medicine. The authors here include David Cutler, whom Forbes suspected was tainting his views with politics (see link above), so we need to be somewhat cautious about the conclusions but in any event they are much more nuanced than in the Furman article and the article makes a number of good points. And, at the least, the authors distinguish between spending on health care and inflation in health care. A few snippets, and my remarks:
- “Estimates suggest that about half the annual increase in U.S. health care spending has resulted from new technology. The role of technology itself partly reflects other underlying forces, including income and insurance. Richer countries can afford to devote more money to expensive innovations.” This is an interesting observation that we ought to think carefully about when professing a desire to “bend the cost curve.” If we are reining in inflation, that’s a good thing. But is it a good thing to rein in innovation in health care? I don’t think so.
- The authors, though, clearly question the value of technological innovation. “The future of technological innovation is, of course, unknown. But most forecasts do not call for a large increase in the number of costly new treatments… some observers are concerned that a wave of costly new biologic agents (for which generic substitutes are scarce) will soon flood the market.” Heaven forbid that we get new treatments! “The use of cardiac procedures has slowed as well.” This is a good thing?
- “Health spending has clearly been associated with health improvements, but analysts differ on whether the benefits justify the cost.” Personally, it makes me uncomfortable to leave this question in the hands of the analysts. If the benefits don’t justify the cost, and the market was free, then no one will pay for those improvements. It’s only with a highly regulated market – replete with “analysts” doing their cost/benefit analysis on health care improvements – that this even comes up.
- Some of the statistical argument is a little weak. “The recent reduction in health care spending appears to have been correlated with slower employment growth in the health care field; this suggests that such changes may continue.” I’m not sure that the causality runs that way. Surely tighter limits on what health care workers can earn might cause slower employment growth? That’s at least as plausible as the direction they are arguing.
That sounds very critical, but I point these things out mainly to make them obvious. Overall, the authors do a very good job of discussing the possible causes of the recent slowdown in health care inflation (although they focus inordinately on “the first 9 months of 2013”, a period during which we know the sequester impacted health care prices), give plenty of credit to reforms instituted well before ACA implementation, correctly distinguish between utilization and prices, and highlight some of the promising trends in health care costs – and yes, there are some! The authors are clearly supportive of the ACA, which I am not, but by and large they raise the salient questions.
It matters less if we instantly agree on the solution than that we agree on the questions.
Note: The following blog post originally appeared on March 12th, 2013 and is part of a continuing year-end ‘best of’ series, calling up old posts that some readers may have not seen before. I have removed some of the references to then-current market movements and otherwise cut the article down to the interesting bits. You can read the original post here.
I just finished a paper called “Managing Laurels: Liability-Driven Investment for Professional Athletes,” and I thought that one or two of the charts might be interesting for readers in this space.
An athlete’s investing challenge is actually much more like that of a pension fund than it is of a typical retiree, because of the extremely long planning horizon he or she faces. While a typical retiree at the age of 65 faces the need to plan for two or three decades, an athlete who finishes a career at 30 or 35 years of age may have to harvest investments for fifty or sixty years! This is, in some ways, closer to the endowment’s model of a perpetual life than it is to a normal retiree’s challenge, and it follows that by making investing decisions in the same way that a pension fund or endowment makes them (optimally, anyway) an athlete may be better served than by following the routine “withdrawal rules” approach.
In the paper, I demonstrate that an athlete can have both good downside protection and preserve upside tail performance if he or she follows certain LDI (liability-driven investing) principles. This is true to some extent for every investor, but what I really want to do here is to look at those “withdrawal rules” and where they break down. A withdrawal policy describes how the investor will draw on the portfolio over time. It is usually phrased as a proportion of the original portfolio value, and may be considered either a level nominal dollar amount or adjusted for inflation (a real amount).
For many years, the “four percent rule” said that an investor can take 4% of his original portfolio value, adjusted for inflation every year, and almost surely not run out of money. The approach behind this rule, based on a study by Bengen (1994) and treated more thoroughly by Cooley, Hubbard, and Walz in the famous “Trinity Study” in 1998, was to use historical sampling methods to determine the range of outcomes that would historically have resulted from a particular combination of asset allocation and withdrawal policies. For example, Cooley et al. established that given a portfolio mix of 75% stocks and 25% bonds and a withdrawal rate of 6% of the initial portfolio value, for a thirty-year holding period (over the historical interval covered by the study) the portfolio would have failed 32% of the time – conversely, a 68% success rate.
The Trinity Study produced a nice chart that is replicated below, showing the success rates for various investment allocations for various investing periods and various withdrawal rates.
Now, the problem with this method is that the period studied by the authors ended in 1995, and started in 1926, meaning that it started from a period of low valuations and ended in a period of high valuations. The simple, uncompounded average nominal return to equities over that period was 12.5%, or roughly 9% over inflation for the same period. Guess what: that’s far above any sustainable return for a developed economy’s stock market, and is an artifact of the measurement period.
I replicated the Trinity Study’s success rates (roughly) using a Monte Carlo simulation, but then replaced the return estimates with something more rational: a 4.5% long-term real return for equities (but see yesterday’s article for whether the market is currently priced for that), and 2% real for nominal bonds (later I added 2% for inflation-indexed bonds…again, these are long-term, in equilibrium numbers, not what’s available now which is a different investing question). I re-ran the simulations, and took the horizons out to 50 years, and the chart below is the result.
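A minimal version of that kind of Monte Carlo exercise can be sketched as follows. This is my own toy model, not the one in the paper; the 18% volatility figure and the normal-return assumption are illustrative, and the withdrawal rule is the inflation-adjusted constant-dollar rule of the Trinity Study.

```python
import random

def success_rate(real_return, vol, withdraw, years, trials=10000, seed=1):
    """Fraction of simulated portfolios surviving `years` of real withdrawals.

    Each portfolio starts at 1.0, earns a normally distributed real return
    each year, and pays out `withdraw` of the ORIGINAL value every year
    (an inflation-adjusted constant-dollar rule, as in the Trinity Study).
    """
    rng = random.Random(seed)
    survived = 0
    for _ in range(trials):
        wealth = 1.0
        for _ in range(years):
            wealth = wealth * (1 + rng.gauss(real_return, vol)) - withdraw
            if wealth <= 0:
                break
        else:  # never went broke
            survived += 1
    return survived / trials

# 4% real withdrawals over 30 years, all-equity, 18% volatility (illustrative):
# a 4.5% real return assumption yields far fewer survivors than the ~9% real
# return embedded in the 1926-1995 historical sample.
print(success_rate(0.045, 0.18, 0.04, 30))
print(success_rate(0.090, 0.18, 0.04, 30))
```

The exact success rates depend on the volatility and distribution assumptions, but the ordering does not: lowering the assumed real return from the backtest-era figure to a sustainable one slashes the survival rate.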
Especially with respect to equity-heavy portfolios, the realistic portfolio success rates are dramatically lower than those based on the “historical record” (when that historical record happened to cover a very cheerful investing environment). It is all very well and good to be optimistic, but the consequences of assuming a 7.2% real return sustained over 50 years when only a 4.5% return is realistic may be incredibly damaging to our clients’ long-term well-being, and may increase the chances of financial ruin to an unacceptably high level.
Notice that a 4% (real) withdrawal rate produces only a 68% success rate at the 30-year horizon for the all-equity portfolio! But the reality is worse than that, because a “success rate” doesn’t distinguish between the portfolios that failed at 30 years and those that failed spectacularly early on. It turns out that fully 10% of the all-equity portfolios in this simulation were exhausted by year 19. By contrast, 90% of the portfolios of 80% TIPS and 20% equities made it at least as far as year 30 (this isn’t shown on the chart above, which doesn’t include TIPS). True, those portfolios had only a fraction of the upside an equity-heavy portfolio would have in the “lucky” case, but two further observations can be made:
- Shuffling off the mortal coil thirty years from now with an extra million bucks in the bank isn’t nearly as rewarding as it sounds, while running out of money when you have ten years left to live truly sucks; and
- By applying LDI concepts, some investors (depending on initial endowment) can preserve many of the features of “safe” portfolios while capturing a significant part of the upside of “risky” portfolios.
The chart below shows two “cones” that correspond to two different strategies. For each cone, the upper line corresponds to the 90th percentile Monte Carlo outcome for that strategy and portfolio, at each point in time; the lower line corresponds to the 10th percentile outcome; the dashed line represents the median. Put another way, the cones represent a trimmed range of outcomes for the two strategies, over a 50-year time period (the x-axis is time). The blue lines represent an investor who maintains 80% in TIPS and 20% in stocks over the investing horizon, with a withdrawal rate of 2.5%. The red lines represent the same investor, with the same withdrawal rate, using “LDI” concepts.
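The mechanics of building such a cone are simple: simulate many wealth paths, then read off percentiles at each horizon. A minimal sketch, where the blended volatility and the normal-returns model are illustrative assumptions of mine, not the parameters behind the actual chart:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate real wealth paths for an 80% TIPS / 20% equity portfolio
# with a 2.5% real withdrawal rate (illustrative return assumptions).
n_paths, years = 10000, 50
mu = 0.20 * 0.045 + 0.80 * 0.02      # blended real return
sig = 0.05                            # assumed blended volatility
wealth = np.ones((n_paths, years + 1))
for t in range(1, years + 1):
    r = rng.normal(mu, sig, n_paths)
    wealth[:, t] = np.maximum(wealth[:, t - 1] * (1 + r) - 0.025, 0.0)

# The "cone": 10th, 50th (median), and 90th percentile at each horizon
p10, p50, p90 = np.percentile(wealth, [10, 50, 90], axis=0)
print(p10[30], p50[30], p90[30])   # trimmed range at the 30-year mark
```

Plotting `p10`, `p50`, and `p90` against time produces one cone; repeating the simulation under the second strategy’s rules produces the other.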
While this paper concerned investors such as athletes who have very long investing lives and don’t have ongoing wages that are large in proportion to their investment portfolios (most 35-year-old investors do, which tends to decrease their inflation risk), the basic concepts can be applied to many types of investors in many situations.
And they should be.
You can follow me @inflation_guy, or subscribe to receive these articles by email here.
Note: The following blog post originally appeared on April 4th, 2012 and is part of a continuing year-end ‘best of’ series, calling up old posts that some readers may have not seen before. I have removed some of the references to then-current market movements and otherwise cut the article down to the interesting bits. You can read the original post here.
I routinely deride economists who rely on the discredited notion that growth in excess of a nation’s productive capacity is what causes inflation – and, conversely, a surplus of productive capacity is what causes deflation. See, for example, here, here, and here. And that is just in the last month!
I want to point out that it isn’t that I don’t believe in microeconomics (where an increase in supply causes prices to fall and a decrease in supply causes prices to rise). I believe deeply in the supply-demand construct.
But the problem with applying these ideas to the macroeconomy is that people get confused with real and nominal quantities, and they think of the “productive frontier” of an economy as being one thing rather than a multi-dimensional construct.
When an economy reaches “productive capacity,” it isn’t because it has used up all of its resources. It is because it has used up the scarcest resource. Theory says that what should happen isn’t that all prices should rise, but that the price of the scarce resource should rise relative to the prices of other resources. For example, when labor is plentiful relative to capital, then what should happen is that real wages should stagnate while real margins increase – that is, because productivity is constrained by the scarce resource of capital, more of the economy’s gains should accrue to capital. And so Marx was right, in this sort of circumstance: the “industrial reserve army of the unemployed” should indeed increase the share of the economic spoils that go to the kapitalists.
And that is exactly what is happening now. In the banking crisis, the nation’s productive capacity declined because of a paucity of available capital, in particular because banks were forced to de-lever. Output declined, and after the shock adjustments the margins of corporate America rose sharply (which I recently illustrated here), approaching the record levels set earlier in the decade of the 00s. And real wages stagnated. Be very clear on this point: it is real wages which are supposed to stagnate when labor is plentiful, not nominal wages.
Now, what should happen next in a free market system is that the real cost of capital should decline, or real wages should increase, or both, as labor is substituted for capital because of the shortage of capital. We indeed see that the real cost of capital is declining, because real rates are sharply negative out to 10 years and equities are trading at lusty multiples. But real wages are stagnating, going exactly nowhere over the last 36 months. Why is the adjustment only occurring on the capital side, with bull markets in bonds and stocks?
We can thank central bankers, and especially Dr. Bernanke and the Federal Reserve, for working assiduously to lower the cost of capital – also known as supporting the markets for capital. This has the effect, hopefully unintended, of lowering the level at which the convergence between real wages and the real cost of capital happens; and of course, it obviously also favors the existing owners of capital. By defending the owners of capital (and, among other things, refusing to let any of them go out of business), the Fed is actually helping to hold down real wages since there is no reason to substitute away from capital to labor!
But all of this happens in real space. One way that the real cost of capital and the real wage can stay low is to increase the price level, which is exactly what is happening. We call this inflation.
Note: The following blog post originally appeared on June 14, 2012 and is part of a continuing year-end ‘best of’ series, calling up old posts that some readers may have not seen before. I have removed some of the references to then-current market movements and otherwise cut the article down to the interesting bits. You can read the original post here.
That said, there could be some signs that core CPI is flattening out. Of the eight ‘major-groups’, only Medical Care, Education & Communication, and Other saw their rates of rise accelerate (and those groups only total 18.9% of the consumption basket), while Food & Beverages, Housing, Apparel, Transportation, and Recreation (81.1%) all decelerated. However, the deceleration in Housing was entirely due to “Fuels and Utilities,” which is energy again. The Shelter subcategory accelerated a bit, and if you put that on the “accelerating” side of the ledger we end up with a 50-50 split. So perhaps this is encouraging?
The problem is that there is, as yet, no sign of deceleration in core prices overall, while money growth continues apace. I spend a lot of time in this space writing about how important money growth is, and how economic growth doesn’t drive inflation. I recently found a simple and elegant illustration of the point, in a 1999 article from the Federal Reserve Bank of Atlanta’s Economic Review entitled “Are Money Growth and Inflation Still Related?” Their conclusion is pretty straightforward:
“…substantial changes in inflation in a country are associated with changes in the growth of money relative to real income…the evidence in the charts is inconsistent with any suggestion that inflation is unrelated to the growth of money relative to real income. On the contrary, there appears to be substantial support for a positive, proportional relationship between the price level and money relative to income.”
But the power of the argument was in the charts. Out of curiosity, I updated their chart of U.S. prices (the GDP deflator) versus M2 relative to income to include the last 14 years (see Chart, sources: for M2 Friedman & Schwartz, Rasche, and St. Louis Fed, and Measuring Worth for the GDP and price series). Note the chart is logarithmic on the y-axis, and the series are scaled in such a way that you can see how they parallel each other.
That’s a pretty impressive correlation over a long period of time starting from the year the Federal Reserve was founded. When the authors produced their version of this chart, they were addressing the question of why inflation had stayed above zero even though M2/GDP had flattened out, and they noted that after a brief transition of a couple of years the latter line had resumed growing at the same pace (because it’s a logarithmic chart, the slope tells you the percentage rate of change). Obviously, this is a question of why changes in velocity happen, since any difference in slopes implies that the assumption of unchanged velocity must not hold. We’ve talked about how leverage and velocity are related before, but an important point is that the wiggles in velocity only matter if the level of inflation is pretty low.
A related point I have made is that at low levels of inflation, it is hard to disentangle growth and money effects on inflation – an observation that Fama made about thirty years ago. But at high levels of inflation, there’s no confusion. Clearly, money is far and away the most important driver of inflation at the levels of inflation we actually care about (say, above 4%!). The article contained this chart, showing the same relationship for Brazil and Chile as in the chart updated above:
That was pretty instructive, but the authors also looked across countries to see whether 5-year changes in M2/GDP were correlated with 5-year changes in inflation (GDP deflator) for two windows. In the chart below, the cluster of points around a 45-degree line indicates that if X is the rate of increase in M2/GDP for a given 5-year period, then X is also the best guess of the rate of inflation over the same 5-year period. Moreover, the further out on the line you go, the better the fit is (they left off one point on each chart which was so far out it would have made the rest of the chart a smudge – but which in each case was right on the 45-degree line).
That’s pretty powerful evidence, apparently forgotten by the current Federal Reserve. But what does it mean for us? The chart below shows non-overlapping 5-year periods since 1951 in the U.S., ending with 2011. The arrow points to where we would be for the 5-year period ending 2012, assuming M2 continues to grow for the rest of this year at 9% and the economy is able to achieve a 2% growth rate for the year.
So the Fed, in short, has gotten very lucky to date that velocity really did respond as they expected – plunging in 2008-09. Had that not happened, then instead of prices rising about 10% over the last five years, they would have risen about 37%.
Are we willing to bet that this time is not only different, but permanently different, from all of the previous experience, across dozens of countries for decades, in all sorts of monetary regimes? Like it or not, that is the bet we currently have on. To be bullish on bonds over a medium-term horizon, to be bullish on equity valuations over a medium-term horizon, to be bearish on commodities over a medium-term horizon, you have to recognize that you are stacking your chips alongside Chairman Bernanke’s chips, and making a big side bet with long odds against you.
I do not expect core inflation to begin to fall any time soon. [Editor's Note: While core inflation in fact began to decelerate in the months after this post, median inflation has basically been flat from 2.2% to just above 2.0% since then. The reason for the stark difference, I have noted in more-recent commentaries, involves large changes in some fairly small segments of CPI, most notably Medical Care, and so the median is a better measure of the central tendency of price changes. Or, put another way, a bet in June 2012 that core inflation was about to decline from 2.3% to 1.6% only won because Medical Care inflation unexpectedly plunged, while broader inflation did not. So, while I was wrong in suggesting that core inflation would not begin to fall any time soon, I wasn't as wrong as it looks like if you focus only on core inflation!]
The phrase “money relative to income” comes from a manipulation of the monetary identity, MV≡PQ. If V is constant, then P≡M/Q, which is money relative to real output, and real output equals income.
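The footnote’s manipulation can be illustrated numerically. The 9% money growth and 2% real growth figures echo the scenario discussed in the text, but the constant-velocity arithmetic below is purely illustrative and is not a reconstruction of the historical counterfactual:

```python
# The monetary identity M*V = P*Q rearranged: holding V constant,
# P = M*V/Q, so the price level tracks money relative to real income.
def price_level(m, q, v=1.0):
    """Price level implied by the identity P = M*V/Q."""
    return m * v / q

# If money grows 9%/yr while real income grows 2%/yr, constant velocity
# implies prices compound at roughly (1.09/1.02) - 1, about 6.9% per year.
m, q = 1.0, 1.0
for _ in range(5):
    m *= 1.09
    q *= 1.02
print(round(price_level(m, q) - 1, 3))  # cumulative 5-year price rise → 0.394
```

That is, five years of 9% money growth against 2% real growth implies roughly a 39% cumulative rise in the price level if velocity does not change.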
Note: The following blog post originally appeared on February 3, 2011 (with an additional reference that was referred to in a February 17, 2012 post) and is part of a continuing year-end ‘best of’ series, calling up old posts that some readers may have not seen before. I have removed some of the references to then-current market movements and otherwise cut the article down to the interesting bits. You can read the original post here.
Rising energy prices, if they rise for demand-related reasons, needn’t be a major concern. Such a price rise acts as one of the “automatic stabilizers” and, while it pushes up consumer prices, it also acts to slow the economy. This helps reduce the need for the monetary authority to meddle (not that anything has stopped them any time recently). It doesn’t need to respond to higher (demand-induced) energy prices, because those higher prices are serving the usual rationing function of higher prices vis-à-vis scarce resources.
But when energy prices (or, to a lesser extent, food prices) rise because of supply-side constraints – say, reduced traffic through the Suez Canal, or fewer oil workers manning the pumps in a major oil exporting region – then that’s extremely difficult for the central bank to deal with. More-costly energy will slow the economy inordinately, and higher prices also translate into higher inflation readings so that if the central bank responds to the economic slowdown they risk adding to the inflationary pressures.
One of the ways that we can restrain ourselves from getting too excited, too soon, about the upturn in employment is to reflect on the fact that surveys still indicate considerable uncertainty and pessimism among the people who are vying for those jobs (or clinging to the ones they have, hoping they don’t have to compete for those scarce openings). This is illustrated by the apparent puzzle that Unit Labor Costs (reported yesterday) remain under serious pressure and Productivity continues to rise at the same time that profit margins are already extremely fat. Rising productivity is normal early in an expansion, but the bullish economists tell us that the expansion started a year and a half ago. We’re about halfway through the duration of the average economic expansion (if you believe the bulls). And fat profit margins are not as normal early in an expansion.
Now, we don’t measure Productivity and Unit Labor Costs very well at all. Former Fed Chairman Greenspan used to say that we need 5 years of data before we can spot a change in trend, and he may be low. But it seems plausible that there remains downward pressure on wages. Call it the “industrial reserve army of the unemployed” effect. While job prospects are improving, they are apparently not improving enough yet for employed people to start pressing their corporate overlords to spread more of the profits around to the proletariat.
Fear not, however, that this restrains inflation. The evidence that wage pressures lead to price pressures (and conversely, the absence of wage pressures suggest an absence of price pressures) is basically non-existent. Let me present two quick charts that make the point simply.
The chart above (Source for data: Bloomberg) shows the relationship between the Unemployment Rate and the (contemporaneous) year-on-year rise in Average Hourly Earnings. I have divided the chart into four phases: 1975-1982 (a period which runs from roughly the end of wage-and-price controls in mid-1974 until the abandoning of the monetarist experiment near the end of 1982), a “transition period” of 1983-1984, the period of 1985-2007 (the “modern pre-crisis experience”), and a rump period of the crisis until now. Several interesting results obtain.
First of all, there should be no surprise that the supply curve for labor has the shape it does: when the pool of available labor is low, the price of that labor rises more rapidly; when the pool of available labor is high, the price of that labor rises more slowly. Labor is like any other good or service; it gets cheaper if there’s more of it for sale! What is interesting as well is that abstracting from the “transition period,” the slopes of these two regressions are very similar: in each case, a 1% decline in the Unemployment Rate increases wage gains by about ½% per annum. Including the rump period changes the slope of the relationship slightly, but not the sign. This may well be another “transition” period leading to a permanent shift in the tradeoff of Unemployment versus wage inflation.
But clearly, then, when Unemployment is high we can safely conclude that since there are no wage pressures there should be no price pressures, right?
The second chart puts paid to that myth. It shows the same periods, but plots changes in core CPI, rather than Hourly Earnings, as a function of the Unemployment Rate. This is the famous “Phillips Curve” that postulates an inverse relationship between unemployment and inflation. The problem with this elegant and intuitive theory is that the facts, inconveniently, refuse to provide much support. [Note: the above chart is very similar to one appearing in this excellent article by economist John Cochrane, which appeared in the Fall of 2011.]
Why does it make sense that wages can be closely related to unemployment, but inflation is not? Well, labor is just one factor of production, and retail prices are not typically set on a labor-cost-plus basis but rather reflect (a) the cost of labor, (b) the cost of capital, (c) the proportion of labor to capital, and importantly (d) the rate of substitution between labor and capital. This last point is crucial, and it is important to realize that the rate of labor/capital substitution is not constant (nor even particularly stable). When capital behaves more like a substitute for labor, a plant owner can keep customer prices in check and sustain margins at the same time by deepening capital. This shows up as increased productivity, and causes the relationship between wages and end product prices to decouple. Indeed, in the second chart above the R2s for both periods are…zero!
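The contrast between the two charts can be sketched with a quick regression exercise. The data below are synthetic, invented purely to illustrate the pattern the charts show (a roughly -½ slope for wages, and an R2 of essentially zero for core CPI); they are not the actual earnings or CPI series:

```python
import numpy as np

rng = np.random.default_rng(2)

def slope_and_r2(x, y):
    """OLS slope and R-squared of y regressed on x."""
    b, a = np.polyfit(x, y, 1)
    resid = y - (b * x + a)
    r2 = 1 - resid.var() / y.var()
    return b, r2

# Synthetic illustration: wage growth responds to unemployment
# (slope near -0.5, as in the text), while core CPI does not.
unemp = rng.uniform(4, 10, 200)                        # unemployment rate, %
wages = 6.0 - 0.5 * unemp + rng.normal(0, 0.3, 200)    # wage "Phillips curve"
cpi = rng.normal(2.5, 1.0, 200)                        # unrelated to unemployment

b_w, r2_w = slope_and_r2(unemp, wages)
b_c, r2_c = slope_and_r2(unemp, cpi)
print(round(b_w, 2), round(r2_w, 2), round(r2_c, 2))
```

Run on data built this way, the wage regression recovers a meaningful negative slope with a high R2, while the price regression fits essentially nothing, which is the shape of the puzzle the two charts present.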
This isn’t some discovery that no one has stumbled upon before. In a wonderful paper published in 2000, Gregory Hess and Mark Schweitzer at the Cleveland Fed wrote that
It turns out that the vast majority of the published evidence suggests that there is little reason to believe that wage inflation causes price inflation. In fact, it is more often found that price inflation causes wage inflation. Our recent research, which updates and expands on the current literature, also provides little support for the view that wage gains cause inflation. Moreover, wage inflation does a very poor job of predicting price inflation throughout the 1990s, while money growth and productivity growth sometimes do a better job. The policy conclusion to be drawn is that wage inflation, whether measured using labor compensation, wages, or unit-labor-costs growth, is not a reliable predictor of inflationary pressures. Inflation can strike unexpectedly without any evidence from the labor market.
The real mystery is why million-dollar economists, who have access to the exact same data, continue to propagate the myth that wage-push inflation exists. If it does, there is no evidence of it.