Strategies for avoiding equity bubbles
The bursting of equity bubbles damages investors’ portfolios, but Robert Keavney argues that with robust valuation tools, bubbles can be identified in advance – and in time to take defensive action.
Money Management Analysis
Some investment professionals feel it is speculative or guesswork to attempt to identify bubbles. I have heard it argued that one can’t time markets, so it follows that one can’t know whether markets are expensive or cheap. But this does not follow.
Timing is a question of when an event will occur (eg, when a bubble will burst). Generally this cannot be known. But that any bubble will burst at some time is somewhere between highly probable and certain.
For example, most people today would accept that the American and world stock markets became over-valued in the dotcom period, peaking in 2000. Technology stocks traded with price/earnings (PE) ratios in the hundreds (and in some cases, in the thousands), stocks with no profit were being valued as a multiple of revenue, and companies that changed their name to include ‘dot com’ experienced strong growth in share prices even if they had nothing to do with the Internet.
It is utterly implausible to suggest that this period was not a bubble. Yet, even those who recognised it at the time had no idea how long the bubble would last. Indeed, the market rose for several years beyond the point where it could first be described as excessive before the market collapsed.
We can therefore conclude that the inability to time markets is not inconsistent with being able to determine when they are dangerously over-valued.
Priority number one
It is often said, and experience confirms, that asset allocation is more important than fund/security selection in determining overall returns.
Consistent with this, I believe that recognising bubbles and taking defensive action is the single most important element in achieving superior long-term investment returns.
Very few financial planners are able to consistently produce an equity return 2 per cent per annum above index. Those who could would deliver a valuable enhancement to clients’ portfolios. However, an adviser who produced only index-like returns on the equity portion of portfolios, but who had recognised over-valuation and, say, halved normal equity exposure before the popping of the dotcom and global financial crisis (GFC) bubbles, would have achieved a stronger long-term return than the adviser who achieved 2 per cent excess but was not defensive at market peaks.
Surely it follows that more time should be devoted to market valuation than to stock/fund manager selection.
Fair value
Presumably, buying at ‘fair value’ implies that one has reasonable prospects of not looking back and feeling one paid too much. Of course, no guarantees are possible as markets can move away from fair value, but surely this is the essence of the concept of fair value (for completeness, we should add that fair value also implies ‘not outstandingly cheap’).
Can fair value fall by 85 per cent in 16 months, and then rise by 543 per cent in the next 15 months? If fair value could fall by 85 per cent in a year-and-a-third, it could be dangerous to pay fair price. Surely this would undermine any sense that fair value is a useful concept.
This introduces the principle that fair value must not be volatile. As John Hussman describes it, valuation must be based on smooth, low variability fundamentals.
Valuation tools
The most commonly used tool in valuing markets is PE. Figure 1 shows the earnings and price over the last century. Price is unambiguous, but earnings are not. Should a robust valuation tool use actual/historical or forecast earnings?
Any valuation tool should be back-tested over very long periods, which requires historical data. There is a huge volume of data for past actual earnings, whereas there is no way of knowing consensus earnings forecasts 50 or 100 years ago, hence there is no capacity to adequately test the veracity of forecasts using them.
Further, market earnings forecasts are notoriously unreliable, generally erring on the optimistic side. Thus, only historical earnings should be used in valuation tools for whole markets (the situation is quite different with individual stocks).
Yet, even recognising that we are interested in historical earnings, which of the various measures of earnings should we use?
The latest historical earnings data on the S&P 500 index website is for 30 June, 2009. Standard & Poor’s (S&P) reports that the PE of the market on that day was both 23.1 and 122.4 – depending on which method of measuring earnings is used. As the higher number is more than five times the lower, each gives a very different impression of the level of the market. At a PE of 122 the US market would have been its most expensive ever – a surprising result, as the market’s price was by then far below its 2007 peak.
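The gap between the two measures can be illustrated with a back-of-the-envelope calculation. The two PE figures are those reported by S&P for 30 June 2009; the index level used here is an assumed round figure for illustration only:

```python
# Back out the earnings implied by each PE measure at the same index
# price. The index level is an assumed round figure (illustrative);
# the two PE ratios are the S&P-reported figures for 30 June 2009.
price = 920.0           # assumed S&P 500 level (illustrative)
pe_operating = 23.1     # PE on 'operating' earnings (OE)
pe_as_reported = 122.4  # PE on 'as reported' earnings (ARE)

implied_oe = price / pe_operating      # roughly 39.8
implied_are = price / pe_as_reported   # roughly 7.5

# The same market price looks moderately priced on one earnings
# measure and the most expensive in history on the other.
print(round(implied_oe, 1), round(implied_are, 1))
```

The point is not the precise figures but that both PEs describe the same price: only the earnings denominator differs, by a factor of more than five.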
S&P publishes ‘as reported’ earnings (ARE) and ‘operating’ earnings (OE). As the name implies, ARE is the bottom line in companies’ published accounts – their reported profits.
Intradaytips.com describes OE as: “Earnings without considering certain expenses such as inventory write downs, severance pay, depreciation and amortisation charges, or just about anything else the company feels like excluding to make its earnings look better.” While this may be a little harsh, it contains an element of truth. Let’s just say OE excludes certain expenses.
OE have averaged 19 per cent higher than ARE from 1988 through 2009. The amount by which OE exceeds ARE follows a strongly growing trend line. Directors are excluding more and more from their purported operating profits. It is difficult to suggest an explanation for this without reflecting poorly on the character of corporate America – still, readers are free to draw their own conclusions.
Valuation models should use ARE, especially if historical back testing is desired.
Normalising
Now we must come to the fundamental flaw in PE as a valuation tool. Earnings are highly volatile (from here on, ‘earnings’ refers to ARE). S&P 500 earnings fell 85 per cent from September 2007 to January 2009, then grew – I mean exploded – by 543 per cent to October 2010.
This alone should undermine any confidence in PE using one year’s earnings as a valuation tool.
Normalising (smoothing) is the process used to overcome the problems just described. Professor Robert Shiller, author of the prescient Irrational Exuberance, used the average of the last 10 years’ earnings in his PE model. This smoothed out the wild swings in earnings just described, resulting in a more stable and meaningful sense of fair value for the market.
An even smoother result is produced by using average 20-year earnings. The smoothest line is a trend line, but this requires decisions about which period to use for the trend line. Using one or two decades of average earnings in PE models will produce reasonably sound estimates of market value.
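The effect of normalising can be sketched with hypothetical figures. The earnings series below is invented purely to mimic a cyclical profit collapse; it is not S&P data, and Shiller’s actual model uses inflation-adjusted monthly data:

```python
# Hypothetical annual index earnings, invented to mimic a cyclical
# collapse in the final year. This is a simplified sketch of the
# normalising idea, not Shiller's actual methodology.
earnings = [52, 55, 58, 61, 64, 67, 70, 73, 76, 11]  # final year crashes
price = 1150.0

# PE on one year's earnings swings wildly with the cycle...
pe_one_year = price / earnings[-1]                       # roughly 104.5

# ...while PE on 10-year average earnings barely moves.
pe_normalised = price / (sum(earnings) / len(earnings))  # roughly 19.6

print(round(pe_one_year, 1), round(pe_normalised, 1))
```

On one year’s collapsed earnings the market looks absurdly expensive; on smoothed earnings it looks close to ordinary. Only the second reading is a useful guide to value.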
Figure 2 shows Shiller’s PE for the S&P 500 over time. He calculates the current multiple to be approximately 23. If you run your eye across the chart you see that only the market peaks of 1901, 1929, 1966 and the recent period have ever exceeded this level. This tool, and others with a demonstrable track record, suggest US shares have again become dangerously over-valued.
The ‘X trap’
Earnings are cyclical. When they are above normal they will subsequently decline – and vice versa. It would seem sensible if investors were only willing to pay a lower PE multiple to buy shares when profits are abnormally high. If, say, profits were twice their norm, a PE of half its norm would keep prices stable and around fair value.
Conversely, when profits are below average a higher than average multiple would be sensible. If investors actually operated this way there would be no bubbles and no busts. Equities would grow steadily and profitably.
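Since price is simply the multiple times earnings, this counter-cyclical behaviour can be shown in a few lines. The ‘normal’ earnings and multiple below are hypothetical round numbers:

```python
# Price = PE x earnings. If investors paid half the normal multiple
# when profits ran at twice their norm (and vice versa), price would
# stay pinned to fair value through the cycle. Figures are hypothetical.
normal_earnings = 60.0
normal_pe = 15.0
fair_price = normal_pe * normal_earnings        # 900.0

# Cyclical peak: profits at twice their norm, multiple halved.
peak_earnings = 2 * normal_earnings
price_at_peak = (normal_pe / 2) * peak_earnings

# Cyclical trough: profits at half their norm, multiple doubled.
trough_earnings = normal_earnings / 2
price_at_trough = (normal_pe * 2) * trough_earnings

# Both equal fair value: no bubble at the peak, no bust at the trough.
print(price_at_peak, price_at_trough)
```

Bubbles arise precisely because investors do the opposite, applying an inflated multiple to inflated earnings.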
However, instead of this we fall into the ‘X trap’ – extrapolation. When conditions have been favourable and profits strong, investors extrapolate current favourable conditions into the future and are willing to pay a higher PE, justified in their mind by the future of unending profitability which they imagine.
Figure 2 shows that, in 2000 when profit margins were unsustainably fat, investors chose to pay higher PEs than ever in history. Conversely, when profits were low in the 1982 recession, investors only offered the lowest PEs since the 1930s.
Hence we have bubbles, and busts. This is exactly what a bubble is: high multiples of temporarily high profits.
Inflation
It is often said that lower inflation justifies higher PE multiples. As a matter of fact, there is a tendency for multiples to be above average when inflation is low. However, this does not mean it is justified.
Inflation is not stable, so it fails our low-volatility test for valuation metrics.
History has confirmed this. An examination of all low inflation periods when PEs were high, shows that subsequent market returns were disappointing as PEs eventually reverted. Low inflation does not justify high PEs.
The value of valuation
It is not hard to make money in rising markets. It would be hard not to. The big threat is losing it again in falling markets.
One benefit of looking for over-valuation is that it allows reasonable exposure to growth assets when valuations are in the fair value range. One is not forced to sit in conservative portfolios through the whole cycle, to defend against bubbles. The good news is that bubbles are measurable, as is seen in figures 1 and 2.
However, it must be acknowledged that dangerously expensive markets can continue for an inconveniently long time. The S&P 500 peaked in the dotcom era in 2000, then fell until 2003. From then to 2007 the market rose, and was dangerously over-priced for at least two years of that time.
Jeremy Grantham called this period the “greatest sucker rally in history”. That it was over-valued was ultimately confirmed by the collapse during the GFC. However, two years is a long time to maintain faith in your valuation models, when it is costing your clients money, and competitors (and sometimes even colleagues) are criticising you. It also requires an ability to sustain clients’ comfort with your strategy during this extended phase.
It requires a certain patience and strength of character to adhere to a strategy based on valuation for several years, while markets rise inexorably.
But then, no matter what strategy one follows, there will be multi-year periods where it is not producing optimal results. The real test is long-term returns.
Looking at figure 2 makes clear that it could easily have been recognised, before the peaks of 2000 and 2007, that markets were dangerous. Identifying bubbles is the main purpose of valuation tools. This can make a considerable difference to your clients’ returns and their satisfaction with you as an adviser.
Robert Keavney is an industry commentator.