Don’t be misled by past performance
Surely the biggest urban myth of our industry is that last year’s top performers are this year’s bottom performers.
This trite old chestnut is trotted out at conference after presentation after briefing after advertisement.
This first ‘research in focus’, which will become a regular monthly feature, is devoted to explaining what is really known and proven about the ability of past performance to predict the future.
Fund managers keeping an eye on the Australian Securities and Investments Commission’s (ASIC) intentions to curtail how performance can be used in advertising will be aware of an excellent paper on the issue of performance persistence, A review of research on the past performance of managed funds, by the Funds Management Research Centre.
According to ASIC, it has no intention of prescribing how advisers may use past performance with their clients.
But where is the logic in regulating how fund managers use performance data in advertisements, but stopping short of prescribing how researchers and advisers use performance with their clients, when 95 per cent of investment into funds is pre-researched and intermediated?
It is critical that advisers understand what types of performance analyses are best to use when selecting funds — and which to ignore when fund managers set about dazzling us with their performance graphs and tables.
So, here are five key conclusions from the 100 or so studies on performance persistency conducted in the past 40 years, in the USA, UK and Australia:
1. A number of studies have found that bad past performance increased the probability of bad future performance.
2. About half the studies found no correlation between good past and good future performance.
3. Where studies found support for performance persistency, it was more frequently in the shorter-term (one to two years) than in the medium to long-term. Studies came to inconsistent conclusions about which time periods (historical and future) were correlated.
4. Where persistence was found, the outperformance margin tended to be small.
5. Returns are only meaningful if adjusted for risk/volatility and comparing like with like.
Not one of the studies mentioned supports the ‘last year’s top performers are this year’s bottom performers’ myth.
That statement is a perversion of the conclusion that can be validly drawn from studies that found no evidence in support of performance persistence — the valid conclusion is that ‘there’s no evidence that the top performer over one period will be the top performer over another’. And even that statement flies in the face of the conclusions of studies that have found some degree of performance persistence.
The fact that most studies agree there is persistency among poor relative performers is really useful.
If it is hard to find fund managers that consistently outperform the relevant benchmark, we can at least take comfort from knowing it is much easier to conclude that those which don’t outperform will continue not to.
Knowing it is safe to ignore the worst funds in a category can save a lot of time. And it makes intuitive sense — once a fund is in a deep hole, it is a very big challenge for the manager to get out of it.
This suggests it is not such a hard sell to get investors to understand that letting go of a bad fund is good investment sense. And when they ask why they were in a bad fund in the first place, point to number two on the list above.
What is especially ironic is number three — that performance persistency is strongest over the short term, say one to two years.
As an industry, we’ve spent years and millions of dollars encouraging investors to ignore short-term performance, and focus on the long-term. And here’s a reasonable body of evidence that suggests short-term relative performance is relevant to picking funds, more so than long-term relative performance.
Product providers should take note: removing the costs and inconvenience of frequent switching would help greatly in this finding being applied in the real world.
Soucik (2002) investigated this issue in detail. Eliminating survivorship bias, and using both raw and risk-adjusted returns, Soucik found that for equity funds, there’s a generally symmetrical pattern to performance predictability: shorter-term risk-adjusted returns correlated with shorter-term future performance; and, medium to long-term risk-adjusted returns correlated (weakly) with medium to long-term performance.
So an equity fund investor should use two years of data to help paint a picture of up to two years into the future, and at least three years of past returns if looking at a three-year investment — but the two-year prediction is the more robust.
But Soucik found that the prediction curve is more conservative for fixed interest funds. Examining the last two years of risk-adjusted returns helps explain at most the ensuing 18 months of fixed interest returns, while five years of data are needed to predict three years of future performance and again, the shorter-term prediction is the more robust.
Interestingly, Soucik also found that predictive power is strongest at the extremes. Like many other studies, Soucik found that the very poor performers tend to have persistent performance. But Soucik also found that the very top performers have a degree of performance persistence too.
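For readers who like to see the mechanics, the basic persistence test underlying studies of this kind can be sketched in a few lines of Python: rank funds on their returns in one period, rank them again in the next, and measure how closely the two rankings agree. The fund returns and the Spearman calculation below are purely illustrative — they are not Soucik’s data or methodology.

```python
# Illustrative persistence check: rank-correlate funds' risk-adjusted
# returns in one period against their returns in the following period.
# All figures are invented for the example.

def ranks(values):
    """Return the rank (1 = lowest) of each value in its list."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman rank correlation, assuming no tied values."""
    n = len(x)
    d2 = sum((rx - ry) ** 2 for rx, ry in zip(ranks(x), ranks(y)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical risk-adjusted returns (%) for five funds over two successive periods
past_period   = [12.1,  9.4,  7.8,  3.2, -1.5]
future_period = [10.3,  8.9,  9.5,  2.1, -0.8]

rho = spearman(past_period, future_period)
print(f"Rank correlation between periods: {rho:.2f}")
# A value near +1 indicates persistence in relative rankings;
# a value near zero indicates no relationship between the periods.
```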
Index managers may well seize on the fourth conclusion — that where persistence was found, the outperformance margin tended to be small — with great glee.
Index funds have their place, but even small outperformance can be meaningful over time. Just a 0.1 per cent difference in returns on an initial $100,000 investment can mean a difference of more than $10,000 at the end of 20 years.
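A back-of-the-envelope compounding check makes the point. The 10 per cent base return used below is an assumption for illustration only; the claim above only requires that a 0.1 per cent margin, compounded on $100,000 over 20 years, grows into a five-figure difference.

```python
# Compounding illustration: the effect of a 0.1 percentage point return
# difference on an initial $100,000 over 20 years. The 10% base return
# is an assumed figure for illustration, not taken from the article.

initial = 100_000
years = 20
base_return = 0.10        # assumed annual return
margin = 0.001            # 0.1 percentage point outperformance

base_value = initial * (1 + base_return) ** years
better_value = initial * (1 + base_return + margin) ** years

print(f"Base fund after {years} years:   ${base_value:,.0f}")
print(f"Better fund after {years} years: ${better_value:,.0f}")
print(f"Difference:                      ${better_value - base_value:,.0f}")
# With these assumptions the gap is roughly $12,000, consistent with
# the 'more than $10,000' figure quoted above.
```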
But the last conclusion — that returns are only meaningful if adjusted for risk and comparing ‘like with like’ — is the most important.
How many advisers use the latest month-end performance stats in their financial plans? For years we have trained advisers to use research appropriately, and every single one of them began by naming month-end subcategory performance reports as their quantitative report of choice when selecting funds.
Hallahan (1999) concluded that the use of raw returns creates an overall impression of performance reversal (which is perhaps where our industry myth originated) but that the use of risk-adjusted returns confirms the existence of performance persistence.
Hallahan and Faff (2001) also found a dominant pattern of performance reversal using year-on-year raw returns of Australian rollover funds. And I’ve already outlined Soucik’s finding that the very worst and very best performers tend to have persistent risk-adjusted returns.
So throw away point-to-point raw returns performance reports immediately and never go back there. They’re useless. Useless from a predictive perspective, and useless from a valuation perspective (very few of us invest on exactly the same days as shown on the report).
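As a simple illustration of why risk adjustment matters, the sketch below ranks two hypothetical funds first on raw point-to-point return and then on a volatility-adjusted (Sharpe-style) basis. The return series and the risk-free rate are invented for the example; the ranking flip it produces is the point, not the numbers.

```python
# Raw versus risk-adjusted ranking: two hypothetical funds with made-up
# monthly returns. Fund B has the higher raw return, but once returns
# are scaled by volatility (a Sharpe-style ratio) Fund A ranks ahead.
from statistics import mean, stdev

fund_a = [0.8, 1.0, 0.7, 0.9, 1.1, 0.8]     # steady monthly returns (%)
fund_b = [4.0, -2.5, 3.8, -1.9, 4.2, -1.5]  # volatile monthly returns (%)
risk_free = 0.3                             # assumed monthly risk-free rate (%)

def raw_return(returns):
    """Point-to-point compounded return over the whole series (%)."""
    total = 1.0
    for r in returns:
        total *= 1 + r / 100
    return (total - 1) * 100

def sharpe(returns):
    """Mean excess return divided by the volatility of returns."""
    excess = [r - risk_free for r in returns]
    return mean(excess) / stdev(returns)

for name, series in [("Fund A", fund_a), ("Fund B", fund_b)]:
    print(f"{name}: raw {raw_return(series):5.2f}%  risk-adjusted {sharpe(series):5.2f}")
```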
The principles described above are basic foundation stones for the role of quantitative analysis in selecting funds. There is a distinct and qualified role for quantitative analysis, although it’s certainly not an exclusive role.
Future ‘research in focus’ features will concentrate on how to get the best out of quantitative analysis of funds.
Deirdre Keown is with brillient! and has been a funds analyst for the past 10 years.