What is wrong with the ratings system
Who will be the first to sue a research house for its star rating - an investor or an adviser? Will the grounds be negligence or conflicts of interest?
The practice of suing those who make generic recommendations on securities, even if they do not give specific advice to a client, has begun in America. A number of cases have been initiated that involve claims against analysts for issuing general recommendations on securities on which investors relied. Is a five star rating an expert’s recommendation on which an investor is reasonably entitled to rely?
Star ratings are becoming a major force in our industry, partly because they are simplistic.
Fund managers complain that some of the individuals judging them have neither the skills nor experience to do so.
However, the recent trend for individuals, such as Tom Cottam, to join the ranks of researchers responds to this criticism. We are now beginning to see individuals who have had long-term, successful, high-level experience in funds management adding their skills to the research of their peers.
This touches on the biggest challenge that ought to be put to research houses - to establish that they have competence in what they claim to do.
All that has been necessary to date to give apparently authoritative judgements on the skills of fund managers is to appropriate the name ‘research house’ to oneself. As a consequence, any opinions expressed, whether well-founded or shallow, carry an aura of authority and in some cases materially influence fund manager inflows.
The role of fund managers in this is intriguing, as they face a dilemma. Advertisements run by fund managers report their four or five star rankings with great enthusiasm. However, some will find they have created a rod for their own back.
When they receive a low ranking from some research house, it will be hard to debunk it as unauthoritative when they have previously held up favourable rankings as significant.
Fund managers’ advertisements reporting their ratings are the primary factor that will add credibility to star ratings in the minds of the investing public.
This creates a delightful opportunity for any business to build its profile at no cost - adopt the term ‘research house’, ascribe high ratings to various fund managers, and find its brand featured in advertisements as an authoritative arbiter of quality investments. You will recognise that I am not the first to think of this.
It will be interesting when most research houses have a reasonable number of years of ratings histories available. We will be able to calculate the average returns and volatility achieved by their five star funds, four star funds and so forth. This will provide an objective means of measuring the relative skills of the various research houses, ie which recommendations have done best.
Indeed, we will be able to see if the rankings mean anything - that is, whether there is a high correlation between ranking and result. Will the five star funds outperform one star funds over the long term? It is not self-evident that this will always be so.
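The back-test described above is simple to sketch. The following is a minimal illustration, using entirely invented fund data (the ratings and returns below are hypothetical, not drawn from any research house): bucket funds by their star rating at the start of a period, then compare average subsequent return and volatility per bucket, and check whether rating and outcome are correlated at all.

```python
# Hypothetical illustration: do higher-rated funds deliver higher
# subsequent returns? All data below is invented for the sketch.
from statistics import mean, stdev

# (star rating at start of period, subsequent annual return in %)
funds = [
    (5, 9.1), (5, 7.4), (5, 4.0),
    (4, 8.2), (4, 5.5),
    (3, 6.8), (3, 3.9),
    (2, 7.7), (2, 2.1),
    (1, 5.0), (1, 1.2),
]

# Average return and volatility (sample std dev) per star bucket.
for bucket in range(5, 0, -1):
    returns = [r for s, r in funds if s == bucket]
    vol = stdev(returns) if len(returns) > 1 else 0.0
    print(f"{bucket} stars: mean {mean(returns):.1f}%, vol {vol:.1f}%")

def pearson(xs, ys):
    # Pearson correlation coefficient: +1 means ratings and results
    # move together perfectly, 0 means the rating tells you nothing.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

stars, rets = zip(*funds)
print(f"rating/return correlation: {pearson(stars, rets):.2f}")
```

With real ratings histories in place of the invented tuples, the same few lines would answer the question the column poses: whose stars actually predict anything.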
In my view, the skills of the researchers will be seen to differ substantially. I would suggest that any method that emphasises short-term past returns may prove counterproductive by systematically encouraging the equivalent of buying at the top and selling at the bottom. Experience teaches repeatedly that emphasising short-term performance is a trap.
Every quality fund manager will experience variations of performance. If there is a systematic tendency to downgrade after underperformance, it may simply discourage participation in a recovery.
Similarly, if upgrades systematically follow periodic outperformance, it may encourage investment at a peak. In this case, it may be good timing to treat a downgrade as a buy signal!
Quality research should be of predictive value, not merely a reporting of history. Taking a topical case, Table 1 compares BT Funds Management’s performance with the market average. Would it have been appropriate to discourage investment in their previous troughs in 1987, 1989 and 1996?
In each case, the subsequent year saw them outperform. Their worst ever relative performance was 31.4 per cent behind the index up to September 1987. Over the next 12 months, they beat the market by 50.7 per cent.
Would it have been remotely intelligent to have downgraded BT just before the 1987 crash due to their apparently dreadful previous 12 months? Yet, a mechanistic focus on recent returns would inevitably encourage this.
This is not a guarantee of a pending BT outperformance today. It merely highlights that one needs to know why an investment has underperformed, not merely that it has.
Common sense would suggest that there is the same range of competence among organisations operating in the research area as there is with financial planners and fund managers.
The fact that an organisation offers ratings services does not guarantee superior skills in research. Yet ratings determine the recommended list of most advisers and consequently drive industry funds flow.
Fortunately, the number of ratings services has increased and they frequently disagree with each other. Consequently, the channelling of funds flow is not excessively narrow.
However, the differences in ratings highlight the necessarily subjective nature of these evaluations. Investment research is judgement, not objective science.
Planners need to form careful assessments about potential conflicts of interest of research houses and should ask about their revenue streams.
Some researchers charge managers to rate them. So long as there is no correlation between the charge and the number of stars, there would seem to be no inherent problem with this. However, it is unlikely that those who are consistently rated lowly would continue to pay for the privilege, so this may create a bias to rate up.
Those who charge managers a fee to publicise their rating may have a similar bias, as a one or two star manager is unlikely to wish to broadcast this. Further, if a manager is paying substantial fees to advertise its high rating, this could give the research house a disinclination to downgrade it.
It is instructive to analyse the spread of ratings given, to see if they produce a normal distribution curve or show a tendency for most to be above average. In this, look at actual history, not stated policy, as these may not align.
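The distribution check suggested above takes only a few lines. Again the ratings below are invented for illustration: tally one research house’s published ratings and compare the mean against the midpoint of the scale - a mean well above the midpoint is the tell-tale of a rate-up bias.

```python
# Hypothetical illustration: is a research house's ratings spread
# roughly normal, or skewed above average? Sample ratings invented.
from collections import Counter
from statistics import mean

ratings = [5, 4, 4, 5, 3, 4, 5, 4, 2, 4, 5, 3, 4, 5, 4]

# Simple text histogram of the spread.
tally = Counter(ratings)
for stars in range(1, 6):
    print(f"{stars} stars: {'#' * tally.get(stars, 0)}")

midpoint = 3.0  # centre of a 1-5 star scale
skew = mean(ratings) - midpoint
print(f"mean rating {mean(ratings):.2f} "
      f"({'above' if skew > 0 else 'at or below'} the scale midpoint)")
```

In this invented sample the mean sits well above three stars - exactly the pattern to look for in a house’s actual history rather than its stated policy.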
A most useful source of information about the relative quality of research houses is to ask fund managers their views - off the record, as they respect the power of the researchers. In my experience these opinions are frequently:
Strongly held;
Surprisingly uniform; and
Not correlated with ratings received.
You know the views of researchers on managers. It’s useful to also know the reverse.
Rob Keavney is the managing director of Investor Security Group (ISG).