Secrets Of Success: Who polices the investment police?
Two headlines this week have drawn attention, once again, to the controversies that surround the business of fund selection. Both underline why most investors find it hard to know what to believe on the subject.
One story, in a Sunday newspaper, quoted research from Professor Ian Tonks, at Exeter University. It found evidence that fund performance does "persist" from one period to the next. This is contrary to the view of the regulators that there is little or no merit in taking past performance into account when considering which fund to buy. On average, you would, they say, do as well with a dartboard, and better by picking funds on the basis of costs rather than performance.
The second story, by contrast, pointed to evidence that fund ratings, which many investors use to help them choose a fund to buy, "are unreliable and irrelevant". The source for this was another academic study, this time from a French business school.
Most of the best-known published ratings systems are based on a mixture of qualitative and quantitative measures, in which risk-adjusted past performance is given a significant weight. The implication is therefore once again that trying to predict which funds will do well in future is pretty much a waste of time, just as the Financial Services Authority (though not the providers of ratings) would prefer you to believe.
Which view is right? And should one take much notice anyway of academic studies of this kind (of which there are now a great number, stretching back more than 30 years)?
The answer, it seems to me, having ploughed through a lot of this stuff over the years, is that it is as well to be aware of the broad nature of the debate that goes on over the persistence of fund performance, but not to let it stop you applying common sense in choosing your funds. Being a slave to the latest academic nostrum can be an expensive habit.
The trouble with academic studies is that they tend to concentrate on aggregate data (in part because it is readily accessible and can be quickly run through a computer program to generate an easy, career-advancing article for a specialist journal). As a result, the exercise yields statistically significant results but doesn't actually tell you much of practical interest outside academia.
There is no question that the first study into fund management performance, by a young American academic called Michael Jensen, caused something of a sensation when it first appeared more than 30 years ago. It showed clearly that most mutual funds (the US equivalent of our unit trusts and Oeics) did not, in fact, outperform the market they were trying so hard to beat and which everyone at the time assumed was their rationale. Scores of subsequent studies have confirmed this.
The discovery helped to sustain the momentum towards index-tracking funds over the next 25 years. But for such a potentially damaging discovery, it has to be said that it has done surprisingly little to stop the phenomenal growth of the actively managed fund business around the world over the same time period. While index funds have since grown from nowhere to become a huge industry in their own right, actively managed funds still account for the larger part of the fund universe.
Could this be because the academics are missing something that most investors can see makes sense from a pragmatic point of view? While the nature of the academic mind is to test general hypotheses from analysis of a specific set of data, the pragmatic investor is more interested in finding the exceptions that prove the rule. It is the funds that appear to have bucked the trend of market-trailing performance, not the average fund nor the aggregate behaviour of competitor funds, that we want to know about.
The academics eschew the National Lottery (on sensible statistical grounds) and put their money into index funds of one kind or another, thereby guaranteeing moderate but reliable performance over time. But the animal spirits among us tend to say: "I don't care how unreliable the data is, just give me a shot at the handful of funds that have the potential to beat the market."
The issue then becomes one of trying to predict which of the funds will outperform in future, a task in which ratings (in my view) have a certain but limited role to play.
Any system of ratings is, by definition, mostly backward-looking. If it is to be based on any analysis, it has to have some grounding in the way that the fund has performed in the past, for the obvious reason that data about the future does not yet exist. Fund rating services today are much more sophisticated than when they first appeared. Most look at risk-adjusted rather than raw performance figures, and some (such as Citywire) focus on individual fund manager performance, recognising that managers of funds do move around a lot.
Services such as Standard & Poor's also interview managers about the way they run their funds in some depth, and claim to give a lot more weight to qualitative and style factors.
Having said that, I know of no convincing evidence that picking funds on the basis of ratings alone does much to improve your chance of finding a market-beating fund. The founder of one of the better known ratings services told me once that his analysis suggested a top rating increased the probability of an outperforming fund continuing to outperform from 50 per cent (what you would expect if there was no way of predicting) to about 55 per cent. Even if true, and replicated elsewhere, that is not much of an edge.
The study by Professor Tonks, which looked at the performance of pension funds from 1983 to 1997, suggests a figure of 58 per cent, but notes that this only applies for performance one year ahead, and to the specific sample period he looked at.
Would it be cost-effective to switch funds around every year, hoping to catch an eight-percentage-point edge? Almost certainly not. Are ratings services a waste of time? I would not go that far, as they have a useful information and monitoring function.
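To see why so slim an edge rarely survives costs, here is a rough back-of-the-envelope sketch. Every number in it is a hypothetical assumption chosen for illustration (the outperformance margins and the switching cost are not taken from the studies quoted above); only the 55 per cent and 50 per cent probabilities echo the figures mentioned earlier.

```python
# Hypothetical arithmetic: how much is a 55% vs 50% persistence edge worth?
# All magnitudes below are assumptions for illustration, not study findings.

p_random = 0.50    # chance a past winner keeps outperforming, picked at random
p_rated = 0.55     # chance quoted above for a top-rated fund
win_margin = 0.02  # assumed excess return in a year the fund outperforms
lose_margin = -0.02  # assumed shortfall in a year it does not
switch_cost = 0.005  # assumed annual cost of switching funds (0.5%)

def expected_excess(p_win):
    # Expected annual excess return given a probability of outperforming
    return p_win * win_margin + (1 - p_win) * lose_margin

# Gross value of the ratings edge: difference in expected excess return
edge = expected_excess(p_rated) - expected_excess(p_random)  # 0.002, i.e. 0.2% a year
net = edge - switch_cost  # negative once switching costs are deducted

print(f"gross edge per year: {edge:.2%}")
print(f"net of switching costs: {net:.2%}")
```

On these assumptions the rating is worth about 0.2 per cent a year before costs, and less than nothing after an annual switch. The precise figures are invented, but the shape of the conclusion is the column's point: a five- or eight-point improvement in hit rate is easily eaten by the cost of acting on it.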
Fund managers like them, because a rating provides some external validation, and they clearly have an influence on sales. The reality is that while a fund worth buying will almost certainly have a good rating, the reverse is not the case. A good rating does not guarantee the fund in question is worth owning.