The voluntary nature of hedge fund reporting has led many commentators to view hedge fund indices – and, by extension, hedge fund industry performance – with a somewhat jaundiced eye. Since hedge fund databases are designed as surreptitious marketing vehicles for hedge funds, the worst-performing funds are likely to be absent from the indices.
But others counter that the best-performing funds are also likely to abstain from voluntary reporting, since they believe it might be viewed as kind of tacky by their rarefied clientele.
A recent study (“Out of the Dark: Hedge Fund Reporting Biases and Commercial Databases”) by Adam Aiken of Arizona State, Christopher Clifford of the University of Kentucky and Jesse Ellis of the University of Pittsburgh concludes that both are right – that “self reported hedge fund returns fail to account not only for some of the worst performing funds, but also for the best performing funds.”
The question remains, however: is the effect of this abstinence symmetrical, or is one of these groups larger than the other?
Unfortunately, it’s tough to measure the performance of funds that don’t report to any databases. But you can analyze SEC-registered funds of funds and deduce from their performance the returns of their underlying hedge funds. That’s just what Aiken, Clifford and Ellis did, since these funds of funds report holdings much like a mutual fund does in a 13F filing.
They found that up to two-thirds of the alpha apparently produced by hedge fund indices may simply reflect the fact that the dogs are missing. As the authors observe:
“This evidence indicates that estimates of average managerial talent using self-reported data are likely overstated and implies that extrapolating performance results from self-reported data to the population of hedge funds as a whole is difficult.”
Yikes. That’s a pretty serious indictment of hedge fund indices. (Although researchers have long known about this issue and routinely adjust index returns for this and other biases).
But what about the best performing funds? Are they also AWOL from hedge fund databases? And if so, do missing stars mitigate the effect of the missing dogs?
The trio determined that only 59% of the top 100 hedge funds in 2008 (as ranked by Institutional Investor) actually reported to either Lipper TASS or BarclayHedge, two major – although apparently arbitrarily selected – databases.
The researchers calculated that the downward bias of the missing stars was not, in fact, large enough to mitigate the upward bias resulting from the missing dogs. The result: a net upward bias in hedge fund index returns.
The upward bias caused by non-reporting dogs and corresponding downward bias from non-reporting stars is illustrated by the following chart from the trio’s paper. The horizontal axis is the performance bucket of both non-reporting and reporting funds. The line represents the outperformance of reporting funds vs. non-reporting funds.
As you can see, when returns stink (left side of chart – lowest return buckets), the reporting funds outperform their non-reporting brethren (i.e. terrible funds didn’t report). But when returns are hot, the reporting funds underperform their non-reporting brethren (i.e. awesome funds didn’t report).
Unfortunately, the story doesn’t end there. In fact, it gets more complicated since the missing 41 “top” funds ranked by II are generally “mega funds”. This means that on a dollar-weighted basis, the effect of missing stars may still be larger than the effect of missing dogs. With the top 100 funds managing nearly three-quarters of all hedge fund assets, the absence of 41% of them would likely amount to more assets than all of the non-reporting dogs put together. As a result, the authors conclude that “it is possible that from a value-weighted perspective the bias would in fact be negative.”
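To see how the sign of the bias can flip with the weighting scheme, here is a toy calculation. All fund returns and asset sizes below are invented for illustration – the point is only the mechanism the authors describe: non-reporting dogs are numerous but small, while non-reporting stars are few but enormous, so an equal-weighted comparison shows a positive bias while a value-weighted one can turn negative.

```python
# Toy illustration of equal- vs value-weighted reporting bias.
# All funds, returns, and AUM figures are hypothetical.

funds = [
    # (reports?, annual return, assets under management in $bn)
    (True,  0.08,  1.0),   # typical reporting fund
    (True,  0.07,  1.0),
    (True,  0.09,  2.0),
    (False, -0.20, 0.5),   # "dog": poor performer, small, doesn't report
    (False, -0.15, 0.5),   # another small non-reporting dog
    (False,  0.25, 20.0),  # "star": mega fund that never reported
]

def mean(xs):
    return sum(xs) / len(xs)

def wmean(pairs):
    # Value-weighted mean over (return, weight) pairs.
    total = sum(w for _, w in pairs)
    return sum(r * w for r, w in pairs) / total

reported = [(r, a) for rep, r, a in funds if rep]
universe = [(r, a) for _, r, a in funds]

# Equal-weighted: the two missing dogs dominate, so the reported
# average overstates the true average (positive bias).
ew_bias = mean([r for r, _ in reported]) - mean([r for r, _ in universe])

# Value-weighted: the one huge missing star dominates, so the
# reported average understates the true average (negative bias).
vw_bias = wmean(reported) - wmean(universe)

print(f"equal-weighted bias: {ew_bias:+.3f}")  # positive
print(f"value-weighted bias: {vw_bias:+.3f}")  # negative
```

With these made-up numbers the reported universe looks better than the full universe on an equal-weighted basis, but worse on a value-weighted basis – the same asymmetry the authors suggest could make the bias "in fact be negative" from a value-weighted perspective.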