Researchers to hedge fund investors: Don’t throw away Sharpe ratios just yet

Granted, it took a future Nobel Laureate to invent the Sharpe ratio.  Yet this handy measure has achieved unprecedented ubiquity largely because of its simplicity and ease of use, not necessarily its robustness.

The Sharpe ratio assumes that returns are symmetrical and bell-shaped – that the chance of winning is the same as the chance of losing and that these chances are easily predictable if you know the standard deviation.  But since hedge funds tend to invest in things that have asymmetrical returns (options, for example), their own returns are often skewed to the positive or negative.  Furthermore, these exotic instruments tend to increase the chances of both winning and losing even though the standard deviation of the fund may not change at all.
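To see why the standard deviation alone can miss this, here is a minimal simulation (all numbers are invented for illustration): a symmetric return stream and an option-like, negatively skewed one are rescaled to the same mean and volatility, and the Sharpe ratio cannot tell them apart.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Symmetric, bell-shaped monthly returns: mean 1%, std 4%
normal = rng.normal(0.01, 0.04, n)

# Option-like payoff: frequent small gains, rare large losses
# (a crude short-put profile), rescaled to the same mean and std
skewed = np.where(rng.random(n) < 0.95, 0.02, -0.18)
skewed = (skewed - skewed.mean()) / skewed.std() * 0.04 + 0.01

rf = 0.003  # hypothetical monthly risk-free rate

def sharpe(r, rf):
    """Classic Sharpe ratio: mean excess return over volatility."""
    return (r.mean() - rf) / r.std()

def skewness(r):
    """Third standardized moment."""
    return ((r - r.mean()) ** 3).mean() / r.std() ** 3

# Nearly identical Sharpe ratios...
print(sharpe(normal, rf), sharpe(skewed, rf))
# ...but very different tail risk
print(skewness(normal), skewness(skewed))
```

Both streams score roughly 0.175 on the Sharpe ratio, yet the second carries a skewness near -4, exactly the kind of tail risk the measure ignores.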

Given this, you’d think that better measures of hedge fund performance exist somewhere out there.  In fact, we’ve written about several candidates on these pages.  But a study by Martin Eling of the University of St. Gallen and Frank Schuhmacher of the University of Applied Sciences and Technology Aachen suggests that such a search might actually be in vain.  In fact, the duo say the choice of performance measure doesn’t actually influence the relative ranking of hedge funds much at all.  (They extend their research into the world of mutual funds in the May/June 2008 issue of the Financial Analysts Journal.)

While the integration of “higher moments” into performance metrics makes a lot of intuitive sense (the most notable tool being the “omega measure,” which takes the entire return distribution – including skew and kurtosis – into account; see the guest posting by its co-creator, Dr. William Shadwick), Eling and Schuhmacher seem to throw cold water on the notion that hedge funds are a special case requiring new and distinctive metrics to rank:
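For reference, the omega measure is straightforward to estimate from a return history: at a chosen threshold, it is the ratio of probability-weighted gains above the threshold to probability-weighted losses below it. A minimal empirical sketch (the sample data is simulated, not from any real fund):

```python
import numpy as np

def omega(returns, threshold=0.0):
    """Empirical omega ratio: mean gain above the threshold
    divided by mean shortfall below it."""
    r = np.asarray(returns, dtype=float)
    gains = np.maximum(r - threshold, 0.0).mean()
    losses = np.maximum(threshold - r, 0.0).mean()
    return gains / losses

rng = np.random.default_rng(1)
monthly = rng.normal(0.008, 0.03, 240)  # 20 years of simulated monthly returns
print(omega(monthly, threshold=0.0))
```

Because the numerator and denominator are partial expectations over the whole distribution, omega reflects every moment implicitly rather than bolting skew and kurtosis onto a mean–variance formula.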

“The main result from our empirical investigation is that the choice of performance measure does not affect the ranking of hedge funds as much as one would expect after studying the performance measurement literature. It appears that, even though hedge fund returns are not normally distributed, the first two moments (i.e., mean and variance) describe the return distribution sufficiently well.”

Their methodology is about as straightforward as it gets.  Eling and Schuhmacher rank a list of thousands of hedge funds using various metrics – one at a time.  Then they calculate the correlation between the resulting rankings.
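That procedure can be sketched in a few lines. Here two of the metrics – Sharpe and Sortino – are computed on simulated fund histories (the data and parameters are placeholders, not the study’s), the funds are ranked by each, and the Spearman correlation of the two rankings is obtained as the Pearson correlation of the rank vectors:

```python
import numpy as np

rng = np.random.default_rng(2)
n_funds, n_months = 500, 60
rf = 0.003  # hypothetical monthly risk-free rate

# Simulated monthly return history for each fund (placeholder data)
returns = rng.normal(0.008, 0.03, (n_funds, n_months))

def sharpe(r):
    return (r.mean() - rf) / r.std()

def sortino(r):
    """Excess return over downside deviation below the risk-free rate."""
    downside = np.minimum(r - rf, 0.0)
    return (r.mean() - rf) / np.sqrt((downside ** 2).mean())

def rank(x):
    # ordinal ranks: 0 = worst fund, n-1 = best
    return np.argsort(np.argsort(x))

s1 = np.array([sharpe(r) for r in returns])
s2 = np.array([sortino(r) for r in returns])

# Spearman rank correlation = Pearson correlation of the rank vectors
rho = np.corrcoef(rank(s1), rank(s2))[0, 1]
print(round(rho, 3))
```

On data like this the two rankings come out almost identical, which is the pattern Eling and Schuhmacher report across their full set of metrics.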

According to the study’s correlation table, the rankings obtained using the other metrics have a 0.91 to 1.0 correlation with those obtained using the plain old Sharpe ratio.  Put another way,

“…the choice of performance measure does not have a crucial influence on the relative evaluation of hedge funds.  For example, in the portfolio context there are 98 hedge funds among the top 100 funds according to an evaluation made using the Sharpe ratio, which are also among the top 100 funds according to an evaluation made using Omega…”

So while most of the more complex metrics make intuitive sense, the question is one of balance between complexity and practicality.  The authors of the study would seem to agree – suggesting that since the Sharpe ratio is the “best known and best understood performance…it might be considered superior to other performance measures from a practitioner’s point of view.”

Editor’s Note: As you might guess, early reaction to this post has been heated.  One reader, for example, points out that measures which integrate “higher moments” of return distributions provide critical insights – regardless of their effect on fund rankings:

“…this study is about averages.  But what does that mean?  So you buy the fund that has a high Sharpe but the underlying distribution is bimodal and then the left hand hump gets you.  Are you going to be happy?  I’d say no.  Even Bill Sharpe warns users of the shortcomings of his own measure.  I can’t believe regulators haven’t made the use of Sharpe ratios illegal for describing hedge fund performance!”


One Comment

  1. Bill aka NO DooDahs!
    September 15, 2008 at 7:37 am

    While the methodology is straightforward, it is flawed. It measures merely the statistical tendency for high-ranking Sharpes to also rank highly on Sortino, etc.

    A histogram of how many funds changed rankings, and by how much, would be a better metric.

    For example, of the 1000s of funds, rank them by deciles using Sharpe. What percentage of funds changed decile categorization by no deciles, one decile, two deciles, etc., when re-ranked by each other metric?

    Of course, from the POV of a potential investor trying to determine whether Fund A is better than Fund B, no such “average-based” methodology provides help.

    If one has a minimum acceptable return, then one needs to use an evaluation metric that incorporates the minimum acceptable return. It really is as simple as that …
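For what it’s worth, the decile-migration histogram the commenter proposes is easy to sketch. The two metrics below are simulated stand-ins rather than actual Sharpe and Sortino scores:

```python
import numpy as np

rng = np.random.default_rng(3)
n_funds = 1000

# Stand-in scores: two hypothetical metrics that agree imperfectly
metric_a = rng.normal(size=n_funds)
metric_b = metric_a + 0.3 * rng.normal(size=n_funds)

def decile(scores):
    # 0 = bottom decile, 9 = top decile
    ranks = np.argsort(np.argsort(scores))
    return ranks * 10 // len(scores)

# How far did each fund move when re-ranked by the other metric?
shift = np.abs(decile(metric_a) - decile(metric_b))
moved, counts = np.unique(shift, return_counts=True)
for d, c in zip(moved, counts):
    print(f"{c} funds moved {d} decile(s)")
```

A table like this shows at a glance how many funds an investor screening by decile would have classified differently under another metric, information an average correlation number hides.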

