By standard definition, “ambiguity aversion” is the preference for known risks over unknown risks, the known unknowns over the unknown unknowns. A recent paper discusses the portfolio-level consequences of this aversion.

The paper, written by Valery Polkovnichenko and Hui (Grace) Wang, explains that for an ambiguity-*neutral* investor, “adding active portfolio with statistically significant alpha always implies efficiency gain relative to the optimal portfolio of factors.” But that is not the case for an ambiguity-averse investor, who must get an efficiency gain above a certain threshold for the new active alpha to be worthwhile. The size of that threshold will depend on uncertainty about the factors, and about the expected returns of the active portfolio.
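That asymmetry can be sketched in code. The rule below is a toy illustration of the idea (not the authors’ actual model): an ambiguity-averse investor discounts the estimated alpha toward a worst case before deciding, while an ambiguity-neutral investor takes the estimate at face value. The function name and the standard-error shrinkage are my own illustrative assumptions.

```python
def include_active(alpha_hat, alpha_se, ambiguity=0.0):
    """Toy decision rule (illustrative, not the paper's model).

    An ambiguity-neutral investor (ambiguity=0) adds the active
    portfolio whenever the estimated alpha is positive. An
    ambiguity-averse investor shrinks the estimate by `ambiguity`
    standard errors and adds the portfolio only if that worst-case
    alpha is still positive.
    """
    worst_case_alpha = alpha_hat - ambiguity * alpha_se
    return worst_case_alpha > 0

# The same estimate clears the bar for a neutral investor
# but not for a sufficiently ambiguity-averse one:
print(include_active(0.02, 0.015, ambiguity=0.0))  # True
print(include_active(0.02, 0.015, ambiguity=2.0))  # False
```

The point of the sketch is that a statistically positive alpha (here, 2% with a 1.5% standard error) can fail the averse investor’s threshold even though it passes the neutral one’s.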

Polkovnichenko is with the division of research and statistics at the Federal Reserve. Wang is a Ph.D. candidate at the University of Texas at Dallas, School of Management.

**Some History**

Polkovnichenko and Wang are advancing a line of research that predates modern portfolio theory. It goes back to Frank Knight’s 1921 book, *Risk, Uncertainty, and Profit*. [That publication date precedes the birth of Harry Markowitz by six years.] Knight wrote of “the fundamental distinction between the reward for taking a known risk and that for assuming a risk whose value itself is not known.” A known risk can be treated as an “effective certainty”; that is, we are certain about the *probability* of the relevant outcomes.

Former Secretary of Defense Donald Rumsfeld famously paraphrased Knight’s reasoning in the run-up to the 2003 Iraq War. Asked whether various terrorist groups really had ties to Iraq, Rumsfeld said that there are “known knowns,” “known unknowns,” and then “unknown unknowns.” Asked to return to the terrorism question with the help of those labels, he said, “I’m not going to say which it is.”

But getting back to the scholarly study of finance: in 1976, Roger W. Klein and Vijay S. Bawa wrote of “The effect of estimation risk on optimal portfolio choice.” They argued that “traditional analysis neglects estimation risk by treating the estimated parameters as if they were the true parameters to determine the optimal choice under uncertainty.” Klein and Bawa found that for normally distributed returns and invariant priors, treating the estimated parameters as the true parameters works well enough. Once those two conditions are relaxed, it’s Katie, bar the door!
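Estimation risk of the Klein–Bawa sort is easy to see in a simulation. The snippet below (my own illustration, not from their paper) draws short samples of normal returns, treats the sample mean as if it were the true mean, and measures the resulting forecast error. The predictive variance exceeds the plug-in variance by the standard factor (1 + 1/T), which is small for long samples but material for short ones.

```python
import numpy as np

rng = np.random.default_rng(0)
T, sigma = 12, 0.04      # a short sample, monthly-return scale
trials = 20000

errs = []
for _ in range(trials):
    sample = rng.normal(0.01, sigma, T)  # true mean is 1% per month
    mu_hat = sample.mean()               # estimated parameter treated as truth
    next_ret = rng.normal(0.01, sigma)   # the return we actually face
    errs.append(next_ret - mu_hat)

# Forecast-error variance exceeds the assumed (plug-in) variance
# by roughly the factor 1 + 1/T = 1 + 1/12 ≈ 1.083.
print(np.var(errs) / sigma**2)
```

With T = 12 the investor who ignores estimation risk understates forecast variance by about 8%; the gap shrinks as T grows, which is consistent with Klein and Bawa’s finding that the plug-in approach “works well enough” only under favorable conditions.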

**An Asset Exclusion Threshold**

Polkovnichenko and Wang say that they are working from a theoretical framework developed by Lorenzo Garlappi and two colleagues in 2007, in a paper for *The Review of Financial Studies*. Garlappi *et al.* created a model for investors with “multiple priors and aversion to ambiguity.” They also laid out some conditions for further such modeling: the guidelines involved should be “flexible enough to allow for different degrees of uncertainty about expected returns … and also about the return-generating model.”

Building on that work, Polkovnichenko and Wang propose using an “asset exclusion threshold.”

They are also indebted to the work of Michael Gibbons, Stephen Ross, and Jay Shanken, who in 1989 offered a test for the efficiency of a portfolio. GRS were implicitly working for the sort of ambiguity-neutral investor mentioned above. Polkovnichenko and Wang make the point that their exclusion threshold is not redundant relative to the GRS statistic. They go a little further, saying that the GRS test “may indicate statistically significant alpha at conventional levels and nevertheless a sufficient ambiguity about mean return.”
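The GRS statistic being benchmarked here has a standard closed form: a quadratic form in the estimated alphas, scaled down by a quadratic form in the factor means. A minimal sketch (variable names are mine; in GRS the residual covariance is the maximum-likelihood estimate):

```python
import numpy as np

def grs_stat(alpha_hat, resid_cov, factor_means, factor_cov, T):
    """GRS test statistic for the joint null that all N alphas are zero,
    given K factors and T time-series observations. Under the null (and
    normal errors) it is distributed F(N, T - N - K)."""
    N = len(alpha_hat)
    K = len(factor_means)
    # alpha' Sigma^{-1} alpha: joint "size" of the pricing errors
    quad_alpha = alpha_hat @ np.linalg.solve(resid_cov, alpha_hat)
    # mu_f' Omega^{-1} mu_f: squared Sharpe ratio of the factors
    quad_factor = factor_means @ np.linalg.solve(factor_cov, factor_means)
    return (T - N - K) / N * quad_alpha / (1 + quad_factor)

# One test asset, one factor, ten years of monthly data:
stat = grs_stat(np.array([0.01]), np.array([[0.0004]]),
                np.array([0.005]), np.array([[0.002]]), T=120)
print(stat)
```

Note what the statistic does not contain: any penalty for ambiguity about the mean returns themselves. That is the gap Polkovnichenko and Wang’s exclusion threshold is meant to fill.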

Polkovnichenko and Wang say that an alpha-seeking investor relying on the GRS test will be required to “hold assets with ambiguous return distribution and for sufficiently high ambiguity this tradeoff is not attractive.”

**Other Work by Polkovnichenko**

Polkovnichenko has been writing about ambiguity aversion since at least 2010, when he wrote a paper concluding that even for pure factor portfolios, “pricing errors may be non-zero and would not be eliminated by ambiguity-averse arbitrageurs.” Even if one assumes that agents can learn about the return-generating process over time, he added, the pricing errors induced by volatility would remain “quantitatively significant.”