Parsimony Is Beautiful

For the last couple of days I have been playing with a model for forecasting equity returns. It came to my attention through David Merkel at The Aleph Blog, who discovered it on Philosophical Economics.

I am not going to delve into the guts of the model in this post. You will be better served reading the linked posts above. However, the gist is that the aggregate level of equity ownership as a percentage of financial assets is an excellent predictor of subsequent 10-year annualized returns (r-squared is around 85%). Below is the most recent update from David Merkel’s website:

[Chart: David Merkel's latest update of the Philosophical Economics forecast model]
Source: The Aleph Blog

What this is telling us is that given the aggregate allocation to equities across all investors' portfolios, we should expect the S&P 500 to return an average of 4.58% per year over the next 10 years, including dividends. It is more or less impossible to overstate the significance of this result for anyone responsible for asset allocation decisions: individual investors planning for retirement; financial advisors serving individual clients; institutional investment committees responsible for pensions and endowments.
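For perspective, 4.58% compounded over a decade turns a dollar into only about $1.56 (1.0458^10 ≈ 1.56), dividends included. That is a sobering number for anyone whose plan assumes historical averages.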

I was so taken with this model that, after reading all about it, I rebuilt it myself in Excel. What amazes me is its simplicity. You do have to do some mild data wrangling to calculate the aggregate investor allocation to equities (that series is not pictured on the chart above). The model itself, however, is derived from a simple ordinary least squares regression. It is easy to build, and the underlying reasoning is fairly intuitive.
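If you would rather skip Excel, the whole thing fits in a few lines of Python. Here is a minimal sketch, assuming you have already wrangled the two quarterly series yourself (the allocation figure is, roughly, the market value of all equities divided by that same value plus the total liabilities of real economic borrowers; see the linked posts for the exact construction). The numbers below are placeholders, not the real flow-of-funds data.

    # A minimal sketch of the regression, assuming the two quarterly series
    # have already been assembled. Values are made-up placeholders.
    import numpy as np

    # Aggregate investor allocation to equities (fraction of financial assets)
    equity_allocation = np.array([0.30, 0.35, 0.42, 0.38, 0.45, 0.50])

    # Subsequent 10-year annualized S&P 500 total return for each observation
    fwd_10y_return = np.array([0.11, 0.09, 0.06, 0.08, 0.05, 0.03])

    # Ordinary least squares fit: return = intercept + slope * allocation
    slope, intercept = np.polyfit(equity_allocation, fwd_10y_return, 1)

    # R-squared of the fit
    predicted = intercept + slope * equity_allocation
    ss_res = ((fwd_10y_return - predicted) ** 2).sum()
    ss_tot = ((fwd_10y_return - fwd_10y_return.mean()) ** 2).sum()
    r_squared = 1 - ss_res / ss_tot

    # The forecast is just the fitted line evaluated at today's allocation
    print(f"slope {slope:.3f}, intercept {intercept:.3f}, R^2 {r_squared:.2f}")
    print(f"Forecast: {intercept + slope * equity_allocation[-1]:.2%}")

In the real data the slope is steeply negative: the more of their portfolios investors collectively hold in stocks, the worse the next decade tends to be.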

We live in the age of the algo. Investment firms are literally turning money over to black box AI systems. There seems to be something of a fetish for complexity.

I do not care much for complexity. I certainly do not care for complexity paired with opacity. It is possible I simply resent not being smart enough to follow it all. Whatever. When it comes to financial modeling, parsimony is a beautiful thing. You are far better off correctly capturing a few critical variables than trying to nail down every last detail. The world is too complex, dynamic and random a system to be modeled with a high level of precision. And the penalties for getting a key variable wrong can be very, very high.

Recall the pre-2008 risk models built on the assumption that home prices only ever went up. For a variety of reasons (including greed), everyone had it spectacularly wrong. From the linked article by Felix Salmon on the Gaussian copula formula behind those models:

 “Everyone was pinning their hopes on house prices continuing to rise,” says Kai Gilkes of the credit research firm CreditSights, who spent 10 years working at ratings agencies. “When they stopped rising, pretty much everyone was caught on the wrong side, because the sensitivity to house prices was huge. And there was just no getting around it. Why didn’t rating agencies build in some cushion for this sensitivity to a house-price-depreciation scenario? Because if they had, they would have never rated a single mortgage-backed CDO.”

Bankers should have noted that very small changes in their underlying assumptions could result in very large changes in the correlation number. They also should have noticed that the results they were seeing were much less volatile than they should have been—which implied that the risk was being moved elsewhere. Where had the risk gone?

They didn’t know, or didn’t ask. One reason was that the outputs came from “black box” computer models and were hard to subject to a commonsense smell test. Another was that the quants, who should have been more aware of the copula’s weaknesses, weren’t the ones making the big asset-allocation decisions. Their managers, who made the actual calls, lacked the math skills to understand what the models were doing or how they worked. They could, however, understand something as simple as a single correlation number. That was the problem.