Perspective on skill, luck, and Druck.
"What the human being is best at doing is interpreting all new information so that their prior conclusions remain intact."
A knotty, counter-intuitive truth that investors must accept is that portfolio outcomes don't necessarily provide useful information. Investing is probabilistic — the future only tells us what actually happened. It doesn't tell us what could have happened, or even what was most likely to have happened.
After former President Donald Trump upset Hillary Clinton in 2016, many folks railed against the polling experts who said Hillary would win. Famed statistician Nate Silver got hit particularly hard for getting it wrong.
But this was his website on election day:
Like investing, political outcomes only tell us what happened, which isn't necessarily worth very much. Was Nate wrong?
During Obama's 2012 re-election, Nate became a household name because he was right...to say the least. He nailed everything.
Was Nate a genius in 2012? And then an idiot four years later?
Possibly neither. We can't extrapolate small data sets so frivolously.
Investors should consider that even after the 2016 election, we can't know whether Nate's 29% forecast of Trump's chances of winning was a bad one. It could have been fairly accurate: a guess at an outcome with a wide distribution of possibilities, where a lower-probability outcome simply happened to occur. With the same odds again, he might look right ten times in a row. You can't call a single forecast wrong when it gave someone nearly a 1-in-3 chance of winning (his forecast reflected that a Trump win was entirely plausible).
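A quick simulation makes the point concrete. This is a minimal sketch, assuming (purely for illustration) that the 29% forecast was exactly calibrated, and then "re-running" the election many times:

```python
import random

random.seed(42)

# Illustrative assumption: treat the 29% Trump forecast as exactly right,
# then simulate the election many times to see how often the underdog wins.
TRUMP_WIN_PROB = 0.29
TRIALS = 100_000

trump_wins = sum(random.random() < TRUMP_WIN_PROB for _ in range(TRIALS))
print(f"Underdog wins in {trump_wins / TRIALS:.1%} of simulated elections")
```

A perfectly calibrated 29% forecast still sees the "surprise" happen roughly three times in ten; observing one such election tells you almost nothing about the forecast's quality.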
Similarly, he may not be able to prove he was skillful in 2012, either. If most of the state races were obvious already (e.g. Alabama, California, etc...), he may have only made a few challenging calls (e.g. Florida, Ohio, etc...) and gotten lucky.
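To put a rough number on the luck scenario: the count of toss-up states here is hypothetical, but if only a handful of races were genuine coin flips, pure chance nails all of them more often than you might think.

```python
# Hypothetical: suppose 5 swing states were true 50/50 coin flips.
# The chance of calling all 5 correctly by pure luck:
p_all_correct = 0.5 ** 5
print(f"{p_all_correct:.1%}")  # 3.1%
```

About 1 in 32. Across the many pundits and models making calls each cycle, someone will hit that jackpot with no skill at all.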
The future has to turn out some way, and so we shouldn't be surprised when it does.
Proving skillfulness (or unskillfulness) is a tall order, and it requires careful statistics. There is only one way to determine whether Nate, or anyone else making forecasts — political, investment-related, or otherwise — is skilled (or unskilled).
We need more forecasts. More data points. We need to know if there is any statistical reliability, and it typically takes a lot more data than people think. Ken French, esteemed Professor of Finance at Dartmouth College, shared this with me last year:
If you beat a benchmark by 5%, with 20% volatility (which is ~ average for stocks), it would take 64 years to show statistically significant alpha.
Alpha meaning outperformance of the relevant investment benchmark. Fancy language for this: if you can sustain that incredible performance for 64+ years, it's fair to say you're skillful. Anything less, and we can't distinguish luck from skill.
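Ken French's 64-year figure falls out of a simple t-statistic argument. A back-of-the-envelope sketch, assuming constant annual alpha, constant volatility, independent years, and a rough significance threshold of t = 2 (approximately 95% confidence):

```python
# Back-of-the-envelope version of the 64-year claim. Assumes annual
# outperformance and its volatility are constant and years are independent.
alpha = 5.0    # annual outperformance vs. benchmark, in percent
sigma = 20.0   # annual volatility of that outperformance, in percent
t_crit = 2.0   # rough threshold for 95% statistical significance

# After n years, the t-stat is roughly: t = alpha * sqrt(n) / sigma.
# Solving for the n that reaches the threshold:
years_needed = (t_crit * sigma / alpha) ** 2
print(years_needed)  # 64.0
```

Note how punishing the math is: halve the alpha to 2.5% and the required sample quadruples to 256 years.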
Yet we live in a world where people day-trade with leverage, leading to massive swings and large dispersions from benchmarks (often within hours!). I can't necessarily prove that a day trader on a losing streak is unskillful, nor that one on a winning streak is skillful. Given that most day traders eventually lose...I may not be able to prove any single struggling day trader is unskillful (rather than unlucky), but on average it's not a great hobby.
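The streak intuition is easy to simulate. A sketch where every trade is a fair coin flip, so skill is zero by construction; the trader and trade counts are arbitrary illustration numbers:

```python
import random

random.seed(0)

# Zero skill by construction: every "trade" is a fair coin flip.
TRADERS = 10_000
TRADES = 250  # roughly one trade per market day for a year

def longest_streak(trades):
    # Length of the longest run of consecutive wins.
    best = run = 0
    for win in trades:
        run = run + 1 if win else 0
        best = max(best, run)
    return best

streaks = [
    longest_streak(random.random() < 0.5 for _ in range(TRADES))
    for _ in range(TRADERS)
]
lucky = sum(s >= 10 for s in streaks)
print(f"{lucky} of {TRADERS} zero-skill traders had a 10+ win streak")
```

A meaningful slice of pure-noise traders will string together ten straight wins in a single year and feel like geniuses, which is exactly why a hot streak proves so little.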
"If you don't get this elementary, but mildly unnatural, mathematics of elementary probability into your repertoire, then you go through a long life like a one-legged man in an ass-kicking contest." (Charlie Munger)
In prepping for a recent event with legendary investor Stan Druckenmiller, he shared with me that in his 43-year track record, he's never had a negative year. I've been thinking about it ever since: he's an active manager who shuns diversification...which typically leads to extreme ups and downs...and yet he's never been nicked on an annual basis.
He even describes his approach as "waiting for fat pitches" — well, Babe Ruth was famous for home runs (714), sure, but also strikeouts (1,330).
It's difficult to swing hard and never miss.
I don't know for sure what careful statistics would say about Stan's performance. Another issue with statistical-significance claims is that we typically use a threshold of 95%+ certainty, which means there is still a 5% region where we have to acknowledge an indefinite answer. It's a vastly underappreciated puzzle. Even when performance shows statistically significant alpha, there's still a small chance that it's luck.
I'll say this — he's endlessly impressive. He was also gracious, kind, and lit up at the opportunity to share about the joy he gets from giving away his wealth to causes including education, poverty, climate change, and health. That was enough for me.
But during the event he kept talking about how hard investing was for him recently. He's had a tough few years relative to other investors. He didn't blow smoke up his own ass. No ego-trips on past performance from years ago. He was just a humble active manager.
My guess is that Stan is, or certainly has been, skillful. But there he was reminding the audience that even with his impressive historical track record, it doesn't mean anything right now. Not so different from Nate Silver forecasting elections, or any probabilistic pursuit. It's hard to delineate luck from skill, and even if you can, it's unclear what to do about it.
Moreover, Stan's track record itself is only one data point amongst thousands of active managers. Most investors can't do what he did, especially over long time periods. SPIVA is the well-regarded annual report on active manager performance. Here is performance against the common benchmark (the S&P 500) over the last 15 years.
93.4% underperform. Ouch...but that's with 15 years of data.
If you only looked at the most recent one year of data, it's less obvious, almost a coin toss:
In Philip Tetlock's book, Superforecasting, he posits that across all the various future events one might forecast (geopolitical, economic, etc...), some people really are (statistically significantly) good at it. For investing, it's reasonable that Stan may be one of them.
But this pattern from one-year to 15-years is critical. It isn't until we have more and more trials (in this case, years of active manager performance) that we start to reduce the element of luck. Almost anyone can look good in one year...hard to do it for many.
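That one-year-versus-15-year pattern is easy to reproduce in a toy model. This sketch assumes managers with zero true skill, an average annual drag of 1.5% (a stand-in for fees and costs), and 4% tracking error around the benchmark; all of these numbers are illustrative, not SPIVA's inputs:

```python
import random

random.seed(1)

# Illustrative assumptions (not SPIVA's data): zero true skill, a -1.5%
# average annual drag (fees/costs), 4% tracking error, independent years.
MANAGERS = 10_000

def ahead_of_benchmark(years):
    # Cumulative relative return over the horizon; positive = "outperformed".
    return sum(random.gauss(-1.5, 4.0) for _ in range(years)) > 0

results = {}
for horizon in (1, 15):
    winners = sum(ahead_of_benchmark(horizon) for _ in range(MANAGERS))
    results[horizon] = winners / MANAGERS
    print(f"{horizon:>2}-year horizon: {results[horizon]:.1%} ahead of benchmark")
```

Over one year the noise dominates and the result is close to a coin toss; over 15 years the small drag compounds while the luck averages out, and the vast majority of these pure-noise managers fall behind, echoing the SPIVA pattern.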
Tetlock describes comments from Richard Danzig, former Secretary of the Navy, on how great forecasters consider years deep into the future, where it's harder to guess outcomes:
"If you have to plan for a future beyond the forecasting horizon, plan for surprise. That means, as Danzig advises, planning for adaptability and resilience."
Even if someone had investment skill in the past, I don't know if they'll have it in the future. So I am always planning beyond the forecasting horizon Danzig describes, because to me, uncertainty starts now. I don't want my client outcomes to rely on stock pickers needing to be skillful — because what happens if they're not?
Assuming no one knows the future has a stellar track record.
It's the inverse of the SPIVA numbers. Essentially, just own the entire benchmark (index) itself, without trying to guess which stocks will win or lose. That provides adaptability and resilience: whichever stocks do well in the future, the strategy is designed to own at least a small percentage of each of them.
The one-year SPIVA data alone doesn't tell us what the 15-year SPIVA does...which shows how little weight any extrapolation from the one-year data deserves.
I'm aware of the noise, and I prefer not to listen.