Edge Over Odds

Kelly Criterion

This is the Kelly Criterion, a formula well known to both gamblers and investors. It solves for the optimal bet size, relative to your bankroll, as a function of the probability of winning a bet and the payoff for the win. The underlying intuition is often summarized as “edge over odds”: the greater your edge, the more you should bet. For example, any time you have a 100% probability of winning, Kelly says you should bet your entire bankroll, regardless of the payoff.
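The formula itself fits in a few lines. Here is a sketch in Python (the variable names are my own), using win probability p and net odds b:

```python
def kelly_fraction(p, b):
    """Optimal fraction of bankroll to bet, per Kelly.

    p -- probability of winning the bet
    b -- net odds received on a win (b = 1 for an even-money bet)
    """
    q = 1 - p  # probability of losing
    return (b * p - q) / b  # literally "edge over odds"

# An even-money bet you win 55% of the time:
# edge = 0.55 - 0.45 = 0.10, odds = 1, so bet 10% of bankroll.
print(kelly_fraction(0.55, 1))  # ~0.10

# A sure thing (p = 1) says bet everything, whatever the payoff:
print(kelly_fraction(1.0, 5))  # 1.0
```

A negative result means the bet has negative edge and shouldn’t be taken at all.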

In investing, we often throw the word “edge” around in imprecise ways.

“What’s your edge?”

We hear this question all the time. In many cases we answer with things like “no career risk,” “longer time horizon,” and “better behavior.” These may well be competitive advantages, but they are not themselves edge. At least not in the Kelly sense. In Kelly terms, you have edge to the extent a bet’s expected value is positive: to the extent the payoff more than compensates for the probability of losing. When we talk about edge, we’re talking about positive expected value.

In that sense, there is “Kelly edge” to be found in many investment strategies. Buy and hold equity investing, value investing, momentum investing. These are just a few strategies where we have pretty robust evidence supporting positive expected values over time and thus at least some degree of Kelly edge. All these strategies are potentially worth a bet.

What is considerably more controversial are the sources of the Kelly edge associated with these strategies. Because when we think about investing, as opposed to gambling, there’s a distinction to be made between the Kelly edge associated with fair odds and the Kelly edge associated with mispriced odds.

A casino is a controlled environment with set payoffs that favor the house (“house edge”). Beating the dealer is an uphill battle. Simply being able to make bets with positive expected values, however small, is the holy grail for every casino gambler.

Taking the odds in craps is a “good bet” because it offers fair odds (there is no “house edge”). The payoff fairly compensates you for the risk of the bet. Whether you ultimately win or lose the bet is the outcome of a random process.
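To see concretely what “fair odds” means, consider the odds bet behind a point of 4 (a sketch using the standard craps probabilities):

```python
from fractions import Fraction

# Odds bet behind a point of 4: you win if a 4 is rolled before a 7.
# Ways to roll a 4 with two dice: 3; ways to roll a 7: 6.
p_win = Fraction(3, 3 + 6)   # = 1/3
payout = Fraction(2, 1)      # the point of 4 pays true odds of 2:1

# Expected value per $1 wagered:
ev = p_win * payout - (1 - p_win) * 1
print(ev)  # 0 -> the payoff exactly compensates for the risk
```

An expected value of exactly zero is the signature of a fair odds bet: over many trials you neither make nor lose money, on average.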

In blackjack, the basic strategy is a “good bet” because it gets you very close to fair odds, although technically the house still has a slight edge.

Card counting in blackjack, on the other hand, is a strategy for identifying and exploiting mispriced odds.

Now, it’s more complicated in investing because investing isn’t a casino game. Financial markets aren’t controlled environments where payoffs are static and specified in advance. Investing is a game where it’s possible to make all kinds of different bets with positive expected values. Moreover, the implied odds and payoffs change on a daily basis. Here the distinction between fair odds and mispriced odds is more subtle and nuanced.

I’ve deliberately avoided using the words “alpha” and “beta” up until now. But here’s how I’m thinking about these terms in this context.*

A beta process earns returns simply as compensation for bearing risk in a fair odds bet. Buying and holding a global market cap weighted equity portfolio is an obvious example of this. But plenty of active discretionary strategies make money this way, too.

An alpha process earns returns by explicitly identifying and exploiting mispriced odds. Alpha processes are about exploiting Information (in the formal sense). I provide a specific example of this further below.

A somewhat inscrutable definition of Information that I quite like is the one from Gregory Bateson: “a difference that makes a difference.”

Do value investors make money over time by making “good bets” with positive expected values, or by identifying mispriced odds? In more academic terms: is the value premium simply fair compensation for bearing a specific type of risk? I’m not going to pretend I have the definitive answer to that question. It’s a debate that’s raged for a long time. I’m certainly not going to resolve it on this blog.

My personal view on the subject is that “it depends.” Event-driven value investments such as value + catalyst trades and special situations investments are more like alpha bets. The defining characteristic is the presence of a hard catalyst, usually a corporate action. Hard catalysts, after all, are the very definition of Information. In the absence of a hard catalyst, however, buying a “quality company on sale” (something I am fond of personally) is more of a fair odds bet. A value investor may well think in terms of mispriced odds. But in the absence of Information, it’s an implicit mispricing of odds.

Incidentally, this is also where investor behavior comes back into the picture. Investor behavior is quite plausibly responsible for the historical success of systematic Value and Momentum strategies, and their persistence over time.

At the risk of overreaching, I’ll suggest that most of us investors earn a greater proportion of our returns from making good bets, as compensation for bearing risk, than by exploiting Information.

Does this mean we should give up on security selection and put all our money into SPY? No. Not in the least. Simply determining whether a bet is fair, and worth taking, is plenty difficult. Furthermore, I do believe it’s possible to outperform SPY or any other capitalization weighted index by betting smart over time, particularly if you’re able to play in less liquid market niches with less carrying capacity, and thus less appeal to large pools of capital with the resources to throw at Information gathering and processing.

How do you know if you’re exploiting Information versus simply placing good bets? Here is my simple test:

Ask: Do I know for sure? If so, how?

For example, I met a muni bond trader who bought a micro issue at 60 even though there was public record of it having been called at 100. This is perhaps the single best example of an alpha trade I have ever seen: the kind of thing that should literally never happen in a reasonably efficient market. It’s the Platonic Ideal of an alpha trade. It’s a real-life version of the old joke about the academic economist who won’t pick up the $20 bill lying on the ground in front of him, because if the bill were real, someone would have picked it up already.

Did the trader know for sure? Yes.

How? The issue being called was a matter of public record.

It doesn’t get much cleaner than this. And of course, examples like this one are rare.

By contrast, I had a stock in my PA go up 3x over the last two years. I was of course happy about this. It is fun to make money. I modeled the business out based on publicly available information and felt the market price reflected neither the quality of the business nor its growth prospects.

Did I know for sure? No. Not even close. I simply felt I was being fairly compensated for bearing the risk associated with the bet. But I had no Information in the formal sense–no way of knowing the odds were mispriced.

Fortunately, the P&L doesn’t distinguish between money earned by exploiting Information and money earned as compensation for bearing risk. This discussion is academic. But I sure find it fun to think about. And I do believe it’s beneficial to try to reason clearly about how and why you’re making money over time.

Why?

So you can diagnose problems and potentially make adjustments if a strategy ever stops working.

 

* A somewhat similar formulation of the difference between beta and alpha bets:

[Embedded tweet]

The Confidence Meter

If you are anywhere near as strange a person as me, you spend a lot of time thinking. And not only thinking, but thinking about thinking (whether any good ideas actually come out of this process is a discussion for another time). Over the years I’ve become more and more interested in epistemology. Is there such a thing as Truth with a capital T? If so, how would we know if we found it? How can we better manage the Bayesian updating of our priors?

Personally, as far as epistemology is concerned, I come down on the side of fallibilism. Whether fallibilism is, or should be, applicable to moral questions lies beyond the scope of what I write about here. But when it comes to our beliefs about economics, geopolitics, and investing, I think fallibilism is an eminently sensible position.

Now, it’s important to distinguish between fallibilism and nihilism.

Nihilism is extreme skepticism about the existence of Truth.

Fallibilism is extreme skepticism about the provability of Truth and about the methods we use to arrive at it. (See also: The Problem of Induction; The Trouble With Truth)

For a fallibilist, acquiring knowledge is a relentless, grinding process of formulating conjectures, challenging them, adjusting them, discarding them, and so on. It never ends. By definition, it can’t end. So if you’re bought in on fallibilism, you need to seek out people and ideas that challenge your priors.

This is not fun. In fact, we as humans pretty much evolved to do the opposite of this. For most of our history, if you were the oddball in the tribe you risked being exiled from the group to make your way in a harsh and unforgiving world, where you would likely die miserable and alone (albeit rather quickly), without the consolation of having passed along your genetic material.

The Confidence Meter is a little mental trick I use to mitigate my evolved distaste for challenging my priors, as well as my evolved distaste for being wrong. It’s something I think about when I want to judge how tightly to grip an idea (such as an investment idea). It also helps interrupt emotional thought patterns around certain ideas. For fans of Kahneman, I use it to interrupt System 1 and activate System 2.

The Confidence Meter (A Stylized Example)

[Image: The Confidence Meter]

At 0% confidence, I shouldn’t even be making a conjecture. At 0% confidence, I should just be gathering information, and soaking it all in without an agenda. (Not always easy)

At a toss-up, I could make a conjecture and support it with evidence, but I wouldn’t put anything at risk. 

At 75% confidence and greater, a willingness to bet money on the outcome implies a sound grasp of the theory underlying my idea, as well as the empirical evidence. It also implies I have a sound grasp of the arguments and empirical data challenging my position.

Using this framework, how many of your beliefs do you suppose merit a >=75% confidence level?

For me, it’s a very small number. To the extent I’m >=75% confident of anything I believe, it’s elementary, almost tautological stuff, like how you make money investing.

The empirical data around the impact of the minimum wage on unemployment? Meh.

The relationship between marginal tax rates and economic growth? Meh.

That doesn’t mean I don’t have beliefs about these things. I’m just leaving an allowance for additional dimensions of nuance and complexity, particularly when we’re inclined to look at relationships in linear, univariate terms for political reasons. The world is a more complex place than that admittedly powerful little regression model, Y = a + bX + e, would lead us to believe.
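A toy illustration of the point, with hypothetical data: fit a straight line Y = a + bX by least squares to a relationship that is actually nonlinear, and the fit looks plausible while systematically missing the curvature.

```python
# Least-squares fit of a univariate line (pure Python, hypothetical data).
xs = [1, 2, 3, 4, 5, 6]
ys = [x ** 2 for x in xs]  # the true relationship is quadratic

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x

# The residuals are U-shaped: positive at the ends, negative in the
# middle. The line "explains" the data while missing its structure.
residuals = [y - (a + b * x) for x, y in zip(xs, ys)]
print(round(b, 2))  # slope = 7.0
```

The fitted slope is a perfectly respectable number; only the residual pattern reveals that the model form itself is wrong.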

The Confidence Meter helps me keep that in perspective.

A Man’s Got To Know His Limitations

Lieutenant Briggs: You just killed three police officers, Harry. And the only reason why I’m not gonna kill you, is because I’m gonna prosecute you–with your own system. It’ll be my word against yours. Who’s gonna believe you? You’re a killer, Harry. A maniac.

[Briggs starts to drive away when the car blows up]

Harry Callahan: A man’s got to know his limitations.

That’s the end of the 1973 movie Magnum Force. Briggs, a vigilante cop, has an opportunity to shoot Harry Callahan dead. But Briggs is an egomaniac convinced of his own moral superiority. He opts for a clever revenge scheme instead. He flees in a car, which, unbeknownst to him, has a live bomb in the backseat.

A man’s got to know his limitations.

I was moved to reflect on this after a recent due diligence trip. In investing, outcomes are inherently uncertain. We never have perfect information when making investment decisions. We’re lucky to have “good” information in most cases. Even then, unexpected events have a nasty habit of blowing up our plans.

Investing is an exercise in probabilistic thinking. Outcomes do not necessarily reflect the quality of decisions (good investment decisions often result in bad outcomes and vice versa).

When investing, you’ve got to know your limitations.

If you’re a typical outside minority passive investor, you have minimal control over investment outcomes. Basically, the only variable you can control is your own behavior.

You need to be realistic about what you can and can’t know, and the kinds of things you should and shouldn’t expect to get right. The more you can expect to get a decision right, the more time you should spend on that area. Don’t waste time on things that aren’t knowable, or things subject to lots of random noise.

 

Things I Will Never Get Right

Forecasts for prices and other variables. (This would seem obvious, but it never ceases to amaze me how much time and energy are wasted here)

Timing, in the sense of trying to buy the bottom tick or sell the top tick.

Macroeconomics.

Intrinsic value. (It’s not observable)

 

Things I Should Get Right More Than Half The Time

The general quality of a given management team.

The general quality of a given business.

Industry dynamics, competitive forces and secular trends.

The potential range of outcomes for a given investment.

 

Things I Should Get Right Most Of The Time

The handful of key variables that will make or break an investment.

How I’ll know if I’m wrong about any of the key variables that will make or break an investment.

Assessing the major “go-to-zero” risks: leverage, liquidity, concentration, technological obsolescence and fraud.

When to average down, when to hold and when to sell out of an investment, not based on price action but on the key drivers and risks.

How To Win

One of my favorite bits of life advice comes from Mark Cuban. A couple of years ago, Business Insider wrote a brief piece on his view that surprisingly few people are willing to put in the effort to gain a knowledge advantage in their fields. I remember it to this day, because it is consistent with much of my experience in the working world.

“I remember going into customer meetings or talking to people in the industry and tossing out tidbits about software or hardware,” he writes. “Features that worked, bugs in the software. All things I had read. I expected the ongoing response of: ‘Oh yeah, I read that too in such-and-such.’ That’s not what happened. They hadn’t read it then, and they still haven’t started reading it.”

Cuban says that despite a minimal background in computers, he was outperforming so-called experts in the field simply because he put time and effort in. It’s why, he writes, he still allocates a chunk of his day to reading whatever he can to gain an edge in the businesses he’s involved in.

“Most people won’t put in the time to get a knowledge advantage,” he writes.

Another quote that sticks in my head along these lines (I don’t recall exactly where I heard this, and it’s possible I’ve fused a couple different quotes together):

“If you aren’t passionate about what you’re doing, don’t ever make the mistake of competing with someone who is. You will lose every time.”

Mental Model: Time Dilation

Time dilation is a consequence of relativity in physics. Put simply: observers moving at different speeds perceive time differently. The most extreme example would be a clock moving at nearly the speed of light relative to a stationary observer. To the stationary observer, that moving clock would appear to have all but stopped.

Crazy, right?

Take a moment and allow that to sink in. It is pretty wild to think about. It took me two passes through A Brief History of Time before I felt like I had a decent handle on the concept.

Below is a fun animation to help illustrate.

[Animation: nonsymmetric velocity time dilation]
Time dilation illustrated. The motion inside each “clock” represents the perceived passage of time from the blue clock’s perspective. Source: Wikipedia
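The size of the effect is governed by the Lorentz factor. A minimal sketch in Python:

```python
import math

def dilated_time(proper_time, v_fraction_of_c):
    """Elapsed time a stationary observer measures for a clock
    moving at speed v, expressed as a fraction of the speed of light."""
    gamma = 1 / math.sqrt(1 - v_fraction_of_c ** 2)  # Lorentz factor
    return proper_time * gamma

# One second on a clock moving at ~87% of c stretches to roughly
# two seconds for the stationary observer (gamma ~ 2 at v = 0.866c).
print(dilated_time(1.0, 0.866))  # ~2.0
```

As v approaches c, gamma blows up toward infinity, which is the precise sense in which the moving clock appears to stop.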

In financial markets, we experience our own form of time dilation. A high frequency trader experiences time differently than Warren Buffett. Here the relative velocity we are concerned with isn’t physical motion, but rather the velocity of activity in a portfolio of financial assets. The more you trade, the slower time moves for you.

Below are two charts for AAPL, one for the last trading day and one for a trailing 1-year period. All the price action depicted in the first chart is imperceptible on the second.

[Chart: AAPL, last trading day. Source: Google]
[Chart: AAPL, trailing 1-year period. Source: Google]

This idea of time dilation presents significant challenges for investment organizations.

The first challenge is the friction it creates between stated investment horizons and performance measurement. It is tough to manage money to a three or five-year horizon if your investors are measuring performance monthly. With that kind of mismatch, stuff that wouldn’t seem significant over three or five years starts to look significant (in a way, it is). And so you are tempted to “do stuff” to keep your investors happy.

While you should be focused on the “signal” from annual reports, you get bogged down in the “noise” of quarterly fluctuations in earnings. Or, god forbid, daily and weekly newsflow. Unless you are a proper trader, nothing good ever comes of focusing attention on daily and weekly newsflow.

Also, since people pay money for investment management, it is easy for them to mistake large volumes of activity for productive activity. Yet, just because you “do a lot of stuff” doesn’t mean your results will be any better. In fact, plenty of empirical evidence argues the opposite. The more “stuff” you do, the worse your results will be.

Here is an example of market time dilation from Professor Sanjay Bakshi. Years ago he executed a “very cool” arbitrage trade involving Bosch stock for a triple digit IRR. That’s an objectively fantastic result. And yet, Prof Bakshi readily admits to missing the forest for the trees. Why? He was operating on a different time horizon.

[Image: Prof. Bakshi’s account of the Bosch trade. Source: Professor Sanjay Bakshi]

You can judge whether someone truly has a long term mindset based on how he feels about being “taken out” of a stock in a merger or buyout. Long-term thinkers don’t like to be taken out of their positions! They would rather compound capital at 20% annually for 30 years than have a 100% return in a single year.

They all explain this the same way: there aren’t that many businesses capable of compounding value at 20% annually for 30 years. When you find one you should own it in size. Selling it should be physically painful. Only phony long term thinkers are happy about getting taken out of good businesses.
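The arithmetic behind that preference is stark:

```python
# 20% compounded annually for 30 years vs. a one-time double:
long_term = 1.20 ** 30
one_shot = 2.0
print(round(long_term))  # ~237x starting capital, vs. 2x for the double
```

A single 100% return barely registers against three decades of 20% compounding, which is why genuine long-term thinkers hate being cashed out early.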

Now, that’s certainly not the only way to make money in the markets. The trick isn’t so much finding “the best way” to make money as it is genuinely aligning your process with your time horizon. This is not a trivial thing. Particularly if you manage other people’s money.

Clear Thinking: Why Many Great Investors Are Also Great Writers

Morgan Housel observes:

Communicating and allocating capital are miles apart. Completely different topics. But look around, and the two are constantly paired.

Warren Buffett is a great writer. Paul Graham is a great writer. John Bogle is a great writer. Howard Marks is a great writer. Josh Brown is a great writer. Brent Beshore, Seth Klarman, Joel Greenblatt, Ben Graham – the list goes on.

None of this is a coincidence. These aren’t just great investors who happen to be good communicators; their ability to communicate well helped make them great investors.

The post focuses on the importance of clear and effective client communication. However, I would argue that great investors often make for great writers because great investing and great writing both require clarity of thought. Parsimony is a beautiful thing.

In writing, the ultimate example of this is Hemingway’s “six word novel.” Here it is in its entirety:

For sale: baby shoes, never worn

Those six words evoke an entire lifetime of experiences and emotions. You could write a thousand-page novel about the death of a child and still struggle to match the impact of the six-word Hemingway story. Why? The six-word story contains only the most important parts. Your thousand-page novel is going to contain all kinds of extraneous crap. And that extraneous crap dilutes the emotional impact of the most important parts.

Likewise in investing, you need clarity of thought to identify the key drivers of a situation. Most great investments hinge on two or three key drivers. Everything else is noise. You get lost in the noise at your peril. In Margin of Safety, Seth Klarman tells the story of an analyst who (badly) missed the forest for the trees on Clorox:

David Dreman recounts, “the story of an analyst so knowledgeable about Clorox that ‘he could recite bleach shares by brand in every small town in the Southwest and tell you the production levels of Clorox’s line number 2, plant number 3.’ But somehow, when the company began to develop massive problems, he missed the signs… .” The stock fell from a high of 53 to 11.

The analyst knew a lot of crap about Clorox. But he wasn’t thinking clearly. All that extraneous crap he knew about Clorox blinded him to what really mattered. So knowing all that crap about Clorox was irrelevant to the outcome.

Elsewhere, Charlie Munger has commented on how important clarity of thought is at Berkshire Hathaway:

Our ideas are so simple. People keep asking us for mysteries, but all we have are the most elementary ideas.

I have this pet theory that you should be able to go through a portfolio and summarize every single investment thesis (as a falsifiable statement, of course!) in just a couple of lines. If there are things you can’t do that for, you probably shouldn’t own them.

“The Last, Best Order”

There is a neat post on Redfin’s blog. It is the CEO’s “IPO diary.” Read the whole thing for a fascinating look at the process from the inside. A couple of sections really resonated with me:

Masters of the Universe
In other ways too, the roadshow had the feel of a bygone era. For example, almost everyone on the buy-side we met that week was a man: in one group lunch, all 24 of the portfolio managers in attendance were male. We may have met more portfolio managers who were Israeli special forces veterans than women. I asked our bankers how long it would take the first one to kill me with his bare hands.

Almost all of them took notes on tablets. Some of them tried to look up as you spoke, but with their eyes focused on nothing except the numbers in their head. They weren’t just capturing the highlights of a meeting; it was a nearly verbatim transcription of what we’d said, so we could be held accountable for it later. Information in every form is the currency of Wall Street, and drops of it never seem to fall on the floor.

Chess with Bobby Fischer
Most of the fund managers were exotically, obviously smart. Except for one person who fell asleep in a meeting, none of the fund managers we met was anyone I’d want to be on the other side of a trade with, buying what he sold, or selling what he bought. This is what I realized I had been doing my whole life as an E-Trade stock-picker; it had been like challenging Bobby Fischer to a game of chess. I spent a long time that first week trying to judge whether it made sense to have so many brilliant people decide where our society allocates capital, as opposed to making cars or software or hospitals.

The Last Ideology-Free Realm
What impressed me most about these people was their willingness to change their minds. No one in our society seems to change her mind about Donald Trump or Hillary Clinton based on a new fact, but a fund manager on the wrong side of a bad trade has to change her mind in a moment or lose her job. This is why investing is the world’s last ideology-free realm. It would be easier to accept the premise that our society can’t agree on one version of the truth anymore, about whether temperatures are rising or the economy is growing, except that’s exactly what happens when every public company reports its earnings every quarter. You can believe what you want to believe, but not with a million dollars on the line.

And, perhaps most interesting to me:

The Last, Best Order
One of my favorite meetings was with a Scottish fund manager in San Francisco. His firm was known for buying only a few stocks, and holding each for as long as a decade. In a hotel meeting room with enough prospectuses, pitchbooks, cookies, fruit, cheeses, crackers and popcorn for 30 people, he came in alone. And rather than rattling through twenty or thirty questions about our metrics, he just asked me why I ran the company.

I found myself talking about my older brother, who had died just before I became Redfin’s CEO, and the feeling I had then that my life so far hadn’t made the world a much better place. He asked me about whether Redfin’s sense of mission would survive our public offering. He didn’t write much down. His order was one of the last, and the best, to come in.

My aspiration as an investor is to be that “last, best order.” There’s a reason I classified this post under Finance, Investing, Learning and Values. There is some real insight here.

A Mental Model For Politics

Politics is the process by which tribal groups negotiate the distribution of power and resources in a society. A tribal group may identify strongly with a particular philosophy. However, conflating politics with philosophy (“values”) is a muddy way of viewing the underlying drivers of political conflict. It took me a long time (about 15 years) to realize this.

I now realize there are two dimensions to tribal politics:

  • The competitive dimension (the political process itself). This is essentially a strategy game. Because it is a strategy game, effective political operatives (Lee Atwater) needn’t actually concern themselves with “correct” policy or philosophy. Their role is simply to “win”–that is, secure power and resources for the tribal group they serve.
  • The philosophical dimension (the inner lives of tribal group members). This is the process by which tribal group members construct their identities, bond with one another and develop a shared vision of how power and resources should be allocated across society. Tribal group members may or may not develop their identities through a rigorous process of reasoning from first principles. That depends largely on the mental complexity of each individual.

Thus, mental complexity is a key input to this model:

  • Socialized minds simply adopt an identity consistent with their surroundings.
  • Self-authoring minds go a step further and build their own identity.
  • Self-transforming minds go a step even further and work to develop a meta-understanding of tribal group dynamics, in order to integrate that into a more “complete” mental model of how the world works.

To make that more concrete:

  • The socialized mind says: “Everyone in my town and my workplace supports Political Party A. Political Party A is the place to be. I am A.”
  • The self-authoring mind says: “I identify with aspects of Political Party A, but also Political Party B. Furthermore, I believe in X, Y and Z based on my education, life experience and vision for what I want to achieve in life. I combine these inputs to formulate my own identity, views and goals. I am a C.”
  • The self-transforming mind says: “I am a C, but it is possible (in fact likely) that my views as a C are incomplete, inaccurate, or oversimplified. I must leave room to modify these views over time. Over the years I will likely transform from a C to a D, to an E, and so on as I constantly integrate new learnings into my mental model of the world.”

Morgan Housel provides a good example of how a self-transforming mind views tribal politics:

Everyone belongs to a tribe and underestimates how influential that tribe is on their thinking. There is little correlation between climate change denial and scientific literacy. But there is a strong correlation between climate change denial and political affiliation. That’s an extreme example, but everyone has views persuaded by identity over pure analysis. There’s four parts to this:

  • Tribes are everywhere: Countries, states, parties, companies, industries, departments, investment styles, economic philosophies, religions, families, schools, majors, credentials, Twitter communities.
  • People are drawn to tribes because there’s comfort in knowing others understand your background and goals.
  • Tribes reduce the ability to challenge ideas or diversify your views because no one wants to lose support of the tribe.
  • Tribes are as self-interested as people, encouraging ideas and narratives that promote their survival. But they’re exponentially more influential than any single person. So tribes are very effective at promoting views that aren’t analytical or rational, and people loyal to their tribes are very poor at realizing it.

Psychologist Geoffrey Cohen once showed Democratic voters supported Republican proposals when they were attributed to fellow Democrats more than they supported Democratic proposals attributed to Republicans (and the opposite for Republican voters). This kind of stuff happens everywhere, in every field, if you look for it.

It should be obvious by now why most political debates among individuals go nowhere:

  • Most individuals debating politics do not clearly distinguish between the strategy game dimension and the philosophical dimension. This is important. A genuine philosophical debate is a complex and mentally taxing endeavor requiring concentration and a high level of openness. The goal of a philosophical debate is to pursue Truth, not to “win.” What we call political “debate” is almost always strategy and tactics masquerading as philosophy.
  • Socialized minds are simply not capable of engaging in genuine philosophical debate. They do not possess the requisite level of mental complexity (though they certainly can develop it). You will never change a socialized mind with evidence and argument. You need look no further than the comments section of a website for evidence of this.
  • Self-authoring minds are more than capable of engaging in lively philosophical debate. However, they tend to grasp their mental models rather tightly (after all, these are intelligent, highly motivated individuals we are talking about). This can be perceived as either stubborn, obnoxious or even courageous, depending on the observer. As mentioned above, genuine philosophical debate is exhausting. Most people do not want to put the energy into engaging in genuine philosophical debate. Don’t waste your time trying to debate political philosophy with people who aren’t interested in working hard at the process!
  • From a distance, self-transforming minds can seem devoid of logical consistency. This is especially true from the perspective of a socialized mind, for which maintaining an identity consistent with the tribal group is of paramount importance. This is because self-transforming minds are explicitly aware not only of the need to develop mental models, but of the need to adjust them. This can make it difficult for them to relate to the more static worldviews of self-authoring and socialized minds (and vice versa). Others may view a self-transforming mind as an untrustworthy waffler.

To conclude, here is a little checklist for thinking about politics:

  • Be explicit about the dimension you are analyzing:
    • Strategic Dimension?
    • Philosophical Dimension?
  • When analyzing the strategic dimension, do not conflate “values” with strategy and tactics. This will allow you to reason more clearly.
  • If you are analyzing the philosophical dimension, account for mental complexity.
  • If you are involved in a political discussion, try to understand the level of mental complexity the other party (or parties) is operating at. This will lead to richer, more fulfilling conversations. At the other end of the spectrum, it will clue you in on when it might make sense to simply disengage.
  • If you want to do deep, truthful, political analysis, you need to integrate both the strategic and philosophical dimensions.

First Principles

I have three great posts I would like to share. All deal with the subject of mental models and reasoning from first principles:

“Speculation In A Truth Chamber” (Philosophical Economics)

First, the idea behind the exercise is not for you to literally walk through it, in full detail, every time you are confronted with a question that you want to think more truthfully about. Rather, the idea is simply for you to use it to get a sense of what it feels like to be genuinely truthful about something, to genuinely try to describe something correctly, as it is, without pretenses or ulterior motivations. If you know what that state of mind feels like, if you are familiar with it, then you will be able to stop and return yourself to it as needed in your trading and investment deliberations and in your everyday life, without having to actually step through the details of the scenario.

Second, the exercise is intended to be used in situations where you actually want to get yourself to think more truthfully about a topic and where you would stand to actually benefit from doing so. Crucially, that situation does not describe all situations in life, or even most situations. There are many situations in life where extreme truthfulness can be counterproductive, creating unnecessary problems both for you and for others.

Third, all that the exercise can tell you is what you believe the most likely answer to a question is, along with your level of confidence in that belief. It cannot tell you whether you are actually correct in having that belief. You might believe that the answer to a question is X when it’s in fact Y; you might have a lot of confidence in your belief when you should only have a little. Your understanding of the subject matter could be mistaken. You could lack the needed familiarity or experience with it to have a reliable opinion. Your judgment could be distorted by cognitive biases. These are always possibilities, and the exercise cannot protect you from them. However, what it can do is make you more careful and humble as a thinker, more open to looking inward and assessing the strength and reliability of your evidence and your reasoning processes, more willing to update your priors in the face of new information–all of which will increase your odds of getting things right.

“Thinking From First Principles” (Safal Niveshak)

Practicing first principles thinking is not as easy as explaining it. As Musk said, it’s mentally taxing. Thinking from first principles is devilishly hard to practice.

The first part, i.e., deconstruction, demands asking intelligent questions and having a deep understanding of the fundamental principles from various fields. And that’s why building a latticework of mental models is so important. These mental models are the fundamental principles, the big ideas, from different fields of human knowledge.

The best way to achieve wisdom, said Charlie Munger, “is to learn the big ideas that underlie reality.”

The second step is the recombination of the pieces which were identified in the first step. This is again a skill which can only be developed by deliberate practice. Any idea as an isolated piece of information doesn’t stay in the human brain for long. To be sticky, it needs to be connected with other ideas. A latticework is essentially a grid of ideas connected to each other. These connections are the glue which holds those ideas together.

If the new knowledge doesn’t find any connection or relevance to the old knowledge, it will soon be forgotten. New ideas can’t just be “stored” like files in a cabinet. They have to connect with what’s already there like pieces of a jigsaw puzzle. As you become better in finding connections between seemingly disconnected ideas, your recombination-muscle becomes stronger. Someone with a strong recombination-muscle will find it easy to practice the second step of first principles thinking.

“Playing Socratic Solitaire” (Fundoo Professor)

I am going to play a game based on ideas derived from Socrates and Charlie Munger. We will start with “Socratic Questioning” which is described as

disciplined questioning that can be used to pursue thought in many directions and for many purposes, including: to explore complex ideas, to get to the truth of things, to open up issues and problems, to uncover assumptions, to analyze concepts, to distinguish what we know from what we don’t know, to follow out logical implications of thought, or to control the discussion.

Socratic Questioning relates to “Socratic Method,” which is:

a form of inquiry and debate between individuals with opposing viewpoints based on asking and answering questions to stimulate critical thinking and to illuminate ideas.

Charlie Munger started using these two Socratic devices in a variation he called Socratic Solitaire, because, instead of a dialogue with someone else, his method involves solitary play.

Munger used to display Socratic Solitaire at shareholder meetings of Wesco Corporation. He would start by asking a series of questions. Then he would answer them himself. Back and forth. Question and Answer. He would do this for a while. And he would enthral the audience by displaying the breadth and the depth of his multidisciplinary mind.

I am going to play this game. Or at least, I am going to try. Watch me play.

If you are seriously interested in finance and investing, there is nothing more important to your development than accumulating a robust inventory of mental models. What mental models and reasoning from first principles allow you to do is see through to the true drivers of a situation, where it is often easy to get bogged down in unimportant details.

For example, if you are viewing a business through the lens of discounted cash flow valuation, here are the drivers of intrinsic value:

  • Operating margin
  • Asset turnover
  • Maintenance capex needs
  • Growth capex/reinvestment opportunities
  • Discount rate

Operating margin and asset turnover are quantitative measures reflecting the strength of your competitive advantage and, perhaps more importantly, the source of your competitive advantage.

Maintenance capex tells you how much cash the business needs to spend to keep running.

Growth capex/reinvestment opportunities give you an idea of growth potential over time.

When you combine operating margin and asset turnover (technically NOPAT/Sales x Sales/Invested Capital, which simplifies to NOPAT/Invested Capital) you get a figure for return on capital. Return on capital is an excellent quantitative proxy for management’s skill in allocating capital. Thus, it is also an excellent proxy for quality of management (though it is certainly not a be-all, end-all measure). When you combine return on capital with reinvestment opportunities you get an idea of what sustainable growth in operating income might look like.
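The decomposition above can be sketched in a few lines. All input figures below are hypothetical illustration values, not data from any actual company:

```python
# Sketch of the return-on-capital decomposition described above.
# All inputs are hypothetical illustration values.

nopat = 120.0             # net operating profit after tax
sales = 1_000.0           # revenue
invested_capital = 600.0  # operating capital employed

nopat_margin = nopat / sales               # operating profitability
asset_turnover = sales / invested_capital  # capital efficiency
roic = nopat_margin * asset_turnover       # equals NOPAT / invested capital

# The two ratios multiply out to the same thing as NOPAT over capital:
assert abs(roic - nopat / invested_capital) < 1e-12

# Combining return on capital with reinvestment gives a rough
# sustainable growth rate for operating income:
reinvestment_rate = 0.40                       # share of NOPAT reinvested
sustainable_growth = roic * reinvestment_rate  # growth ~ ROIC x reinvestment

print(f"ROIC: {roic:.1%}, implied growth: {sustainable_growth:.1%}")
```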

There are lots of ways to handle the discount rate. Over time I have come to prefer an implied IRR method, where you simply “solve for” the discount rate that sets your cash flow model equal to the current stock price. You can then compare this to your hurdle rate for new investments.
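One way to implement the implied IRR approach is a simple bisection solve. This is a sketch, not the author's actual method; the cash flows, terminal value, and price below are hypothetical:

```python
# Solve for the discount rate that sets the present value of a stream
# of expected cash flows equal to the current price -- the "implied IRR."
# All figures are hypothetical illustration values.

def present_value(cash_flows, rate):
    """Discount cash flows (received in years 1, 2, ...) at a flat rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

def implied_irr(cash_flows, price, lo=0.0, hi=1.0, tol=1e-8):
    """Bisect for the rate where PV(cash_flows) == price.
    Assumes PV is decreasing in the rate and the root lies in [lo, hi]."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if present_value(cash_flows, mid) > price:
            lo = mid  # PV too high -> need a higher discount rate
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

# Five years of free cash flow, with a terminal value added in year 5:
cfs = [10, 11, 12, 13, 14 + 200]
irr = implied_irr(cfs, price=150.0)
print(f"Implied IRR: {irr:.2%}")  # compare against your hurdle rate
```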

DCF is one of the most important models in finance because it works with any investment that produces (or is expected to produce) cash flows in the future. At the end of the day, even an exercise as complicated as valuing a mortgage-backed security is just a variation on discounting cash flows.

All great mental models have two defining characteristics:

(1) They are robust. That is, they are applicable to a broad set of opportunities.

(2) They are parsimonious. That is, they demonstrate “economy of explanation.”

In my humble opinion, the most important mental models you need to understand in investing are:

  • Time Value of Money/Discounted Cash Flows
  • Capital Structure
  • Expected Value/Probabilistic Thinking
  • Optionality
  • Convexity/Linear Vs. Non-Linear Rates Of Change (e.g. compounding)
  • Investor Psychology

Conceptually that is really what it all boils down to (though the permutations are endless; for instance, a mortgage can be viewed as the combination of discounted cash flows and a call option). Now, you could of course write dozens of volumes on the nuances and applications of each of these models. That is part of what makes them robust. They are adaptable to an almost inconceivable range of circumstances.
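The convexity model in the list above (linear vs. non-linear rates of change) is easy to see numerically. A minimal sketch with hypothetical return figures:

```python
# Illustration of the convexity of compounding: a constant step up in
# annual return produces an accelerating step up in terminal wealth.
# Figures are hypothetical.

def grow(principal, annual_return, years):
    """Terminal value of principal compounded at annual_return for years."""
    return principal * (1 + annual_return) ** years

for r in (0.06, 0.08, 0.10):
    print(f"{r:.0%} for 30 years: {grow(100, r, 30):,.0f}")

# The gap between the 10% and 8% outcomes after 30 years is far larger
# than the gap between the 8% and 6% outcomes, even though each step
# is the same 2 percentage points.
```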

This is something I don’t think most candidates in the CFA Program think about (they are too preoccupied with passing the exams!). The curriculum is designed to comprehensively introduce you to the most robust mental models in finance, and then to test your ability to apply those models to specific cases. Level I tests whether you understand the basic “tools” you have available to you; Level II tests more advanced uses of those tools (in exhausting detail, one might add); Level III tests your ability to apply all your tools to “real world” situations.

Maybe some day I will write up how I think about these super important mental models. In the meantime, enjoy the above linked posts!
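As a small taste of the expected value/probabilistic thinking model, here is a sketch of a simple win/lose bet alongside the Kelly “edge over odds” rule. The probabilities and payoffs are hypothetical:

```python
# Expected value of a simple win/lose bet, plus the Kelly fraction
# f* = p - q/b ("edge over odds"): win probability minus loss
# probability divided by the net odds received. Inputs are hypothetical.

def expected_value(p_win, payoff, loss=1.0):
    """Expected value per unit staked for a simple win/lose bet."""
    return p_win * payoff - (1 - p_win) * loss

def kelly_fraction(p_win, b):
    """Optimal fraction of bankroll to bet at net odds of b-to-1."""
    return p_win - (1 - p_win) / b

# A bet that pays 1-to-1 with a 55% win probability:
ev = expected_value(0.55, 1.0)  # positive EV: the bet is worth making
f = kelly_fraction(0.55, 1.0)   # fraction of bankroll to stake
print(f"EV: {ev:+.2f} per unit staked, Kelly fraction: {f:.0%}")

# Sanity check: at a 100% win probability, Kelly says bet everything.
assert kelly_fraction(1.0, 1.0) == 1.0
```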

Warfighting

Warfighting (Source: United States Marine Corps via Verdad Capital Management)

I want to share a reading recommendation with you all: Warfighting. This is not a manual but rather a philosophy for decision making under uncertainty.

h/t to Verdad Capital for the link in this excellent post (actually a newsletter piece), which opines:

I also learned at Quantico that complex linear planning fails in warfare because the profession involves “the shock of two hostile bodies in collision, not the action of a living power upon an inanimate mass,” as Clausewitz reminds us. In the military-industrial exuberance of the post–Cold War decades, we invested heavily in exotic platforms such as drones, cyber capabilities, and billion-dollar strike fighters. Our low-tech but moderately street-savvy opponents in this millennium decided to fight us precisely where and how these assets were near useless. With few exceptions, the most useful equipment for this environment came from the Vietnam era and the most enduring lessons from the time of the Spartans.

Financial markets, made up of people competing for an edge, are precisely the type of environment designed to bedevil static planning. The financial environment is one where valuation multiples persistently mean revert, where income statement growth is not persistent or predictable, where GDP growth does not correlate with equity returns, where market share and moats do not lead to competitive advantage or price return.

So what are we to do in such an environment where outcomes are determined not so much by the very little we can foresee but by what might unexpectedly happen relative to the expectations embedded in the price at which the security is bought? How would we affirmatively strategize and operate differently as investors if all of our most cherished and marketed crystal balls for forecasting price returns are shattered? How should we operate amidst the chaos without operating chaotically?

In Afghanistan, I found that the most consequential assets on our side were the most robust and persistent throughout the history of warfare. An asymmetric but intelligent adversary had refused to engage us on any terms but those where war devolved to a competition of wills, where discipline, resolve, adaptability, and habituated combat-arms tactics dictated the victor, not drones or robot pack mules. Our own persistent behavioral biases were our worst enemy.

This is precisely why I am so interested in things like tail hedging. And lest you be tempted to write this off as interdisciplinary silliness, consider for a moment that life itself can be viewed as an extended exercise in decision making under uncertainty.

UPDATE: After reading and reflecting on this, it seems clear to me that it essentially provides a mental model for what war is and how it is conducted.

At first glance, war seems a simple clash of interests. On closer examination, it reveals its complexity and takes shape as one of the most demanding and trying of human endeavors. War is an extreme test of will. Friction, uncertainty, fluidity, disorder, and danger are its essential features. War displays broad patterns that can be represented as probabilities, yet it remains fundamentally unpredictable. Each episode is the unique product of myriad moral, mental, and physical forces.

Individual causes and their effects can rarely be isolated. Minor actions and random incidents can have disproportionately large—even decisive—effects. While dependent on the laws of science and the intuition and creativity of art, war takes its fundamental character from the dynamic of human interaction.

Also this bit:

War is an extension of both policy and politics with the addition of military force. Policy and politics are related but not synonymous, and it is important to understand war in both contexts. Politics refers to the distribution of power through dynamic interaction, both cooperative and competitive, while policy refers to the conscious objectives established within the political process. The policy aims that are the motive for any group in war should also be the foremost determinants of its conduct. The single most important thought to understand about our theory is that war must serve policy.