The Trouble With Truth

A friend and I have been having a running conversation about the “post-truth era” and bias in the media. This post is an attempt to pull the ideas from those conversations together into a kind of mental model.

Essentially there are three issues in play here: epistemic uncertainty (the problem of induction), cognitive biases and incentive systems.

The first two help explain why otherwise intelligent and well-meaning people can come to inhabit echo chambers even when they seek to reason objectively. Incentive systems then reinforce the sub-optimal behavior of well-meaning people and assist opportunists and charlatans in spreading outright falsehoods.

This post is not meant to address opportunists and charlatans as their motives are things like wealth, power and ideological fanaticism. For these individuals the truth is simply an inconvenient speed bump along the road to power. Rather, I am interested in how the uncertainty inherent in scientific reasoning leaves openings for multiple truths and seemingly contradictory bodies of evidence.

Epistemic Uncertainty

How can we know a thing is true in the first place? That seems like a good place to start.

Broadly speaking, we can reason deductively or inductively. Deductive reasoning is a process that arrives at a “logically certain conclusion.” Deductive reasoning is what you do in math class. The beauty of mathematics, which I did not properly appreciate as a kid, is that it is about the only discipline where you can know with certainty when you are right. Your conclusion must follow inevitably from your premises. It cannot be otherwise.

Inductive reasoning, on the other hand, takes specific observations and then infers general rules. Importantly, the scientific method is a form of inductive reasoning. All of the social sciences, including economics, utilize inductive reasoning. Inductive reasoning is subject to the so-called “problem of induction.” Namely: inferences are not “logically certain.”

The classic example involves swans. For a long time people believed all swans were white. This was an inference based on the fact that in every recorded observation of a swan, the swan had been white. Critically, this did not prove all swans were white. In order to prove all swans were white, you would have to observe every swan in existence, every swan that had ever existed, and every swan that ever would exist. That is of course impossible. And in fact, as soon as someone discovered a black swan (in Australia in 1697), the inference that all swans were white was immediately proven false.
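
The asymmetry here is the whole point: no pile of confirming observations ever amounts to proof, while a single counterexample is decisive. A toy sketch in Python (the data are made up, obviously):

```python
# Toy sketch of the confirmation/falsification asymmetry.
# A universal claim ("all swans are white") is never proven by
# confirming observations, but a single counterexample refutes it.

observations = ["white"] * 10_000  # every recorded swan before 1697

consistent = all(color == "white" for color in observations)
print(consistent)  # True -- consistent with the claim, but not proof of it

observations.append("black")  # the Australian black swan, 1697

consistent = all(color == "white" for color in observations)
print(consistent)  # False -- one observation falsifies the inference
```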

That’s not to say the inference was a bad one. It was perfectly reasonable given the available data. You see how this presents issues for science, and any other truth-seeking endeavors. Even “good science” is often wrong.

If you have spent any time reading scientific research, you are familiar with the way hypotheses are formulated and tested. It is never a question of “true or false.” It is a question of “whether the null hypothesis can be rejected at such-and-such a confidence level.”

What confidence level is appropriate? The gold standard is 95% (roughly two standard deviations from the mean, assuming normally distributed results). However, there is a healthy debate over where that threshold should be set.
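
To make the mechanics concrete, here is a minimal sketch of what “rejecting the null” looks like in code. The groups and numbers are invented for illustration:

```python
# Minimal hypothesis-testing sketch with made-up data.
# The output is never "true" or "false" -- only whether the null
# hypothesis (no difference in means) is rejected at a chosen level.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=100.0, scale=15.0, size=50)    # hypothetical control group
treatment = rng.normal(loc=108.0, scale=15.0, size=50)  # hypothetical treatment group

t_stat, p_value = stats.ttest_ind(treatment, control)

alpha = 0.05  # the conventional 95% confidence level
if p_value < alpha:
    print(f"p = {p_value:.4f}: reject the null at the 95% level")
else:
    print(f"p = {p_value:.4f}: fail to reject the null")
```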

The probabilistic nature of inductive reasoning creates epistemic uncertainty. In that sense, there is no post-truth era. There has never really been an era of truth, either. Science has never really given us truth. It’s given us inferences, some of which have withstood many years of repeated testing (evolution, Newton’s laws, etc.), and to which we’ve assigned extremely high levels of confidence. In other words: we are pretty damn sure some things are true. Sure enough that we can do things like send spacecraft out of our solar system. But it’s still not logical certainty.

In other areas, science has given us inferences where confidence levels are much lower, or where there is considerable debate over whether the inference is of any significance at all. Many scientific studies don’t replicate.

The point of this is not to argue we should junk science or inductive reasoning. It’s to show how even if two parties use scientific reasoning in good faith and with the exact same methodology, they might arrive at different conclusions. How do you resolve the conflict?

To function properly, the scientific method requires friction. Replication of results in particular is critical. However, when we layer on cognitive biases and political and economic incentives, scientific inquiry and other inductive reasoning processes become distorted.

Cognitive Biases

Humans are funny creatures. Our brains evolved to deal with certain specific problems. It was not that long ago that the issues of the day were mainly things like: “can I eat this mushroom without dying?” and “that animal looks like it wants to eat me.”

Evolution did not optimize human brains for analyzing collateralized loan obligations.

I am not going to rehash the literature on cognitive biases here. If you are interested in a deep dive you should read Thinking, Fast and Slow, by Daniel Kahneman. Rather, I want to mention one bias in particular: confirmation bias.

Instead of looking for evidence that their inferences are false, people look for evidence that confirms them. The Wikipedia entry on confirmation bias calls it “a systematic error of inductive reasoning.” There is a saying among quants that if you torture data long enough it will say whatever you want it to. These days we have more data than ever at our fingertips, as well as new and exciting torture methods.
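
That torture is easy to demonstrate. Run enough tests on pure noise and, at a 5% significance level, about one in twenty will come back “significant” by construction. A quick sketch:

```python
# Data torture in miniature: many hypotheses, pure noise.
# At a 5% significance level, roughly 5 of every 100 tests on random
# data look "significant" by chance. A motivated researcher only has
# to report those and quietly discard the rest.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_tests = 100
false_positives = 0

for _ in range(n_tests):
    a = rng.normal(size=30)  # noise
    b = rng.normal(size=30)  # unrelated noise
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_tests} 'significant' results from pure noise")
```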

Importantly, confirmation bias does not represent a conscious decision to lie or deceive. People who consciously manipulate data to support a hypothesis they know ex ante to be false are opportunists and charlatans. We are not concerned with them here.

People aren’t evil or stupid for exhibiting confirmation bias. They just do it. Intelligent people have to be especially careful about confirmation bias, because they will be that much more unconsciously clever about it.

You can probably see how combining this with inductive reasoning can be problematic. It creates a situation where everyone has “their” facts. What’s more, most people involved in research and reporting operate within incentive systems that encourage confirmation bias rather than mitigate it.

Incentives

If people tend to seek out information confirming their views, it is only logical that media businesses pander to that tendency. The media business is first and foremost an attention business. Either you have people’s attention or you don’t. If you don’t, the subscribers stop paying and the advertisers don’t want to be on your platform and pretty soon you are out of business. It behooves you to serve up the kinds of stories your readers like reading, and that align with their worldviews.

Likewise, academics face their own pressures to conform with peers. Academic departments are subject to the same power games and politics as corporate boardrooms. Reputation matters, particularly given the importance of tenure to young faculty. Also, if you are an academic star who has built a 40-year reputation on the back of a particular theory, how much incentive do you have to poke holes in it? If you think these dynamics don’t impact behavior, you don’t know very much about human behavior.

Closer to home for this blog, at hedge funds and mutual funds analysts often receive bonuses based on how their ideas perform once they are in a portfolio. But what if you are the analyst covering a weak opportunity set? The right thing to do is throw up your hands and say, “everything I am looking at sucks.” But if you go that route you can look forward to no bonus and possibly being fired. So instead you will sugar coat the least bad ideas and try to get them into the book.

Putting It All Together

So here we have it, from start to finish:

  • Many forms of “knowing,” including the scientific method, are forms of inductive reasoning. Inductive inferences are subject to uncertainty and potential falsification. This means there is always an opening for doubt or contradictory evidence. We accept certain scientific principles as true, but they are not actually logical certainties. Truth in the sense of logical certainty is not as common as many people think.
  • Due to cognitive biases, especially confirmation bias, people distort the process of scientific inquiry. Rather than seek information that could falsify their existing beliefs (the correct approach), they seek out information that confirms them. People have “their facts,” which they can back up with evidence, which in turn creates multiple, plausible versions of “the truth.”
  • Economically, media companies are incentivized to appeal to peoples’ cognitive biases. The economics of media incentivize a continuous feedback loop between content producers and consumers. Academics and other researchers are also incentivized to confirm their beliefs due to issues of reputation, professional advancement and compensation.

Betting Dark Side

In craps the best bet on the table (other than Odds) is Don’t Pass. The house edge is just a teensy bit narrower there than on the Pass Line. But no one really bets that way. And when people do, they are quiet about it, because they are betting for everyone else at the table to lose. That’s not the way you endear yourself to a bunch of degenerates at the casino. Betting Don’t Pass is also called betting “dark side.”
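
You don’t have to take anyone’s word on those edges; they fall straight out of the dice probabilities. A quick check under standard rules (Don’t Pass pushes on a come-out 12):

```python
# House edge on Pass vs. Don't Pass, computed exactly from dice odds.
from fractions import Fraction

# Ways to roll each total with two dice (out of 36).
ways = {2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 7: 6, 8: 5, 9: 4, 10: 3, 11: 2, 12: 1}
p = {total: Fraction(n, 36) for total, n in ways.items()}

points = [4, 5, 6, 8, 9, 10]
# Probability the point repeats before a 7 (only those rolls matter).
make_point = {pt: Fraction(ways[pt], ways[pt] + ways[7]) for pt in points}

# Pass line: win on come-out 7/11, lose on 2/3/12, else win if the point is made.
pass_win = p[7] + p[11] + sum(p[pt] * make_point[pt] for pt in points)
pass_ev = pass_win - (1 - pass_win)

# Don't Pass: win on 2/3, push on 12, lose on 7/11, else win if the shooter sevens out.
dont_win = p[2] + p[3] + sum(p[pt] * (1 - make_point[pt]) for pt in points)
dont_lose = p[7] + p[11] + sum(p[pt] * make_point[pt] for pt in points)
dont_ev = dont_win - dont_lose  # the come-out 12 is a push: no money moves

print(f"Pass line edge:  {float(-pass_ev):.4%}")   # ~1.41%
print(f"Don't Pass edge: {float(-dont_ev):.4%}")   # ~1.36%
```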

Personally, I have no interest in betting dark side in craps. The edge is too small to be worth enduring surly drunks shooting you sideways glances all night. But when it comes to investing I am plenty interested in opportunities to bet dark side.

In fact, sometimes I play a little mental game with myself called: What’s A Seemingly Obvious Trend Or Theme I Can Get On The Other Side Of?

For example, right now everyone in the US is whining about how there are no cheap stocks. You know where stocks are cheap?

Russia.

In Russia you’ve got stuff on single digit earnings multiples paying 6% dividend yields. And it’s not even distressed stuff for the most part. Research Affiliates has a phenomenal little asset allocation tool you can use for free. See those two red dots on the upper right in the double-digit return zone? Those are Russian and Turkish equities. (In case you are wondering, US large cap equity plots at about 40 bps of annualized real return.)

[Chart: Research Affiliates expected returns by asset class, June 2018. Source: Research Affiliates; returns pictured are estimated real returns.]

Yeah. I know. Everyone hates Russia. You can probably rattle off at least five reasons why Russia is an absolute no-go off the top of your head. But I will happily bet dark side on Russian equity. I won’t bet the farm, but I’ll take meaningful exposure. The reason is I am getting paid pretty well to take Russian equity risk.
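
For intuition on why the cheap stuff plots where it does, a crude building-blocks estimate (yield plus growth plus valuation mean reversion) gets you most of the way there. To be clear, this is my own back-of-envelope sketch with illustrative inputs, not Research Affiliates’ actual model:

```python
# Back-of-envelope expected real return: yield + real growth + valuation change.
# Illustrative inputs only -- NOT Research Affiliates' model or data.

def expected_real_return(div_yield, real_growth, current_pe, fair_pe, horizon=10):
    """Building-blocks estimate: income + growth + annualized multiple reversion."""
    valuation_drift = (fair_pe / current_pe) ** (1 / horizon) - 1
    return div_yield + real_growth + valuation_drift

# Hypothetical numbers in the spirit of the chart above:
cheap = expected_real_return(div_yield=0.06, real_growth=0.015,
                             current_pe=7, fair_pe=10)   # single digit multiple, fat yield
rich = expected_real_return(div_yield=0.018, real_growth=0.015,
                            current_pe=24, fair_pe=16)   # expensive large cap

print(f"cheap market: {cheap:.1%} real per year")  # ~11%
print(f"rich market:  {rich:.1%} real per year")   # roughly flat to negative
```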

Risk assets are a pretty crappy deal here in the US. (40 bps real per year over the next decade, remember?) Here everyone’s convinced themselves stocks don’t go down anymore, so they are willing to pay up. I guess someday that will be put to the test. We’ll see.

In the meantime, what other trends can we get on the other side of?

ESG might create opportunities. If you haven’t heard of ESG it stands for Environmental, Social and Governance. Big asset managers have become obsessed with ESG because it’s an opportunity to gather assets from millennials and women at a time when index funds and quants are hoovering up all the flows.

This is literally what the big asset managers tell allocators in presentations now: “millennials and women are going to inherit all the assets and they want to be invested in line with their values. Here are all our ESG products. Also here is marketing collateral to help you have ‘the ESG talk’ with your clients.”

So where do we go from here?

Well, for starters I am thinking a trillion dollars rotates into stuff that screens well on ESG. If this persists long enough and to a significant enough degree, the stuff that doesn’t screen well on ESG is going to get hammered. With any luck it will get kicked out of indices, analysts will drop coverage, and the bid-offer spreads will blow out.

Like Russian equities, the oil companies and the natural gas companies and the miners and the basic chemical companies and the capital intensive heavy manufacturers will trade on single digit earnings multiples with 6% dividend yields. All because they don’t score well on the asset gatherers’ screens.
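
Mechanically, the screen itself is dead simple: a hard score cutoff drops names regardless of what they’re priced at. A toy version (all names and numbers invented):

```python
# ESG exclusion screening in miniature: a hard score cutoff drops names
# regardless of valuation. All names and numbers below are invented.

universe = [
    # (name, esg_score, pe_multiple, dividend_yield)
    ("SoftwareCo",  82, 34, 0.005),
    ("MegaRetail",  74, 28, 0.012),
    ("OilMajor",    31,  8, 0.061),
    ("CoalMiner",   18,  5, 0.072),
    ("ChemicalsCo", 35,  9, 0.055),
]

ESG_CUTOFF = 50  # arbitrary threshold, for illustration

excluded = [s for s in universe if s[1] < ESG_CUTOFF]

# The screened-out bucket is where the single digit multiples and fat
# yields pile up -- the flow-driven "dark side" of the trade.
for name, score, pe, dy in excluded:
    print(f"{name}: ESG {score}, {pe}x earnings, {dy:.1%} yield")
```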

So yeah, I think I’ll bet dark side when it comes to ESG, too.

For the record, I don’t have anything against ESG in principle. I am actually a big fan of an extreme form of ESG, called impact investing, where you allocate capital with low return hurdles (like 0% real) to achieve a specific social objective. Maybe to fund development in a low-income community in your city. Micro-lending is an example of this, and I think it’s a better model than philanthropy in many cases. But that’s a topic for another day.

This post is about how people’s emotional reactions to the securities they own create bargains. Here betting dark side is betting on something kind of icky. “Ick” is an emotional reaction. When people react emotionally to stuff, it has the potential to get mispriced. “Ick” is a feeling that encourages indiscriminate selling.

That’s where the Don’t Pass bet comes back into play. It’s one of the better bets in the casino, and it’s massively underutilized. Why?

Because it makes people feel icky.