The Trouble With Truth

A friend and I have been having a running conversation about the “post-truth era” and bias in the media. This post is an attempt to pull the ideas from those conversations together into a kind of mental model.

Essentially there are three issues in play here: epistemic uncertainty (the problem of induction), cognitive biases and incentive systems.

The first two help explain why intelligent and well-meaning people can come to inhabit echo chambers even when they genuinely try to reason objectively. Incentive systems then reinforce the sub-optimal behavior of well-meaning people and assist opportunists and charlatans in spreading outright falsehoods.

This post is not meant to address opportunists and charlatans as their motives are things like wealth, power and ideological fanaticism. For these individuals the truth is simply an inconvenient speed bump along the road to power. Rather, I am interested in how the uncertainty inherent in scientific reasoning leaves openings for multiple truths and seemingly contradictory bodies of evidence.

Epistemic Uncertainty

How can we know a thing is true in the first place? That seems like a good place to start.

Broadly speaking, we can reason deductively or inductively. Deductive reasoning is a process that arrives at a “logically certain conclusion.” Deductive reasoning is what you do in math class. The beauty of mathematics, which I did not properly appreciate as a kid, is that it is about the only discipline where you can know with certainty when you are right. Your conclusion must follow inevitably from your premises. It cannot be otherwise.

Inductive reasoning, on the other hand, takes specific observations and then infers general rules. Importantly, the scientific method is a form of inductive reasoning. All of the social sciences, including economics, utilize inductive reasoning. Inductive reasoning is subject to the so-called “problem of induction.” Namely: inferences are not “logically certain.”

The classic example involves swans. For a long time people believed all swans were white. This was an inference based on the fact that in every recorded observation of a swan, the swan had been white. Critically, this did not prove all swans were white. In order to prove all swans were white, you would have to observe every swan in existence, every swan that had ever existed, and every swan that ever would exist. That is of course impossible. And in fact, as soon as someone discovered a black swan (in Australia in 1697), the inference that all swans were white was immediately proven false.

That’s not to say the inference was a bad one. It was perfectly reasonable given the available data. You see how this presents issues for science, and any other truth-seeking endeavors. Even “good science” is often wrong.

If you have spent any time reading scientific research, you are familiar with the way hypotheses are formulated and tested. It is never a question of “true or false.” It is a question of “whether the null hypothesis can be rejected at such-and-such a confidence level.”

What confidence level is appropriate? The gold standard is 95% (a.k.a. roughly two standard deviations from the mean, assuming normally distributed results). However, there is a healthy debate over where that threshold should be set.
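To make that a little more concrete, here is a minimal sketch in Python of what “rejecting the null at the 95% level” looks like in practice. The data and group names are invented purely for illustration; this is not drawn from any particular study.

```python
# Minimal sketch: a two-sample t-test at the 95% confidence level.
# The "control" and "treatment" samples are simulated, purely illustrative data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=10.0, scale=2.0, size=50)
treatment = rng.normal(loc=11.0, scale=2.0, size=50)

# Null hypothesis: both groups share the same mean.
t_stat, p_value = stats.ttest_ind(control, treatment)

alpha = 0.05  # the conventional 5% significance level (95% confidence)
if p_value < alpha:
    print(f"p = {p_value:.4f}: reject the null at the 95% level")
else:
    print(f"p = {p_value:.4f}: cannot reject the null at the 95% level")
```

Note that even a “significant” result here is a probabilistic statement, not a proof. Roughly one in twenty such tests run on pure noise would clear the bar by chance.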

The probabilistic nature of inductive reasoning creates epistemic uncertainty. In that sense, there is no post-truth era. There has never really been an era of truth, either. Science has never really given us truth. It’s given us inferences, some of which have withstood many years of repeated testing (evolution, Newton’s laws, etc.), and to which we’ve assigned extremely high levels of confidence. In other words: we are pretty damn sure some things are true. Sure enough we can do things like send satellites out of our solar system. But it’s still not logical certainty.

In other areas, science has given us inferences where confidence levels are much lower, or where there is significant debate over whether the inference is of any significance at all. Many scientific studies don’t replicate.
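As a toy illustration of why that happens, here is a small simulation (my own sketch, with made-up parameters) of an underpowered research literature: a weak true effect, small samples, and many independent studies.

```python
# Toy simulation: a weak true effect studied with small samples.
# Parameters are arbitrary, chosen only to illustrate the replication problem.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect, n, alpha, trials = 0.2, 30, 0.05, 10_000

def one_study() -> bool:
    """Run one small two-group study; return True if it comes out 'significant'."""
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(true_effect, 1.0, n)
    return stats.ttest_ind(a, b).pvalue < alpha

originals = np.array([one_study() for _ in range(trials)])
replications = np.array([one_study() for _ in range(trials)])

print(f"original studies finding a 'significant' effect: {originals.sum()}")
print(f"of those, replications that also found it: {(originals & replications).sum()}")
```

With numbers like these, most published “findings” fail to replicate even though the effect is real and everyone is acting in good faith.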

The point of this is not to argue we should junk science or inductive reasoning. It’s to show how even if two parties use scientific reasoning in good faith and with the exact same methodology, they might arrive at different conclusions. How do you resolve the conflict?

To function properly, the scientific method requires friction. Replication of results in particular is critical. However, when we layer on cognitive biases and political and economic incentives, scientific inquiry and other inductive reasoning processes become distorted.

Cognitive Biases

Humans are funny creatures. Our brains evolved to deal with certain specific problems. It was not that long ago that the issues of the day were mainly things like: “can I eat this mushroom without dying?” and “that animal looks like it wants to eat me.”

Evolution did not optimize human brains for analyzing collateralized loan obligations.

I am not going to rehash the literature on cognitive biases here. If you are interested in a deep dive you should read Thinking, Fast and Slow, by Daniel Kahneman. Rather, I want to mention one bias in particular: confirmation bias.

Instead of looking for evidence that their inferences are false, people look for evidence that confirms them. The Wikipedia entry for confirmation bias calls it “a systematic error of inductive reasoning.” There is a saying among quants that if you torture data long enough it will say whatever you want it to. These days we have more data than ever at our fingertips, as well as new and exciting torture methods.
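To show what data torture can look like, here is a hypothetical sketch: regress a series of pure noise “returns” against 100 unrelated random factors and keep whatever clears the 5% bar. The factor count and setup are my own invention, not anything from the conversations behind this post.

```python
# Sketch of "torturing the data": test pure noise against many candidate
# factors and report only the ones that happen to look significant.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
returns = rng.normal(0.0, 1.0, size=250)  # pure noise, no real signal

discoveries = []
for i in range(100):
    factor = rng.normal(0.0, 1.0, size=250)  # also pure noise
    result = stats.linregress(factor, returns)
    if result.pvalue < 0.05:
        discoveries.append((i, result.pvalue))

# At the 5% level, roughly five spurious "relationships" are expected by chance.
print(f"'significant' factors found in pure noise: {len(discoveries)}")
```

Nobody in that loop has to lie. They just have to stop searching once the data says what they wanted it to say.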

Importantly, confirmation bias does not represent a conscious decision to lie or deceive. People who consciously manipulate data to support a hypothesis they know ex ante to be false are opportunists and charlatans. We are not concerned with them here.

People aren’t evil or stupid for exhibiting confirmation bias. Everyone does it. Intelligent people have to be especially careful about confirmation bias, because they will be that much more clever, unconsciously, about rationalizing what they already believe.

You can probably see how combining this with inductive reasoning can be problematic. It creates a situation where everyone has “their” facts. What’s more, most people involved in research and reporting operate within incentive systems that encourage confirmation bias rather than mitigate it.

Incentives

If people tend to seek out information confirming their views, it is only logical that media businesses pander to that tendency. The media business is first and foremost an attention business. Either you have people’s attention or you don’t. If you don’t, the subscribers stop paying and the advertisers don’t want to be on your platform and pretty soon you are out of business. It behooves you to serve up the kinds of stories your readers like reading, and that align with their worldviews.

Likewise, academics face their own pressures to conform with their peers. Academic departments are subject to the same power games and politics as corporate boardrooms. Reputation matters, particularly given the importance of tenure to young faculty. Also, if you are an academic star who has built a 40-year reputation on the back of a particular theory, how much incentive do you have to try to poke holes in it? If you think these dynamics don’t impact behavior, you don’t know very much about human behavior.

Closer to home for this blog, at hedge funds and mutual funds analysts often receive bonuses based on how their ideas perform once they are in a portfolio. But what if you are the analyst covering a weak opportunity set? The right thing to do is throw up your hands and say, “everything I am looking at sucks.” But if you go that route you can look forward to no bonus and possibly being fired. So instead you will sugarcoat the least bad ideas and try to get them into the book.

Putting It All Together

So here we have it, from start to finish:

  • Many forms of “knowing,” including the scientific method, are forms of inductive reasoning. Inductive inferences are subject to uncertainty and potential falsification. This means there is always an opening for doubt or contradictory evidence. We accept certain scientific principles as true, but they are not actually logical certainties. Truth in the sense of logical certainty is not as common as many people think.
  • Due to cognitive biases, especially confirmation bias, people distort the process of scientific inquiry. Rather than seek information that could falsify their existing beliefs (the correct approach), they seek out information that confirms them. People have “their facts,” which they can back up with evidence, which in turn creates multiple, plausible versions of “the truth.”
  • Economically, media companies are incentivized to appeal to people’s cognitive biases. The economics of media incentivize a continuous feedback loop between content producers and consumers. Academics and other researchers are also incentivized to confirm their beliefs due to issues of reputation, professional advancement and compensation.