“How To Avoid Bad Decisions” is an ongoing series about cognitive biases, or the ways that we can misinterpret information without realizing it. This week’s Volume 4 details Errors of Logic, or ways that reasoning can appear valid even when it doesn’t hold up to logical scrutiny. Click the links below to read previous volumes.

Volume 1 — Errors of Metacognition
Volume 2 — Errors of Groupthink
Volume 3 — Errors of Impression

In Volumes 1 through 3, we discussed the “friction” between different systems in our brains and how this causes certain cognitive biases.

For Volume 4, we’ll discuss some Errors of Logic, a.k.a. situations where reasoning appears logical on its face but doesn’t hold up to logical scrutiny. These are similar to “logical fallacies,” of which there are many—and what’s insidious about this set of biases is that they can ensnare even people who are trying to avoid them.

Both of today’s biases are easier to explain if you picture yourself in Las Vegas.

First: the clustering illusion. Human beings are adept at spotting patterns because pattern-spotting helps us in so many areas of life. Whether we’re hunting for food or navigating a delicate discussion, we’re far likelier to “trust” certain observations when they’re part of a pattern, on the simple logic that “one thing could be a fluke, but I can be surer about many things together.”

Here’s the thing: not every pattern we spot is meaningful or even actually a pattern. The clustering illusion happens whenever we perceive a pattern to be meaningful when it isn’t meaningful—or isn’t even a pattern.
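To see how easily randomness masquerades as a pattern, here’s a quick simulation (in Python, purely for illustration). In 100 fair coin flips, streaks of six or more identical results are routine, even though our brains itch to read them as meaningful:

```python
import random

random.seed(42)  # fixed seed so the "random" flips are repeatable

# Flip a fair coin 100 times.
flips = [random.choice("HT") for _ in range(100)]

# Find the longest run of identical results.
longest, current = 1, 1
for prev, nxt in zip(flips, flips[1:]):
    current = current + 1 if nxt == prev else 1
    longest = max(longest, current)

print("longest streak:", longest)  # pure chance regularly produces long runs
```

If a roulette table showed you that streak, you’d swear the wheel was hot—but it’s just noise behaving exactly like noise.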

A specific version of the clustering illusion is called the gambler’s fallacy. Suppose you’re playing a game of chance where the odds of winning are 1 in 5 (and The House is being historically honest here). The gambler’s fallacy says that you should win once if you play at least five times; in other words, this fallacy creates a “mental cluster” where you play five games and expect to win one of them.

This sounds logical, but it isn’t. In statistical reality, you have 1-in-5 odds of winning each game independently, meaning that your odds don’t improve by playing more games; the odds actually “reset” every single time you play. The cluster is… wait for it… an illusion. In real life, if you play 5 games, you could win anywhere from 0 to 5 times; probability says you’ll win once on average, but that describes the average across many players, not every person within the data set.
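The arithmetic is worth seeing. Treating each game as an independent 1-in-5 chance, a short Python sketch shows that five plays give you only about a 67% chance of at least one win, along with the full spread of outcomes from 0 to 5 wins:

```python
from math import comb

p = 1 / 5  # chance of winning any single game
n = 5      # number of independent plays

# Chance of at least one win is NOT 100%, despite the "mental cluster."
p_at_least_one = 1 - (1 - p) ** n
print(f"at least one win: {p_at_least_one:.1%}")  # → 67.2%

# Full binomial distribution: you can win anywhere from 0 to 5 times.
for k in range(n + 1):
    prob = comb(n, k) * p**k * (1 - p) ** (n - k)
    print(f"{k} wins: {prob:.1%}")
```

Note that “0 wins” happens nearly a third of the time (0.8⁵ ≈ 32.8%)—which is exactly the scenario the gambler’s fallacy tells you is impossible.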

Today’s second Error of Logic is called the outcome bias. To explain this one, let’s think big-picture about your decision to go to Vegas in the first place.

Just picture The Hangover if it’s easier, but imagine that—even if things go sideways in the middle—you’re able to leave Vegas no worse than you arrived. In fact, let’s imagine that you got lucky and you’re leaving with some extra cash despite your gambler’s fallacy. Does that mean you made good decisions?

No. No, it does not. 🤦

Sometimes, all is NOT well just because it ended well. The word “history” should be evidence enough.

Survivorship Bias

One reason we can’t completely avoid mistakes is that everyone is, at all times, working from incomplete information. Sometimes, you simply couldn’t have known better. Happens to everyone.

As Donald Rumsfeld famously discussed, there’s a difference between “known unknowns” and “unknown unknowns.” The former has plenty of visible examples, like what students will learn in school today (they haven’t learned the material yet, but they know what they’ll be learning). The latter, however, is completely invisible; the “unknown unknowns” are everything you don’t see no matter how hard you’re looking for clues, like blind spots you don’t even know you have.

Which brings us to survivorship bias, which happens when you misjudge a situation by focusing only on the visible or “surviving” information. The key clarification is that, while the misjudgment is yours, the original cause of the error is the information itself; per the bias’s name, it’s not your fault if certain information “dies” before reaching your eyes.

Let’s suppose your Uncle Bob (heh) believes that entrepreneurship is easy because he’s only ever heard of entrepreneurs who succeeded. This belief is, of course, silly—but let’s say your Uncle Bob is a good guy, and smart, and he listens. He’s the kind of person who reads the whole newspaper every day. Survivorship bias is his only bias.

Your Uncle Bob has two information problems: (1) the newspaper and (2) the “unknown unknown” of “what to read to fix this belief I don’t know is broken.”

The newspaper is Bob’s Info Problem #1 for a simple, intuitive reason: only newsworthy things get printed in the newspaper. (Unless you live in a small town, which he does not.) He’s not aware of the selection bias built into his daily reading, so he won’t ever read about the mundane struggles, much less uninteresting failures, of the common entrepreneur.
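As a toy illustration (the 10% success rate is made up for the sketch), here’s what Bob’s newspaper filter does to his sample. The failures never reach him, so the success rate he observes is 100% regardless of the true rate:

```python
import random

random.seed(0)

TRUE_SUCCESS_RATE = 0.10  # assumed for illustration: most ventures quietly fail

# Simulate 1,000 entrepreneurs; True = success, False = failure.
ventures = [random.random() < TRUE_SUCCESS_RATE for _ in range(1000)]

# The "newspaper" only prints the newsworthy survivors.
reported = [v for v in ventures if v]

actual_rate = sum(ventures) / len(ventures)
reported_rate = sum(reported) / len(reported)  # always 1.0 by construction

print(f"actual success rate:       {actual_rate:.0%}")
print(f"Bob's apparent rate:       {reported_rate:.0%}")
```

The filter isn’t malicious—it’s just selection. But from inside the sample, Bob has no way to tell he’s only seeing the survivors.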

As for Info Problem #2… if Bob’s as good as we describe, then a couple hours reading the right things would change his belief about entrepreneurship. But the punishing part of survivorship bias is that you don’t even get the chance to learn what you don’t know until you’ve somehow come face-to-face with your ignorance. How else would you know where to look—or to look at all?

For this bias, the closest we get to a solution is this: be gracious in your ignorance, for everyone is ignorant (it’s just a question of degree). You’re gonna have to wash this idea a few times before it’s soft enough to wear, but Socrates put it well: “The only true wisdom is in knowing you know nothing.”

Postscript: the haunting 352-word Kafka story “A Message From the Emperor” illustrates how survivorship bias can be truly, well, Kafka-esque in its nature.

Information Bias and Zero-Risk Bias

If you take the information-seeking impulse and stretch it to its logical extreme, you get information bias—the tendency to believe more information is better even when it’s not helpful for taking action or making decisions.

The root of information bias is the simple belief that “more information is always better than less information.” This belief is sticky because there is some degree of intuitive truth to it; after all, you can always ignore extra data, but it sucks to need more data than you have.

If you had all the time in the world (or you could pause time), then this information-seeking impulse would almost always help you. But there’s the rub: time is more often scarce than plentiful.

It stands to reason that information bias is “dangerous” in proportion to the urgency of the situation. If you have spare time to look around and gather details, you probably should—but if you’re running for your life, thinking wider than your next few steps might slow you down enough to kill you.

Our brains hate running for their lives (prepared though they may be), which brings us to our second bias: zero-risk bias, the tendency to prefer the option with the least risk regardless of other merits.

What’s the opposite of risk? Depending on your semantics, you might have said “safety” or “certainty,” but really, don’t both of those words make you feel warm and fuzzy inside?

A crude way of explaining the problem behind zero-risk bias is that it’s a cognitive addiction. Picture the last time you were in a scary and risky situation, and then picture what it was like to exit the danger and re-enter the comforts of safety. That’s how it feels to take back-to-back shots of dopamine and serotonin—a combination that has hooked and sunk many a clear thought, much as the speedball has addicted (and killed) many a big-name talent.

Usually, those two neurotransmitters (dopamine and serotonin) won’t both flood your brain at the same time. The former is excitement, rush, reward; the latter is contentment, calm, peace. Both of them “feel good,” but in radically different ways. So on the rare occasion when we can get both rushes at once—feeling super-hyped and super-safe at the same time—the feeling will leave a lasting impression upon our perception of the world.

Slippery Slope Bias

Most of you will be familiar with the concept of a slippery slope, where if A happens, B is virtually certain to happen, then C and D and E and so on—and the final letter on the slope is something very bad. For example, the “gateway drug” argument is a slippery slope; it presumes that, once a person has tried something (usually pot), they will more-or-less inevitably try and then become addicted to worse things.

Why do slippery slopes appeal to our brains so much? Put simply, all human beings are two things: products of survival, and storytellers. All people intuitively understand (chronological) stories because each person is experiencing one; if you want others to understand something, telling a story is a reliable way to get there. This is why Aesop’s fables have endured for thousands of years… they illustrate abstract principles to children who couldn’t understand them in any other way.

A story is even likelier to stick if it ends poorly for somebody—because that tells the listener’s brain “this could happen to me and I want to know how to avoid it.” Every slippery slope is a story of the If-You-Start-Down-This-Path-You’ll-End-Up-In-A-Very-Bad-Place form.

There’s a sense in which slippery slopes are an offshoot of the clustering illusion: we perceive a pattern whether or not it actually exists. We have an instinctive fear if we (think we) recognize the beginning of a bad pattern, or one which ended badly before—and in those moments, it’s surprisingly easy to confuse certainty of the past for certainty of the future.

Sometimes, when we’re telling stories after the fact, slippery slopes are the reality in plain view. But it’s very different when you’re trying to predict the future—which you can’t. That’s the logical mistake at the heart of most slippery slopes: assuming that Z is inevitable simply because A has happened. For any argument of that type to be logically valid, you have to warrant every logical step between A and Z; even so, break a single link in the chain and the argument can fall apart.

Returning to our example, here are the two main errors in the “gateway drug” argument (which are replicated in many other slippery-slope arguments):

  1. While it is true that virtually all hard-drug users have smoked pot, the reverse isn’t true at all. Case in point: millions of people across 33 U.S. states. (Well, all 50, but… you know. Sorry, Utah!)
  2. Correlation is not the same as causation. No matter how often hard drugs and MJ roll together, the latter doesn’t directly cause the former; trying each is a separate decision with its own causes and effects for that person.
