
‘The Great Mental Models Volume 1: General Thinking Concepts’ Book Summary

Charlie Munger, the lesser-known business partner of Warren Buffett, once said,

“I believe in the discipline of mastering the best of what other people have figured out.”

The Great Mental Models project, authored by Shane Parrish of the Farnam Street blog, is an homage to the aforementioned idea. The series of four books aims to distill the most useful mental models from a wide variety of disciplines, and should help anyone who reads it upgrade their thinking and, subsequently, their decision-making capabilities.

The first volume of the series, The Great Mental Models, introduces us to nine fundamental mental models in a bid to help us minimize blind spots and understand reality better. My book summary, like the book, contains a brief definition of mental models and then moves on to discussing each of the mental models individually.

As far as the utility of book summaries is concerned, no summary, including this one, should be regarded as a substitute for reading the book. That said, my summary should be most useful either to those who are looking to acquaint themselves with the book before reading it fully or to those who have read it once and are looking to refresh their memory of the book’s wisdom.

Now, before you move on to reading my book summary, let me briefly explain the methodology I followed to create this summary so that you have an idea of what you are getting into.

As someone who reads a lot of books, I have come to realize that 90% of the value is usually in 10% of a book, so my summaries, including this one, highlight that high-signal, low-noise 10% of the book. In light of this idea, my ‘The Great Mental Models Volume 1: General Thinking Concepts’ summary is exhaustive: it is 7,814 words long and will take you an estimated 31 minutes to read at an average reading speed of 250 words per minute.

However, you will most likely have to slow down to absorb the content thoroughly, meaning it might take you even longer to read the summary in its entirety. If you do not have that much time at the moment, feel free to skim through the summary or bookmark this page to revisit later.

‘The Great Mental Models Volume 1: General Thinking Concepts’ Book Summary

The key to better understanding the world is to build a latticework of mental models.

Mental models describe the way the world works. They shape how we think, how we understand, and how we form beliefs. Largely subconscious, mental models operate below the surface. We’re not generally aware of them and yet they’re the reason when we look at a problem we consider some factors relevant and others irrelevant. They are how we infer causality, match patterns, and draw analogies. They are how we think and reason.

A mental model is simply a representation of how something works. We cannot keep all of the details of the world in our brains, so we use models to simplify the complex into understandable and organizable chunks. Whether we realize it or not, we then use these models every day to think, decide, and understand our world.

Why mental models?

There is no system that can prepare us for all risks. Factors of chance introduce a level of complexity that is not entirely predictable. But being able to draw on a repertoire of mental models can help us minimize risk by understanding the forces that are at play. Likely consequences don’t have to be a mystery.

Not having the ability to shift perspective by applying knowledge from multiple disciplines makes us vulnerable. Mistakes can become catastrophes whose effects keep compounding, creating stress and limiting our choices. Multidisciplinary thinking, learning these mental models and applying them across our lives, creates less stress and more freedom. The more we can draw on the diverse knowledge contained in these models, the more solutions will present themselves.

Mental models and how to use them

Perhaps an example will help illustrate the mental models approach. Think of gravity, something we learned about as kids and perhaps studied more formally in university as adults. We each have a mental model about gravity, whether we know it or not. And that model helps us to understand how gravity works. Of course we don’t need to know all of the details, but we know what’s important. We know, for instance, that if we drop a pen it will fall to the floor. If we see a pen on the floor we come to a probabilistic conclusion that gravity played a role.

This model plays a fundamental role in our lives. It explains the movement of the Earth around the sun. It informs the design of bridges and airplanes. It’s one of the models we use to evaluate the safety of leaning on a guard rail or repairing a roof. But we also apply our understanding of gravity in other, less obvious ways. We use the model as a metaphor to explain the influence of strong personalities, as when we say, “He was pulled into her orbit.” This is a reference to our basic understanding of the role of mass in gravity—the more there is the stronger the pull. It also informs some classic sales techniques. Gravity diminishes with distance, and so too does your propensity to make an impulse buy. Good salespeople know that the more distance you get, in time or geography, between yourself and the object of desire, the less likely you are to buy. Salespeople try to keep the pressure on to get you to buy right away.

Gravity has been around since before humans, so we can consider it to be time-tested, reliable, and representing reality. And yet, can you explain gravity with a ton of detail? I highly doubt it. And you don’t need to for the model to be useful to you. Our understanding of gravity, in other words, our mental model, lets us anticipate what will happen and also helps us explain what has happened. We don’t need to be able to describe the physics in detail for the model to be useful.

However, not every model is as reliable as gravity, and all models are flawed in some way. Some are reliable in some situations but useless in others. Some are too limited in their scope to be of much use. Others are unreliable because they haven’t been tested and challenged, and yet others are just plain wrong. In every situation, we need to figure out which models are reliable and useful. We must also discard or update the unreliable ones, because unreliable or flawed models come with a cost.

9 Mental Models Mentioned in ‘The Great Mental Models Volume 1: General Thinking Concepts’

  1. The Map is Not the Territory
  2. Circle of Competence
  3. First Principles Thinking
  4. Thought Experiment
  5. Second-Order Thinking
  6. Probabilistic Thinking
  7. Inversion
  8. Occam’s Razor
  9. Hanlon’s Razor

Mental Model #1: The Map is Not the Territory

We use maps every day. They help us navigate from one city to another. They help us reduce complexity to simplicity. Think of the financial statements for a company, which are meant to distill the complexity of thousands of transactions into something simpler. Or a policy document on office procedure, a manual on parenting a two-year-old, or your performance review.

We need maps and models as guides. But frequently, we don’t remember that our maps and models are abstractions and thus we fail to understand their limits. We forget there is a territory that exists separately from the map. This territory contains details the map doesn’t describe. We run into problems when our knowledge becomes of the map, rather than the actual underlying territory it describes.

When we mistake the map for reality, we start to think we have all the answers. We create static rules or policies that deal with the map but forget that we exist in a constantly changing world. When we close off or ignore feedback loops, we don’t see the terrain has changed and we dramatically reduce our ability to adapt to a changing environment. Reality is messy and complicated, so our tendency to simplify it is understandable. However, if the aim becomes simplification rather than understanding we start to make bad decisions.

We can’t use maps as dogma. Maps and models are not meant to live forever as static references. The world is dynamic. As territories change, our tools to navigate them must be flexible to handle a wide variety of situations or adapt to the changing times. If the value of a map or model is related to its ability to predict or explain, then it needs to represent reality. If reality has changed the map must change.

In order to use a map or model as accurately as possible, we should take three important considerations into account:

  1. Reality is the ultimate update.
  2. Consider the cartographer.
  3. Maps can influence territories.

Reality is the ultimate update: When we enter new and unfamiliar territory it’s nice to have a map on hand. Everything from travelling to a new city, to becoming a parent for the first time has maps that we can use to improve our ability to navigate the terrain. But territories change, sometimes faster than the maps and models that describe them. We can and should update them based on our own experiences in the territory.

Consider the cartographer: Maps are not purely objective creations. They reflect the values, standards, and limitations of their creators. We can see this in the changing national boundaries that make up our world maps. Countries come and go depending on shifting political and cultural sensibilities. When we look at the world map we have today, we tend to associate societies with nations, assuming that the borders reflect a common identity shared by everyone contained within them.

However, as historian Margaret MacMillan has pointed out, nationalism is a very modern construct, and in some sense has developed with, not in advance of, the maps that set out the shape of countries. We should not assume, then, that our literal maps depict an objective view of the geographical territory. For example, historians have shown that the modern borders of Syria, Jordan, and Iraq reflect British and French determination to maintain influence in the Middle East after World War I. Thus, they are a better map of Western interests than of local custom and organization.

Maps can influence territories: This problem was part of the central argument put forth by Jane Jacobs in her groundbreaking work, The Death and Life of Great American Cities. She chronicled the efforts of city planners who came up with elaborate models for the design and organization of cities without paying any attention to how cities actually work. They then tried to fit the cities into the model. She describes how cities were changed to correspond to these models, and the often negative consequences of these efforts. “It became possible also to map out master plans for the statistical city, and people take these more seriously, for we are all accustomed to believe that maps and reality are necessarily related, or that if they are not, we can make them so by altering reality.”

Mental Model #2: Circle of Competence

Understanding your circle of competence improves decision-making and outcomes.

In order to get the most out of this mental model, we will explore the following:

  1. What is a circle of competence?
  2. How do you know when you have one?
  3. How do you build and maintain one?
  4. How do you operate outside of one?

What is a circle of competence?

Imagine an old man who’s spent his entire life in a small town. He’s the Lifer. No detail of the goings-on in the town has escaped his notice over the years. He knows the lineage, behavior, attitudes, jobs, income, and social status of every person in town. Bit by bit, he built that knowledge up over a long period of observation and participation in town affairs.

The Lifer knows where the bodies are buried and who buried them. He knows who owes money to whom, who gets along with whom, and who the town depends on to keep spinning. He knows about that time the mayor cheated on his taxes. He knows about that time the town flooded, how many inches high the water was, and exactly who helped whom and who didn’t.

Now imagine a Stranger enters the town, in from the Big City. Within a few days, the Stranger decides that he knows all there is to know about the town. He’s met the mayor, the sheriff, the bartender, and the shopkeeper, and he can get around fairly easily. It’s a small town and he hasn’t come across anything surprising.

In the Stranger’s mind, he’s convinced he pretty much knows everything a Lifer would know. He has sized up the town in no time, with his keen eye. He makes assumptions based on what he has learned so far, and figures he knows enough to get his business done. This, however, is a false sense of confidence that likely causes him to take more risks than he realizes. Without intimately knowing the history of the town, how can he be sure that he has picked the right land for development, or negotiated the best price? After all, what kind of knowledge does he really have, compared to the Lifer?

The difference between the detailed web of knowledge in the Lifer’s head and the surface knowledge in the Stranger’s head is the difference between being inside a circle of competence and being outside the perimeter. True knowledge of a complex territory cannot be faked. The Lifer could stump the Stranger in no time, but not the other way around. Consequently, as long as the Lifer is operating in his circle of competence he will always have a better understanding of reality to use in making decisions. Having this deep knowledge gives him flexibility in responding to challenges, because he will likely have more than one solution to every problem. And this depth increases his efficiency—he can eliminate the bad choices quickly because he has all the pieces of the puzzle.

What happens when you take the Lifer/Stranger idea seriously and try to delineate carefully the domains in which you’re one or the other? There is no definite checklist for figuring this out, but if you don’t have at least a few years and a few failures under your belt, you cannot consider yourself competent in a circle.

How do you know when you have a circle of competence?

Within our circles of competence, we know exactly what we don’t know. We are able to make decisions quickly and relatively accurately. We possess detailed knowledge of additional information we might need to make a decision with full understanding, or even what information is unobtainable. We know what is knowable and what is unknowable and can distinguish between the two.

How do you build and maintain a circle of competence?

One of the essential requirements of a circle of competence is that you can never take it for granted. You can’t operate as if a circle of competence is a static thing, that once attained is attained for life. The world is dynamic. Knowledge gets updated, and so too must your circle. There are three key practices needed in order to build and maintain a circle of competence: curiosity and a desire to learn, monitoring, and feedback.

How do you operate outside a circle of competence?

Part of successfully using circles of competence includes knowing when we are outside them—when we are not well equipped to make decisions. Since we can’t be inside a circle of competence in everything, when we find ourselves Strangers in a place filled with Lifers, what do we do? We don’t always get to “stay around our spots.” We must develop a repertoire of techniques for managing when we’re outside of our sphere, which happens all the time.

There are three parts to successfully operating outside a circle of competence:

  1. Learn at least the basics of the realm you’re operating in, while acknowledging that you’re a Stranger, not a Lifer. However, keep in mind that basic information is easy to obtain and often gives the acquirer an unwarranted confidence.
  2. Talk to someone whose circle of competence in the area is strong. Take the time to do a bit of research to at least define questions you need to ask, and what information you need, to make a good decision. If you ask a person to answer the question for you, they’ll be giving you a fish. If you ask them detailed and thoughtful questions, you’ll learn how to fish. Furthermore, when you need the advice of others, especially in higher stakes situations, ask questions to probe the limits of their circles. Then ask yourself how the situation might influence the information they choose to provide you.
  3. Use a broad understanding of the basic mental models of the world to augment your limited understanding of the field in which you find yourself a Stranger. These will help you identify the foundational concepts that would be most useful. These then serve as a guide to help you navigate the situation you are in.

Mental Model #3: First Principles Thinking

First principles thinking is one of the best ways to reverse-engineer complicated situations and unleash creative possibility. Sometimes called reasoning from first principles, it’s a tool to help clarify complicated problems by separating the underlying ideas or facts from any assumptions based on them. What remain are the essentials. If you know the first principles of something, you can build the rest of your knowledge around them to produce something new.

First principles do not provide a checklist of things that will always be true; our knowledge of first principles changes as we understand more. They are the foundation on which we must build, and thus will be different in every situation, but the more we know, the more we can challenge. For example, if we are considering how to improve the energy efficiency of a refrigerator, then the laws of thermodynamics can be taken as first principles. However, a theoretical chemist or physicist might want to explore entropy, and thus further break the second law into its underlying principles and the assumptions that were made because of them. First principles are the boundaries that we have to work within in any given situation—so when it comes to thermodynamics an appliance maker might have different first principles than a physicist.

If we want to identify the principles in a situation to cut through the dogma and the shared belief, there are two techniques we can use: Socratic questioning and the Five Whys.

Socratic questioning generally follows this process:

  1. Clarifying your thinking and explaining the origins of your ideas. (Why do I think this? What exactly do I think?)
  2. Challenging assumptions. (How do I know this is true? What if I thought the opposite?)
  3. Looking for evidence. (How can I back this up? What are the sources?)
  4. Considering alternative perspectives. (What might others think? How do I know I am correct?)
  5. Examining consequences and implications. (What if I am wrong? What are the consequences if I am?)
  6. Questioning the original questions. (Why did I think that? Was I correct? What conclusions can I draw from the reasoning process?)

The Five Whys is a method rooted in the behavior of children. Children instinctively think in first principles. Just like us, they want to understand what’s happening in the world. To do so, they intuitively break through the fog with a game some parents have come to dread, but which is exceptionally useful for identifying first principles: repeatedly asking “why?”

The goal of the Five Whys is to land on a “what” or “how”. It is not about introspection, such as “Why do I feel like this?” Rather, it is about systematically delving further into a statement or concept so that you can separate reliable knowledge from assumption. If your “whys” result in a statement of falsifiable fact, you have hit a first principle. If they end up with a “because I said so” or “it just is”, you know you have landed on an assumption that may be based on popular opinion, cultural myth, or dogma. These are not first principles.

Mental Model #4: Thought Experiment

Thought experiments can be defined as “devices of the imagination used to investigate the nature of things.” Many disciplines, such as philosophy and physics, make use of thought experiments to examine what can be known. In doing so, they can open up new avenues for inquiry and exploration. Thought experiments are powerful because they help us learn from our mistakes and avoid future ones. They let us take on the impossible, evaluate the potential consequences of our actions, and re-examine history to make better decisions. They can help us both figure out what we really want, and the best way to get there.

Suppose I asked you to tell me who would win in a game of basketball: The NBA champion LeBron James or the filmmaker Woody Allen? How much would you bet that your answer was correct?

I think you’d give me an answer pretty quickly, and I hope you’d bet all you had.

Next, suppose I asked you to tell me who’d win in a game of basketball: The NBA champion LeBron James or the NBA champion Kevin Durant? How much would you bet that your answer was correct?

A little harder, right? Would you bet anywhere near all you had on being right?

Let’s think this through. You attempted to solve both of the questions in the same way—you imagined the contests. Perhaps more importantly, you didn’t attempt to solve either of them by calling up Messrs. James, Allen, and Durant and inviting them over for an afternoon of basketball. You simply simulated them in your mind.

In the first case, your knowledge of James (young, tall, athletic, and skilled), Allen (old, small, frail, and funny), and the game of basketball gave you a clear mental image. The disparity between the players’ abilities makes the question (and the bet) a total no-brainer.

In the second case, your knowledge of LeBron and Durant may well be extensive, but that doesn’t make it an easy bet. They’re both professional basketball players who are quite similar in size and ability, and both of them are likely to go down as among the best ever to play the game. It’s doubtful that one is much better than the other in a one-on-one match. The only way to answer for sure would be to see them play. And even then, a one-off contest is not going to be definitive.

Thought experiments are more than daydreaming. They require the same rigor as a traditional experiment in order to be useful. Much like the scientific method, a thought experiment generally has the following steps:

  1. Ask a question
  2. Conduct background research
  3. Construct hypothesis
  4. Test with (thought) experiments
  5. Analyze outcomes and draw conclusions
  6. Compare to hypothesis and adjust accordingly (new question, etc.)

In the James/Allen experiment above, we started with a question: Who would win in a game of basketball? If you didn’t already know who those people were, finding out would have been a necessary piece of background research. Then you came up with your hypothesis (James all the way!), and you thought it through.

One of the real powers of the thought experiment is that there is no limit to the number of times you can change a variable to see if it influences the outcome. In order to place that bet, you would want to estimate in how many possible basketball games Woody Allen beats LeBron James. Out of 100,000 game scenarios, Allen probably only wins in the few where LeBron starts the game by having a deadly heart attack. Experimenting to discover the full spectrum of possible outcomes gives you a better appreciation for what you can influence and what you can reasonably expect to happen.
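This variable-changing process can be sketched as a crude Monte Carlo simulation. The per-game win probability assigned to Allen below (one in 10,000) is a made-up number for illustration, not a figure from the book:

```python
import random

def simulate_games(p_allen_wins, n_games=100_000, seed=42):
    """Crude Monte Carlo: count how many simulated games Allen wins,
    given an assumed per-game win probability."""
    rng = random.Random(seed)
    return sum(rng.random() < p_allen_wins for _ in range(n_games))

# Assumed probability that an extreme fluke lets Allen win a given game.
allen_wins = simulate_games(p_allen_wins=0.0001)
print(f"Allen wins roughly {allen_wins} of 100,000 simulated games")
```

Re-running with different assumed probabilities is the code equivalent of changing a variable in the thought experiment and watching how the outcome shifts.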

Here are a few areas in which thought experiments are tremendously useful.

  1. Imagining physical impossibilities
  2. Re-imagining history
  3. Intuiting the non-intuitive

Mental Model #5: Second-Order Thinking

Almost everyone can anticipate the immediate results of their actions. This type of first-order thinking is easy and safe but it’s also a way to ensure you get the same results that everyone else gets. Second-order thinking is thinking farther ahead and thinking holistically. It requires us to not only consider our actions and their immediate consequences, but the subsequent effects of those actions as well. Failing to consider the second- and third-order effects can unleash disaster.

It is often easier to find examples of when second-order thinking didn’t happen—when people did not consider the effects of the effects. When they tried to do something good, or even just benign, and instead brought calamity, we can safely assume the negative outcomes weren’t factored into the original thinking. Very often, the second level of effects is not considered until it’s too late. This concept is often referred to as the “Law of Unintended Consequences” for this very reason.

We see examples of this throughout history. During their colonial rule of India, the British government began to worry about the number of venomous cobras in Delhi. To reduce the numbers, they instituted a reward for every dead snake brought to officials. In response, Indian citizens dutifully complied and began breeding the snakes to slaughter and bring to officials. The snake problem was worse than when it started because the British officials didn’t think at the second level.

Let’s look at two areas where second-order thinking can be used to great benefit:

  1. Prioritizing long-term interests over immediate gains
  2. Constructing effective arguments

Second-order thinking and realizing long-term interests: This is a useful model for seeing past immediate gains to identify long-term effects we want. This is often a conflict for us, as when we choose to forgo the immediate pleasure of candy to improve our long-term health. The first-order effect is this amazing feeling triggered by pure sugar. But what are the second-order effects of regular candy consumption? Is this what I want my body or life to look like in ten years? Second-order thinking involves asking ourselves if what we are doing now is going to get us the results we want.

Constructing an effective argument: Second-order thinking can help you avert problems and anticipate challenges that you can then address in advance. For example, most of us have to construct arguments every day. Convincing your boss to take a chance on a new form of outreach, convincing your spouse to try a new parenting technique. Life is filled with the need to be persuasive. Arguments are more effective when we demonstrate that we have considered the second-order effects and put effort into verifying that these are desirable as well.

Mental Model #6: Probabilistic Thinking

Probabilistic thinking is essentially trying to estimate, using some tools of math and logic, the likelihood of any specific outcome coming to pass. It is one of the best tools we have to improve the accuracy of our decisions. In a world where each moment is determined by an infinitely complex set of factors, probabilistic thinking helps us identify the most likely outcomes. When we know these our decisions can be more precise and effective.

There are three important aspects of probability that we need to explain so you can integrate them into your thinking to get into the ballpark and improve your chances of catching the ball:

  1. Bayesian thinking
  2. Fat-tailed curves
  3. Asymmetries

The core of Bayesian thinking (or Bayesian updating, as it can be called) is this: given that we have limited but useful information about the world, and are constantly encountering new information, we should probably take into account what we already know when we learn something new. As much of it as possible. Bayesian thinking allows us to use all relevant prior information in making decisions. Statisticians might call it a base rate, taking in outside information about past situations like the one you’re in.

Consider the headline “Violent Stabbings on the Rise.” Without Bayesian thinking, you might become genuinely afraid because your chances of being a victim of assault or murder are higher than they were a few months ago. But a Bayesian approach will have you putting this information into the context of what you already know about violent crime.

You know that violent crime has been declining to its lowest rates in decades. Your city is safer now than it has been since this measurement started. Let’s say your chance of being a victim of a stabbing last year was one in 10,000, or 0.01%. The article states, with accuracy, that violent crime has doubled. It is now two in 10,000, or 0.02%. Is that worth being terribly worried about? The prior information here is key. When we factor it in, we realize that our safety has not really been compromised.
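The arithmetic behind that judgment is worth making explicit. A minimal sketch, using the one-in-10,000 base rate from the example above:

```python
# Base rate last year: 1 in 10,000 chance of being a stabbing victim.
prior_risk = 1 / 10_000          # 0.01%

# The headline is accurate: violent stabbings doubled.
new_risk = prior_risk * 2        # 0.02%

# The absolute change in your personal risk is tiny.
increase = new_risk - prior_risk
print(f"New risk: {new_risk:.4%}, absolute increase: {increase:.4%}")
# → New risk: 0.0200%, absolute increase: 0.0100%
```

A "doubled" relative risk sounds alarming, but anchored to the prior it is an absolute increase of one hundredth of a percent.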

Conversely, if we look at the diabetes statistics in the United States, our application of prior knowledge would lead us to a different conclusion. Here, a Bayesian analysis indicates you should be concerned. In 1958, 0.93% of the population was diagnosed with diabetes. In 2015 it was 7.4%. When you look at the intervening years, the climb in diabetes diagnosis is steady, not a spike. So the prior relevant data, or priors, indicate a trend that is worrisome.

It is important to remember that priors themselves are probability estimates. For each bit of prior knowledge, you are not putting it in a binary structure, saying it is true or not. You’re assigning it a probability of being true. Therefore, you can’t let your priors get in the way of processing new knowledge. In Bayesian terms, this is called the likelihood ratio or the Bayes factor. Any new information you encounter that challenges a prior simply means that the probability of that prior being true may be reduced. Eventually some priors are replaced completely.
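In odds form, Bayes' rule states this idea directly: posterior odds equal prior odds times the likelihood ratio (the Bayes factor). The sketch below uses illustrative numbers, not figures from the book:

```python
def update_odds(prior_prob, bayes_factor):
    """Update a probability via Bayes' rule in odds form:
    posterior odds = prior odds * likelihood ratio (Bayes factor)."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1 + posterior_odds)

# Illustrative: a 90% prior challenged by evidence that is three times
# more likely if the prior belief is false (Bayes factor of 1/3).
posterior = update_odds(0.9, 1/3)
print(f"Posterior probability: {posterior:.0%}")  # → 75%
```

Notice the prior is weakened, not flipped to false: new evidence reduces the probability of the prior being true, exactly as the paragraph above describes.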

Fat-tailed curves: Many of us are familiar with the bell curve, that nice, symmetrical wave that captures the relative frequency of so many things from height to exam scores. The bell curve is great because it’s easy to understand and easy to use. Its technical name is “normal distribution.” If we know we are in a bell curve situation, we can quickly identify our parameters and plan for the most likely outcomes.

Fat-tailed curves are different. At first glance they seem similar enough to the bell curve. Common outcomes cluster together, creating a wave. The difference is in the tails. In a bell curve the extremes are predictable. There can only be so much deviation from the mean. In a fat-tailed curve there is no real cap on extreme events.

The more extreme events that are possible, the longer the tails of the curve get. Any one extreme event is still unlikely, but the sheer number of options means that we can’t rely on the most common outcomes as representing the average. The more extreme events that are possible, the higher the probability that one of them will occur. Crazy things are definitely going to happen, and we have no way of identifying when.

Think of it this way. In a bell curve type of situation, like displaying the distribution of height or weight in a human population, there are outliers on the spectrum of possibility, but the outliers have a fairly well-defined scope. You’ll never meet a man who is ten times the size of an average man. But in a curve with fat tails, like wealth, the central tendency does not work the same way. You may regularly meet people who are ten, 100, or 10,000 times wealthier than the average person. That is a very different type of world.
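This difference shows up clearly in simulation. The sketch below compares a normal distribution (a stand-in for height) with a Pareto distribution (a common model for wealth); the specific parameters are illustrative assumptions, not data from the book:

```python
import random

rng = random.Random(0)
n = 100_000

# Bell-curve quantity (e.g. height in cm): normal distribution.
heights = [rng.gauss(mu=170, sigma=10) for _ in range(n)]

# Fat-tailed quantity (e.g. wealth): Pareto distribution.
wealth = [rng.paretovariate(alpha=1.2) for _ in range(n)]

def max_over_mean(xs):
    """How extreme is the largest observation relative to the average?"""
    return max(xs) / (sum(xs) / len(xs))

print("height max/mean:", round(max_over_mean(heights), 2))
print("wealth max/mean:", round(max_over_mean(wealth), 2))
```

The tallest simulated person is only modestly taller than average, while the richest simulated person dwarfs the average by orders of magnitude: the central tendency stops being a safe guide.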

Let’s re-approach the example of the risks of violence we discussed in relation to Bayesian thinking. Suppose you hear that you had a greater risk of slipping on the stairs and cracking your head open than being killed by a terrorist. The statistics, the priors, seem to back it up: 1,000 people slipped on the stairs and died last year in your country and only 500 died of terrorism. Should you be more worried about stairs or terror events?

Some use examples like these to prove that terror risk is low—since the recent past shows very few deaths, why worry? The problem is in the fat tails: The risk of terror violence is more like wealth, while stair-slipping deaths are more like height and weight. In the next ten years, how many events are possible? How fat is the tail?

The important thing is not to sit down and imagine every possible scenario in the tail (by definition, it is impossible) but to deal with fat-tailed domains in the correct way: by positioning ourselves to survive or even benefit from the wildly unpredictable future, by being the only ones thinking correctly and planning for a world we don’t fully understand.
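The contrast between the two kinds of curves can be made concrete with a little tail arithmetic. The sketch below is illustrative: it compares a normal (bell curve) tail with a Pareto tail, using α ≈ 1.16, a common stand-in for 80/20-style wealth distributions; the specific parameters are my assumption, not the book's.

```python
from math import erfc, sqrt

def normal_tail(k: float) -> float:
    """P(X > mean + k*sigma) for a normal distribution: thin, fast-decaying tail."""
    return 0.5 * erfc(k / sqrt(2))

def pareto_tail(multiple: float, alpha: float = 1.16) -> float:
    """P(X > multiple * baseline) for a Pareto distribution with minimum 1.
    alpha = 1.16 is an illustrative value often used for 80/20 wealth curves."""
    return multiple ** -alpha

# Height-like world: an outcome 10 sigma beyond the mean is effectively impossible.
print(f"normal, 10 sigma out: {normal_tail(10):.1e}")
# Wealth-like world: 10x the baseline is common, and even 10,000x is merely rare.
print(f"pareto, 10x baseline: {pareto_tail(10):.3f}")
print(f"pareto, 10,000x baseline: {pareto_tail(10_000):.1e}")
```

The normal tail shrinks toward zero so fast that you will never meet a man ten times average size, while the polynomial Pareto tail keeps extreme outcomes in play, which is exactly why past averages are a poor guide in fat-tailed domains.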

Asymmetries: Finally, you need to think about something we might call “metaprobability”—the probability that your probability estimates themselves are any good. This massively misunderstood concept has to do with asymmetries. If you look at nicely polished stock pitches made by professional investors, nearly every time an idea is presented, the investor looks their audience in the eye and states they think they’re going to achieve a rate of return of 20% to 40% per annum, if not higher. Yet exceedingly few of them ever attain that mark, and it’s not because they don’t have any winners. It’s because they get so many so wrong. They consistently overestimate their confidence in their probabilistic estimates. (For reference, the general stock market has returned no more than 7% to 8% per annum in the United States over a long period, before fees.)

How do we benefit from the uncertainty of a world we don’t understand, one dominated by “fat tails”? The answer to this was provided by Nassim Taleb in a book curiously titled Antifragile.

Here is the core of the idea. We can think about three categories of objects: Ones that are harmed by volatility and unpredictability, ones that are neutral to volatility and unpredictability, and finally, ones that benefit from it. The latter category is antifragile—like a package that wants to be mishandled. Up to a point, certain things benefit from volatility, and that’s how we want to be. Why? Because the world is fundamentally unpredictable and volatile, and large events—panics, crashes, wars, bubbles, and so on—tend to have a disproportionate impact on outcomes.

What are some ways we can prepare—arm ourselves with antifragility—so we can benefit from the antifragility of the world?

The first one is what Wall Street traders would call “upside optionality”: seeking out situations that we expect have good odds of offering us opportunities. Take the example of attending a cocktail party where a lot of people you might like to know are in attendance. Nothing is guaranteed to happen—you may not meet those people, and if you do, it may not go well—but you give yourself the benefit of serendipity and randomness. The worst thing that can happen is…nothing. One thing you know for sure is that you’ll never meet them sitting at home. By going to the party, you improve your odds of encountering opportunity.

The second thing we can do is learn how to fail properly. Failing properly has two major components. First, never take a risk that will do you in completely; never get taken out of the game. Second, develop the personal resilience to learn from your failures and start again. With these two rules, you can only fail temporarily.

Mental Model #7: Inversion

Most of us tend to think one way about a problem: forward. Inversion allows us to flip the problem around and think backward. Sometimes it’s good to start at the beginning, but it can be more useful to start at the end.

Think of it this way: Avoiding stupidity is easier than seeking brilliance. Combining the ability to think forward and backward allows you to see reality from multiple angles.

There are two approaches to applying inversion in your life.

  1. Start by assuming that what you’re trying to prove is either true or false, then show what else would have to be true.
  2. Instead of aiming directly for your goal, think deeply about what you want to avoid and then see what options are left over.

In the 1920s the American Tobacco Company wanted to sell more of their Lucky Strike cigarettes to women. Men were smoking, but women weren’t. There were pervasive taboos against women smoking—it was seen as a man’s activity. Women therefore presented an untapped market that had the potential of providing huge revenue. The head of the company thought that they needed to convince women that smoking would make them thinner, riding on the slimness trend that had already begun, so he hired Edward Bernays, who came up with a truly revolutionary marketing campaign.

In the style of the inversion approach described above, Bernays did not ask, “How do I sell more cigarettes to women?” Instead, he wondered, if women bought and smoked cigarettes, what else would have to be true? What would have to change in the world to make smoking desirable to women and socially acceptable? Then—a step farther—once he knew what needed to change, how would he achieve that?

To tackle the idea of smoking as a slimming aid, he mounted a large anti-sweets campaign. After dinner, it was about cigarettes, not dessert. Cigarettes were slimming, while desserts would ruin one’s figure. But Bernays’s real stroke of genius lay not just in coming out with adverts to convince women to stay slim by smoking cigarettes; “instead, he sought nothing less than to reshape American society and culture.”

He solicited journalists and photographers to promote the virtues of being slim. He sought testimonials from doctors about the health value of smoking after a meal. He combined this approach with

…altering the very environment, striving to create a world in which the cigarette was ubiquitous. He mounted a campaign to persuade hotels and restaurants to add cigarettes to dessert-list menus, and he provided such magazines as House and Garden with feature articles that included menus designed to preserve readers ‘from the dangers of overeating’…. The idea was not only to influence opinion but to remold life itself. Bernays approached designers, architects, and cabinetmakers in an effort to persuade them to design kitchen cabinets that included special compartments for cigarettes, and he spoke to the manufacturers of kitchen containers to add cigarette tins to their traditional lines of labeled containers for coffee, tea, sugar, and flour.

The result was a complete shift in the consumption habits of American women. It wasn’t just about selling the cigarette, it was reorganizing society to make cigarettes an inescapable part of the American woman’s daily experience.

Bernays’s efforts to make smoking in public socially acceptable had equally startling results. He linked cigarette smoking with women’s emancipation. To smoke was to be free. Cigarettes were marketed as “torches of freedom.” He orchestrated public events, including an infamous parade on Easter Sunday in 1929 which featured women smoking as they walked in the parade. He left no detail unattended, so public perception of smoking was changed almost overnight. He both normalized it and made it desirable in one swoop.

Although the campaign utilized more principles than just inversion, it was the original decision to invert the approach that provided the framework from which the campaign was created and executed. Bernays didn’t focus on how to sell more cigarettes to women within the existing social structure. Sales would have undoubtedly been a lot more limited. Instead he thought about what the world would look like if women smoked often and anywhere, and then set about trying to make that world a reality. Once he did that, selling cigarettes to women was comparatively easy.

What are you trying to avoid? Instead of thinking through the achievement of a positive outcome, we could ask ourselves how we might achieve a terrible outcome, and let that guide our decision-making. Index funds are a great example of stock market inversion promoted and brought to bear by Vanguard’s John Bogle. Instead of asking how to beat the market, as so many had before him, Bogle recognized the difficulty of the task. Everyone is trying to beat the market. No one is doing it with any consistency, and in the process real people are losing actual money. So he inverted the approach. The question then became, how can we help investors minimize losses to fees and poor money manager selection? The results were one of the greatest ideas—index funds—and one of the greatest powerhouse firms in the history of finance.

One of the theoretical foundations for this type of thinking comes from psychologist Kurt Lewin. In the 1930s he came up with the idea of force field analysis, which essentially recognizes that in any situation where change is desired, successful management of that change requires applied inversion. Here is a brief explanation of his process:

  1. Identify the problem
  2. Define your objective
  3. Identify the forces that support change towards your objective
  4. Identify the forces that impede change towards the objective
  5. Strategize a solution! This may involve both augmenting or adding to the forces in step 3, and reducing or eliminating the forces in step 4.

Even if we are quite logical, most of us stop after step 3. Once we figure out our objective, we focus on the things we need to put in place to make it happen, the new training or education, the messaging and marketing. But Lewin theorized that it can be just as powerful to remove obstacles to change.

The inversion happens between steps 3 and 4. Whatever angle you choose to approach your problem from, you need to then follow with consideration of the opposite angle. Think about not only what you could do to solve a problem, but what you could do to make it worse—and then avoid doing that, or eliminate the conditions that perpetuate it.
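Lewin's process can be sketched in a few lines of code. The force names and weights below are hypothetical illustrations (not from the book); the point is only that weakening an impeding force moves the balance just as surely as adding a supporting one.

```python
# Hypothetical forces for some desired change, scored 1-5 by strength.
supporting = {"new training": 3, "leadership buy-in": 2}   # step 3
impeding = {"legacy process": 4, "fear of job loss": 3}    # step 4

def net_pressure(support: dict, impede: dict) -> int:
    """Positive means the forces favor change; negative means they resist it."""
    return sum(support.values()) - sum(impede.values())

print(net_pressure(supporting, impeding))  # 5 - 7 = -2: change stalls

# The inversion step: instead of piling on more drivers, remove an obstacle.
weakened = {k: v for k, v in impeding.items() if k != "legacy process"}
print(net_pressure(supporting, weakened))  # 5 - 3 = 2: change now favored
```

Most of us only ever edit the first dictionary; Lewin's insight is that editing the second one is often the cheaper move.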

Mental Model #8: Occam’s Razor

Simpler explanations are more likely to be true than complicated ones. This is the essence of Occam’s Razor, a classic principle of logic and problem-solving. Instead of wasting your time trying to disprove complex scenarios, you can make decisions more confidently by basing them on the explanation that has the fewest moving parts.

We all jump to overly complex explanations. Husband late getting home? What if he’s been in a car accident? Son grew a centimeter less than he did last year? What if there is something wrong with him? Your toe hurts? What if you have bone cancer? Although it is possible that any of these worst-case scenarios could be true, without any other correlating factors, it is significantly more likely that your husband got caught up at work, you mismeasured your son, and your shoe is too tight.

We often spend lots of time coming up with very complicated narratives to explain what we see around us. From the behavior of people on the street to physical phenomena, we get caught up in assuming vast icebergs of meaning beyond the tips that we observe. This is a common human tendency, and it serves us well in some situations, such as creating art. However, complexity takes work to unravel, manage, and understand. Occam’s Razor is a great tool for avoiding unnecessary complexity by helping you identify and commit to the simplest explanation possible.

Named after the medieval logician William of Ockham, Occam’s Razor is a general rule by which we select among competing explanations. Ockham wrote that “a plurality is not to be posited without necessity”—essentially that we should prefer the simplest explanation with the fewest moving parts. Simpler explanations are easier to falsify, easier to understand, and generally more likely to be correct. Occam’s Razor is not an iron law but a tendency and a mind-frame you can choose to use: if all else is equal—that is, if two competing models have equal explanatory power—it’s more likely that the simple solution suffices.

Why are more complicated explanations less likely to be true? Let’s work it out mathematically. Take two competing explanations, each of which seems to explain a given phenomenon equally well. If one of them requires the interaction of three variables and the other the interaction of thirty variables, all of which must have occurred to arrive at the stated conclusion, which of these is more likely to be in error? If each variable has a 99% chance of being correct, the first explanation has only a 3% chance of being wrong. The second, more complex explanation is about nine times as likely to be wrong, at 26%. The simpler explanation is more robust in the face of uncertainty.
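The arithmetic in the paragraph above can be checked in a couple of lines, assuming the variables are independent:

```python
def p_wrong(n_variables: int, p_correct: float = 0.99) -> float:
    """Probability that at least one of n independent variables fails,
    i.e. the chance the whole explanation is wrong somewhere."""
    return 1 - p_correct ** n_variables

print(f"{p_wrong(3):.0%}")   # 3-variable explanation: ~3% chance of error
print(f"{p_wrong(30):.0%}")  # 30-variable explanation: ~26% chance of error
```

Because the error compounds multiplicatively, every extra moving part makes the whole chain of reasoning more fragile.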

Simplicity can increase efficiency

With limited time and resources, it is not possible to track down every theory with a plausible explanation of a complex, uncertain event. Without the filter of Occam’s Razor, we are stuck chasing down dead ends. We waste time, resources, and energy.

The great thing about simplicity is that it can be so powerful. Sometimes unnecessary complexity just papers over the systemic flaws that will eventually choke us. Opting for the simple helps us make decisions based on how things really are. Here is a short example of people who got waylaid chasing down complicated solutions when a simple one was most effective.

The ten-acre Ivanhoe Reservoir in Los Angeles provides drinking water for over 600,000 people. Its nearly 60 million gallons of water are disinfected with chlorine, as is common practice. Groundwater often contains elevated levels of a chemical called bromide. When chlorine and bromide mix and are then exposed to sunlight, they create a dangerous carcinogen called bromate.

In order to avoid poisoning the water supply, the L.A. Department of Water and Power (DWP) needed a way to shade the water’s surface. Brainstorming sessions had yielded only two infeasible solutions, building either a ten-acre tarp or a huge retractable dome over the reservoir. Then a DWP biologist suggested using “bird balls,” the floating balls that airports use to keep birds from congregating near runways. They require no construction, no parts, no labor, no maintenance, and cost US$0.40 each. Three million UV-deflecting black balls were then deployed in Ivanhoe and other LA reservoirs, a simple solution to a potentially serious problem.

A few caveats

One important counter to Occam’s Razor is the difficult truth that some things are simply not that simple. The regular recurrence of fraudulent human organizations like pyramid schemes and Ponzi schemes is not a miracle, but neither is it obvious. No simple explanation suffices, exactly. They are the result of a complex set of behaviors, some happening almost by accident or luck, and some carefully designed with the intent to deceive. It isn’t easy to spot the development of a fraud. If it were, they’d be stamped out early. Yet, to this day, frauds frequently grow to epic proportions before they are discovered.

Mental Model #9: Hanlon’s Razor

Hard to trace in its origin, Hanlon’s Razor states that we should not attribute to malice that which is more easily explained by stupidity. In a complex world, using this model helps us avoid paranoia and ideology. By not generally assuming that bad results are the fault of a bad actor, we look for options instead of missing opportunities. This model reminds us that people do make mistakes. It demands that we ask if there is another reasonable explanation for the events that have occurred. The explanation most likely to be right is the one that contains the least amount of intent.

Assuming the worst intent crops up all over our lives. Consider road rage, a growing problem in a world that is becoming short on patience and time. When someone cuts you off, to assume malice is to assume the other person has done a lot of risky work. In order for someone to deliberately get in your way they have to notice you, gauge the speed of your car, consider where you are headed, and swerve in at exactly the right time to cause you to slam on the brakes, yet not cause an accident. That is some effort. The simpler and thus more likely explanation is that they didn’t see you. It was a mistake. There was no intent.

Hanlon’s Razor, when practiced diligently as a counter to confirmation bias, empowers us, and gives us far more realistic and effective options for remedying bad situations. When we assume someone is out to get us, our very natural instinct is to take actions to defend ourselves. It’s harder to take advantage of, or even see, opportunities while in this defensive mode because our priority is saving ourselves—which tends to reduce our vision to dealing with the perceived threat instead of examining the bigger picture.

Summary
Great Mental Models (Vol. 1): General Thinking Concepts Book Summary
Article Name
Great Mental Models (Vol. 1): General Thinking Concepts Book Summary
Description
A mental model is simply a representation of how something works. We cannot keep all of the details of the world in our brains, so we use models to simplify the complex into understandable and organizable chunks. Whether we realize it or not, we then use these models every day to think, decide, and understand our world.
Author
Publisher Name
WhatIsTheBusinessModelOf
Publisher Logo

Muaaz Qadri
A Proud Computer Engineer turned Digital Marketer