The Assumptions You Have Never Examined Are Running Your Life

On mental models, inherited beliefs, and the quiet cost of thinking inside frameworks you did not choose

DEEPAK PATEL

Every person operates from a set of mental models: internal pictures of how the world works, what causes what, what is possible and what is not, how people behave and why. Most of these models were not chosen. They were absorbed, gradually and largely unconsciously, from the environment of early childhood, from the beliefs of the adults closest to us, from the culture we grew up inside, from the education we received and, perhaps most pervasively, from the repeated experiences that taught us what to expect before we were old enough to question whether those expectations were well-founded.

The models themselves are not the problem. Nobody could function without them. The mind could not navigate even the most ordinary day if it had to reason from first principles about every situation it encountered. Mental models are the cognitive shortcuts that make action possible. They are, in that sense, essential.

The problem is not having mental models. The problem is having ones that are wrong, or that were once accurate but no longer are, or that fit some domains but are being unconsciously extended to domains where they do not, and never examining any of them closely enough to notice.

Charlie Munger, who spent decades thinking carefully about how people reason well or badly in the specific context of investment decisions, argued that the quality of a person's decisions is determined almost entirely by the quality of the mental models they carry. Not their intelligence. Not their experience. Not the amount of information available to them. The models. A person with a small number of flexible, accurate mental models will consistently make better decisions than a person with greater intelligence and more information but poorer frameworks for making sense of it. The models determine what you see, what you ignore, what connections you make, and what conclusions feel obvious. They are the lens, and most people have never looked at the lens.

The most dangerous mental models are not the ones that are obviously wrong. Those tend to be corrected by experience quickly enough. The most dangerous ones are the ones that are right often enough to feel reliable, but that fail systematically in exactly the situations that matter most, and that have been held long enough and confirmed by enough experience that questioning them feels unnecessary.

Consider the model, widely held and rarely examined, that hard work reliably produces reward. This is true enough in enough situations that most people treat it as a general principle. Work hard and good things follow. The problem is that this model says nothing about whether the work is pointed in the right direction, whether the effort is being deployed intelligently, whether the domain being worked in is one where effort actually determines outcome, or whether the definition of reward being used is one that will still feel meaningful in ten years. People who hold this model unexamined will work extremely hard in directions that do not serve them, for longer than they should, because the model tells them the problem is insufficient effort rather than misdirected effort. The model is not wrong. It is incomplete in ways that have real consequences.

Or consider the model, equally widespread and equally unexamined, that more information produces better decisions. This feels self-evidently true. Understanding a situation more fully should lead to more accurate conclusions. Research on expert judgment suggests something considerably more complicated. Beyond a certain threshold of relevant information, additional information tends to increase confidence in a decision without increasing its accuracy. In one much-cited study, horse-race handicappers given progressively more information about each race grew steadily more confident in their predictions while their accuracy stayed essentially flat. The more information we have, the better we feel about the conclusion we have already reached, because we unconsciously select and weight the information that confirms it. The model that more information leads to better decisions turns out to be a very effective engine for sophisticated self-deception.

The process of examining mental models is not comfortable, and that discomfort is worth naming directly rather than glossing over. The models you carry are not just cognitive tools. They are, in a real sense, part of how you understand yourself and your place in the world. Questioning them feels, at some level, like questioning yourself. It can produce the specific vertigo of realising that a belief you have acted on for years, that has shaped significant decisions and relationships and choices, is not as well-founded as you assumed. That experience is genuinely disorienting.

It is also, when navigated honestly, one of the most significant developments available to an adult mind.

The practical starting point is simpler than most people expect. Not a systematic audit of everything you believe, which would be both impossible and counterproductive. A single question, applied consistently to the beliefs that govern important decisions: how do I know this is true, and what would I need to see to change my mind?

The first part of the question surfaces the evidence, or the absence of it, behind a belief. Many strongly held views, when pressed, turn out to rest on a single formative experience, a secondhand account, a feeling of obvious rightness that was never tested against evidence. The question makes that visible.

The second part of the question is the more important one. A belief for which no possible evidence would change your mind is not a belief in any epistemically meaningful sense. It is a commitment, and commitments and beliefs are different things that should be held differently. The person who cannot specify what would change their mind on a question is not reasoning about it. They are defending a position, and the reasoning they are producing is working in service of the defence rather than in pursuit of accuracy.

Munger's approach to this, developed over decades and across an extraordinary range of domains, was to build what he called a latticework of mental models drawn from every major discipline. The aim was not to become an expert in all of them, but to have enough working familiarity with the core models of physics, biology, psychology, economics, mathematics, and history that any new situation could be examined through multiple lenses simultaneously. The point was not breadth for its own sake. The point was that every discipline has developed, through centuries of collective effort, a set of frameworks for understanding a particular category of phenomena, and that those frameworks are often powerfully applicable outside the discipline that developed them.

A person who understands evolution does not only understand biology. They understand any system in which variation, selection pressure, and inheritance are operating, which includes markets, organisations, cultural practices, and ideas. A person who understands compound interest does not only understand finance. They understand any process in which small consistent inputs accumulate into large outputs over time, which includes skills, habits, relationships, and reputation. The model transfers. The person who has it available can see things in situations that the person without it literally cannot perceive.
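
The arithmetic behind that transfer is worth making concrete once. Below is a minimal sketch in Python (the one-percent-a-day rate and the one-year horizon are illustrative assumptions, not figures from this essay) comparing linear accumulation with compounding:

```python
# Compounding vs. linear growth: the same small daily input,
# two very different models of how it accumulates.

def linear(rate: float, periods: int) -> float:
    """Each period adds a fixed increment to the starting amount."""
    return 1 + rate * periods

def compound(rate: float, periods: int) -> float:
    """Each period's gain builds on everything accumulated so far."""
    return (1 + rate) ** periods

rate, days = 0.01, 365  # assumed: 1% improvement per day, for one year

print(f"linear:   {linear(rate, days):.2f}x")    # -> linear:   4.65x
print(f"compound: {compound(rate, days):.2f}x")  # -> compound: 37.78x
```

That gap, 4.65x against roughly 37.8x, is what the model lets you see: in a compounding process the output is not proportional to the input, and a person reasoning linearly about such a process will misjudge it badly.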

This is what examining and upgrading your mental models actually produces. Not just better decisions, though it does produce those. A richer and more accurate picture of what is actually happening in any situation you encounter. That accuracy is the foundation of everything else that follows.