Mental Models

Good decisions compound. Bad ones don’t just cost you the immediate outcome — they shape the options you have next. The difference between people who consistently make good calls and those who don’t usually isn’t intelligence or effort. It’s the quality of their thinking tools.

Charlie Munger called it a “latticework of mental models” — a collection of frameworks drawn from different disciplines that, used together, let you see problems most people miss entirely. Shane Parrish at Farnam Street spent years distilling those models into a working reference. Personal OS puts them inside your operating system.

The Core Models

You don’t need to memorize 100 models. You need 10 you actually use. These are the ones that show up most often in planning sessions, decisions, and reviews.

First Principles Thinking

Break a problem down to its fundamental truths, then reason back up from scratch. Stop arguing from analogy (“we’ve always done it this way”) and start asking what’s actually true.

Use it when: Conventional approaches aren’t working. A project is stuck and everyone’s suggesting variations of the same failed idea. You want to design something from scratch instead of copying the template.

Second-Order Thinking

Ask “And then what?” after every proposed action. First-order thinking is fast: do X, get Y. Second-order thinking is harder: do X, get Y, which causes Z, which eventually produces W. Most problems come from optimizing for first-order outcomes.

Use it when: A decision seems too easy. A solution sounds good but you haven’t thought through what happens next. You’re making a tradeoff that involves time — short-term vs. long-term.

Inversion

Instead of asking “How do I succeed at this?” ask “What would guarantee this fails?” List the failure modes, then actively avoid them. You often find more by looking for what to eliminate than by searching for what to add.

Use it when: You’re stuck on a planning problem. You want to stress-test a plan before committing. Risk assessment that feels abstract becomes concrete when you ask: what would destroy this?

Circle of Competence

Know what you know. Know where your expertise actually ends. The cost of operating outside your circle isn’t just getting the answer wrong — it’s not knowing you got it wrong.

Use it when: You’re evaluating advice from someone with a financial interest in a particular outcome. You’re about to make a call in a domain you don’t understand well. You’re tempted to be confident in a situation where you should be asking more questions.

Probabilistic Thinking

Think in distributions, not certainties. “This will definitely work” is almost never true. “There’s a 70% chance this works, and here’s what happens if it doesn’t” is how careful thinkers actually operate.

Use it when: Planning assumes only the best case. Someone is speaking in absolutes. You need to prepare for scenarios beyond the most likely one.
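The shift from absolutes to distributions can be made concrete with a quick expected-value sketch. The probabilities and payoffs below are illustrative placeholders, not from any real decision:

```python
# Expected-value sketch for a decision with uncertain outcomes.
# Probabilities and payoffs are made-up numbers for illustration.
outcomes = [
    ("ships on time", 0.70, 10_000),   # (label, probability, payoff)
    ("slips a month", 0.20, 2_000),
    ("fails outright", 0.10, -5_000),
]

# Weight each payoff by its probability and sum.
expected_value = sum(p * payoff for _, p, payoff in outcomes)
print(f"Expected value: {expected_value:,.0f}")  # → 6,900
```

Note the downside term: “70% this works” only means something once you’ve priced in what the other 30% costs.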

Opportunity Cost

Every choice excludes other choices. The true cost of anything is what you give up to get it — not just money, but time, attention, and the alternatives you didn’t pursue. Most people evaluate decisions in isolation.

Use it when: Allocating time or resources. Evaluating a new commitment. When something seems free because it has no direct dollar cost.
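The “nothing is free” point is easy to make concrete with a back-of-the-envelope calculation. The hourly value and hours below are assumed placeholders:

```python
# Opportunity-cost sketch: pricing a "free" recurring commitment in hours.
# All numbers are illustrative placeholders.
hourly_value = 75    # what an hour of your focused time is worth to you
weekly_hours = 3     # recurring time the commitment consumes
weeks = 26           # six months

true_cost = hourly_value * weekly_hours * weeks
print(f"True cost over six months: ${true_cost:,}")  # → $5,850
```

No dollar changes hands, but the commitment still displaces 78 hours of alternatives.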

Margin of Safety

Build in a buffer for things you don’t know you don’t know. Systems running at maximum capacity have no resilience. Projects planned to the hour have no room for reality.

Use it when: Estimating timelines. Building plans that need to survive contact with the real world. Evaluating whether a system can handle unexpected stress.

Incentives

People respond to incentives more reliably than to instructions or intentions. “Show me the incentive and I’ll show you the outcome.” This applies to yourself as much as to anyone else.

Use it when: Behavior doesn’t match stated values — yours or someone else’s. You’re designing a process that needs to produce specific results. You’re confused about why someone keeps acting a certain way.

Feedback Loops

Output becomes input. Positive feedback loops amplify change; negative feedback loops dampen it. Small inputs in a positive feedback loop compound into large outcomes over time. This is why habits matter, and why compounding works in both directions.

Use it when: Analyzing why something is growing or declining faster than expected. Designing systems that self-reinforce. Thinking about habits, skill development, or any long-horizon outcome.
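The “compounding works in both directions” point is just arithmetic. A quick sketch, with rates chosen purely for illustration:

```python
# Compounding in both directions: 1% daily improvement vs. 1% daily decline.
# The 1% rate is illustrative; the point is the asymmetry after a year.
days = 365
improve = 1.01 ** days   # small positive loop, compounded daily
decline = 0.99 ** days   # same-sized negative loop

print(f"1% better every day: {improve:.1f}x after a year")   # ≈ 37.8x
print(f"1% worse every day:  {decline:.3f}x after a year")   # ≈ 0.026x
```

The inputs differ by two percentage points per day; the outputs differ by three orders of magnitude.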

Hanlon’s Razor

Never attribute to malice what can be adequately explained by incompetence, ignorance, or miscommunication. Most people who frustrate you are not out to get you — they’re just operating with different information, different incentives, or less skill in a particular area.

Use it when: You’re assuming someone acted with bad intent. A relationship is strained because of a perceived slight. You’re about to escalate a conflict that might dissolve if you first asked a question.

How They Connect to Personal OS

Mental models aren’t decoration. They’re wired into how the agent thinks alongside you.

During /reflect

When you’re processing what happened yesterday or setting intentions for today, the agent can surface the relevant model. Noticed a project stalled despite effort? The agent might ask: “Is there a bottleneck you’re optimizing around instead of through?” That’s Systems Thinking — the constraint limits the whole, not just one part.

During decisions

When you’re working through a decision — hiring, committing to a new project, dropping something — the agent checks against applicable models rather than just asking “what do you want to do?” It will offer inversion: “What would guarantee this fails?” It will probe second-order effects: “And then what happens six months out?” It will check incentives: “Who benefits from the outcome you’re being steered toward?”

This is documented in the Decision Framework playbook, which uses these models as its thinking substrate.

During weekly and quarterly reviews

The review process tracks not just what got done, but how you thought. Over time, you’ll notice patterns: which biases keep catching you, which models you’re not deploying, where your circle of competence has edges you weren’t aware of. The quarterly review explicitly asks: what did you get wrong, and why?

In the reference library

The full 100+ model reference lives at 03-resources/references/mental-models-fs-blog.md. The agent has access to it during every conversation — so when a relevant model applies to what you’re discussing, it can pull the exact framing, not just a vague gesture toward “think more carefully.”

The Reference Library

The full reference at 03-resources/references/mental-models-fs-blog.md organizes 100+ models into eight categories:

  1. General Thinking Concepts — The Map is Not the Territory, First Principles, Second-Order Thinking, Inversion, Probabilistic Thinking, Occam’s Razor, Hanlon’s Razor
  2. Physics and Chemistry — Leverage, Inertia, Activation Energy, Friction, Catalysts
  3. Biology and Evolution — Natural Selection, Incentives, Ecosystems, Red Queen Effect, Niches
  4. Systems Thinking — Feedback Loops, Bottlenecks, Margin of Safety, Emergence, Law of Diminishing Returns
  5. Numeracy and Mathematics — Compounding, Regression to the Mean, Distributions, Randomness, Multiply by Zero
  6. Microeconomics — Opportunity Cost, Comparative Advantage, Sunk Costs, Supply and Demand, Creative Destruction
  7. Military and Strategy — Asymmetric Warfare, Two-Front War, Seeing the Front
  8. Human Nature and Judgment — Availability Heuristic, Commitment Bias, Social Proof, Hindsight Bias, Action Bias

Each model includes when to spot it in the wild — which makes the reference practical rather than encyclopedic.

Using Models Together

Single models are useful. Combined, they’re more powerful — the latticework effect Munger described.

Inversion + Second-Order Thinking: What would cause this to fail? And what would that failure lead to next? Use this before committing to anything significant.

Incentives + Hanlon’s Razor: Before assuming someone is being difficult, ask what incentives they’re responding to. They’re probably not malicious — they’re just optimizing for a different thing than you are.

Circle of Competence + Opportunity Cost: Stay in your zone of genuine expertise, but be honest about what you’re giving up by staying there. Sometimes expanding your circle is the highest-leverage move.

First Principles + Activation Energy: Once you’ve broken a problem down to its fundamentals, identify the minimum viable starting action. Most projects fail not from bad strategy but from never actually starting.

Margin of Safety + Probabilistic Thinking: Don’t just plan for the most likely scenario — build in buffer for the scenarios you’re underweighting. The black swans that hit you are the ones you thought were only 10% likely.
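This combination lends itself to a quick Monte Carlo sketch: instead of planning to the naive sum of task estimates, simulate the plan under random slippage and read the buffer off the distribution. The task estimates and slip range below are assumptions for illustration:

```python
import random

# Margin-of-safety sketch: Monte Carlo timeline estimate.
# Task estimates (days) and the 1.0–2.0x slip range are illustrative.
random.seed(42)
tasks = [3, 5, 2, 8]      # "best guess" estimates; naive sum = 18 days
trials = 10_000

totals = []
for _ in range(trials):
    # Each task takes its estimate times a random slip factor.
    totals.append(sum(t * random.uniform(1.0, 2.0) for t in tasks))

totals.sort()
p50 = totals[trials // 2]        # median finish
p90 = totals[int(trials * 0.9)]  # the margin-of-safety target
print(f"Naive plan: {sum(tasks)} days; median: {p50:.1f}; 90th pct: {p90:.1f}")
```

Planning to the 90th percentile rather than the naive sum is the margin of safety, derived from the scenarios you were underweighting.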

Common Mistakes

Forcing the model. Not every situation needs a named framework. If you’re reaching for a model to sound rigorous, you’re using it wrong. The model should clarify — if it’s adding complexity, set it aside.

Justifying a decision already made. Post-hoc rationalization wearing the costume of structured thinking. If you’ve already decided and you’re now picking the model that supports it, you’ve defeated the purpose. Models are for deciding, not for defending.

Confusing the map for the territory. The Map is Not the Territory is itself one of the models — and the most important meta-point about all the others. No model is reality. Each one is a simplification that’s useful in certain conditions and misleading in others. Use them as lenses, not as facts.

Collecting models instead of using them. Knowing about 100 models and using 10 fluently beats knowing about 10 and using none. Depth over breadth.

Key Files

  • 03-resources/references/mental-models-fs-blog.md — Full 100+ model reference, organized by category
  • 03-resources/playbooks/decision-framework.md — How to run a decision using these models as the thinking substrate
  • 00-cockpit/quarterly-review.md — The review template that asks what you got wrong and which biases showed up

Next Steps

  • OKRs — How mental models support the quarterly planning process
  • Weekly and Quarterly Reviews — Where you track which models you used and which biases caught you
  • Getting Things Done — The capture and clarify system that feeds decisions into a trusted process