Frame: Towards a Discipline for Serious Decisions
An experiment in structured decision-making with AI
Most organizations oscillate between two unsatisfying modes of decision-making.
On one end is the lone executive, deciding quickly, intuitively, and often privately — decisive, but fragile. On the other is the committee process: slides, pre-reads, alignment meetings, and consensus theater — thorough, but slow and diluted.
What's missing is a discipline for thinking clearly about consequential decisions, one that preserves human judgment while making trade-offs, risks, and assumptions explicit.
Frame is an experiment in building toward that discipline.
It's not an AI "decision engine." It doesn't optimize choices or score alternatives. Instead, it's an attempt to use structured context and large language models to examine decisions the way an experienced strategist would — carefully, skeptically, and with respect for complexity.
The real problem: incomplete views
Typically, the ingredients of a decision are scattered.
Financial assumptions live in a spreadsheet someone made six months ago. Strategy lives in a deck from last year's offsite that three people actually remember. Political risks get discussed through side channels after the meeting. Who actually owns the decision? Well, it depends on who you ask.
When we ask an AI to "help us decide," we give it a paragraph. Maybe a prompt. And then we're surprised when the answer sounds plausible but feels off.
The issue isn't the model.
The issue is that we're looking at the decision through too narrow an aperture, one that leaves out half of what matters.
Frame starts there. Before asking for analysis, it asks: what are we actually seeing, and what are we pretending isn't relevant?
Decisions as structured context, not prompts
The core shift is this: treat a decision not as a single question to be answered, but as a packet of integrated context.
A decision packet makes visible:
- the observable facts (not the spin)
- the alternatives that are actually under consideration (not the ones in the deck)
- the value at stake and the cost of delay (in real terms, not aspirational ones)
- who owns the decision in practice (not on the org chart)
- what cannot break (the true constraints, not the negotiable ones)
- how this connects to strategy (if it does at all)
- what would make us revisit this later (because we will, and that's expected, not failure)
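The packet above can be sketched as a plain data structure. This is a hypothetical shape for illustration, not Frame's actual schema; the field names are assumptions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DecisionPacket:
    """One decision, captured as integrated context rather than a prompt."""
    facts: list[str]              # observable facts, not the spin
    alternatives: list[str]       # options actually under consideration
    value_at_stake: str           # real terms, including the cost of delay
    owner: str                    # who owns the decision in practice
    hard_constraints: list[str]   # what cannot break
    strategy_link: Optional[str]  # how this connects to strategy, if at all
    revisit_triggers: list[str]   # what would make us revisit this later

    def gaps(self) -> list[str]:
        """Empty fields are the places you're guessing. Surface them."""
        missing = [
            name
            for name in ("facts", "alternatives", "hard_constraints", "revisit_triggers")
            if not getattr(self, name)
        ]
        if not self.strategy_link:
            missing.append("strategy_link")
        return missing
```

The point of the `gaps()` method is the forcing function: an incomplete packet announces exactly where it is incomplete, instead of letting the omission hide in a prompt.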
This isn't documentation theater. It's a forcing function.
When you work a decision this way, the gaps show up fast. The places where you're guessing. The places where authority and accountability don't line up. The trade-off everyone's been avoiding.
The AI doesn't fix that.
It just makes it harder to ignore.
Lenses: changing how we look
Most bad decisions aren't wrong because data was missing. They're wrong because the perspective was too narrow.
I'm working with "lenses," bounded ways of examining a decision. These aren't checklists. They're not scoring rubrics. They shape the questions that get asked and the risks that surface.
For example:
An organizational design lens catches things like: you're asking someone to deliver a result, but you didn't give them budget authority. Or: you've split ownership of a decision between two people who have opposite incentives. Or: the bottleneck isn't the process, it's that one person who's been here for 15 years and is the only one who knows how the old system works.
An adaptive leadership lens distinguishes between problems you can solve with a new policy and problems that require people to actually change how they work. You can write all the guidance documents you want, but if the real issue is that people don't trust the new system, your document isn't going to fix that.
A strategy lens asks whether this decision reinforces the strategy you said you had or quietly erodes it. Because most strategy doesn't die in a big dramatic pivot. It dies in a hundred small decisions that seemed fine at the time. Frame treats strategy as a persistent anchor — decisions are evaluated against it, not the other way around.
The value isn't the lens itself.
It's what becomes visible when you apply it.
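One way to picture a lens: a bounded set of probes that either surface a finding or stay quiet. This is a hypothetical sketch of the organizational design lens described above; the field names and function are assumptions, not Frame's actual API:

```python
# Hypothetical sketch: a lens is a bounded set of probes, not a scoring rubric.
# Each probe inspects the decision context and surfaces a finding, or stays quiet.

def org_design_lens(packet: dict) -> list[str]:
    """Surface mismatches between accountability, authority, and ownership."""
    findings = []
    for role in packet.get("roles", []):
        # Accountable for a result but no authority over the means: classic mismatch.
        if role.get("accountable_for_result") and not role.get("budget_authority"):
            findings.append(
                f"{role['name']} is accountable for a result without budget authority."
            )
    owners = packet.get("decision_owners", [])
    if len(owners) > 1:
        findings.append(
            f"Ownership is split across {len(owners)} people: check their incentives."
        )
    return findings
```

Note what the lens does not do: it doesn't score or recommend. It only makes a specific class of problem visible.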
AI as a disciplined examiner, not an oracle
Most AI tools want to be helpful. Agreeable. Confident.
Frame is engineered from the ground up to resist that.
Instead of "what's the best answer," it runs decisions through explicit reasoning modes:
- A mode that asks uncomfortable questions before making any recommendation. The kind of questions a good advisor asks when they're not worried about being polite.
- A mode that pressure-tests a path for fragility. What breaks if your timeline slips? What happens if the person you're counting on leaves? What are you assuming that might not be true?
- A lightweight mode for decisions that genuinely don't deserve heavy analysis. Sometimes the right answer is "just pick one and move on."
Each mode works from the same decision packet. No hidden memory. No conversational drift. No quietly changing the question halfway through.
The packet is a human-readable, auditable memory: no hidden state, no dropped context that leaves us asking "how exactly did we get here?"
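The three modes can be sketched as pure functions over one shared packet (a plain dict here). The names and fields are hypothetical, not Frame's actual API; the point is the shape: same input in, same questions out, no state carried between runs:

```python
def interrogate(packet: dict) -> list[str]:
    """Ask the uncomfortable questions before recommending anything."""
    questions = []
    if not packet.get("owner"):
        questions.append("Who actually owns this decision?")
    for alt in packet.get("alternatives", []):
        questions.append(f"What would have to be true for '{alt}' to fail?")
    return questions

def pressure_test(packet: dict) -> list[str]:
    """Probe the chosen path for fragility."""
    risks = [
        f"Constraint at risk if the timeline slips: {c}"
        for c in packet.get("hard_constraints", [])
    ]
    if packet.get("key_person"):
        risks.append(f"Single point of failure: {packet['key_person']}")
    return risks

def triage(packet: dict) -> str:
    """Lightweight mode: some decisions don't deserve heavy analysis."""
    if packet.get("reversible") and packet.get("low_stakes"):
        return "Just pick one and move on."
    return "Run the full examination."
```

Because every mode reads the same packet and returns its output in full, re-running a mode is repeatable and the trail from context to question is inspectable.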
This keeps the decision with the human. AI doesn't own it. It sharpens it.
What makes this different
It's easy to call this "structured prompts" or "decision templates," but that's not quite it.
What I'm after is a discipline that:
- Maps constraints before proposing actions.
- Names trade-offs instead of smoothing them over.
- Treats loss and resistance as signal, not noise.
- Defines when to stop reasoning (because at some point more analysis is just procrastination).
- Sets re-decision triggers up front, not after things go sideways. Revisiting a decision isn't a failure. It's best practice.
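Re-decision triggers, set up front, can be as simple as named conditions checked against later observations. A minimal sketch, with invented trigger names and fields:

```python
# Hypothetical sketch: triggers are declared when the decision is made,
# then checked later against observed reality. Firing is expected, not failure.
TRIGGERS = [
    ("churn exceeds 5%", lambda obs: obs.get("churn", 0.0) > 0.05),
    ("delivery slipped past the deadline", lambda obs: obs.get("weeks_late", 0) > 0),
]

def due_for_revisit(observations: dict) -> list[str]:
    """Return the names of the triggers that have fired."""
    return [name for name, fired in TRIGGERS if fired(observations)]
```

The design choice worth noting: the triggers are part of the decision record itself, so "should we revisit this?" becomes a check, not a debate.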
Over time, this creates something more interesting than better individual decisions. You start to see patterns in how decisions get structured, where organizations consistently get stuck, and where strategy quietly degrades when no one's watching.
Most tools don't surface that. And you definitely don't get it from chatting with an AI in a browser.
Why work on this now
Organizations are making more consequential decisions, faster, and with less margin for error.
At the same time, AI makes it trivially easy to generate answers that sound good but are hollow or only half thought through.
Frame is an attempt at a counterweight.
It intentionally doesn't promise speed or certainty. The goal is clarity, coherence, auditability, and fewer decisions you have to remake in six months because you missed something obvious.
It's an experiment in using AI to support judgment, not replace it.
Important decisions deserve better than gut feel, slide decks, or AI slop.
That's the gap I'm working on with Frame.