Frame in Practice
Frame Demo
What You're Actually Seeing in the Demo
The previous post covered the why behind Frame. This one is about the how: specifically, what's happening in the demo video and why the structure looks the way it does.
Two Modes, On Purpose
If you watch the demo, you'll notice there are two distinct modes: Explore and Decide. This isn't a feature split. It's an intentional separation between two different kinds of thinking.
In Explore mode, ideas can be half-formed. You're asking questions, testing assumptions, generating draft language. Nothing is binding. The conversation is provisional: you're allowed to be wrong, change direction, contradict yourself.
In Decide mode, you're editing the decision memo directly. That file is the source of truth. What's written there is what future reasoning builds on. The AI can read the canon. It can't write to it.
That boundary matters more than it might seem. I've watched enough decisions get quietly rewritten by helpful tools and collaborative editing to know that the moment you blur the line between "thinking out loud" and "this is what we decided," you introduce drift that's almost impossible to trace later. Exploration can be collaborative. Commitment has to be conscious.
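One way to picture that read/write boundary is as a minimal sketch. The demo doesn't show Frame's internals, so every name here (`CanonView`, `DecisionMemo`, `assistant_draft`) is hypothetical; the only thing the sketch asserts is the rule from the post: the assistant reads a snapshot, and only an explicit human action writes.

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: the assistant cannot mutate what it is handed
class CanonView:
    """Read-only snapshot of the decision memo given to the assistant."""
    text: str

class DecisionMemo:
    """The decision file: the single source of truth."""

    def __init__(self, text: str = ""):
        self._text = text

    def view(self) -> CanonView:
        # The assistant only ever receives a frozen snapshot.
        return CanonView(self._text)

    def commit(self, new_text: str, confirmed_by_human: bool) -> None:
        # Writes to the canon require an explicit human action.
        if not confirmed_by_human:
            raise PermissionError("only the human writes to the canon")
        self._text = new_text

def assistant_draft(canon: CanonView, question: str) -> str:
    # Exploration: provisional language only; the memo is untouched.
    return f"[draft on {question!r}, grounded in canon: {canon.text!r}]"
```

The point of the `confirmed_by_human` flag is exactly the "commitment has to be conscious" idea: exploration produces drafts, but nothing becomes canon as a side effect.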
What's Happening Behind the Scenes During Exploration
When the assistant responds in the demo, it's not working from a loose prompt. Each time, it reads a structured context bundle: the decision memo, relevant strategy context, selected lenses, organizational constraints, and any applicable adaptive leadership guidance. That context is injected fresh every time — no hidden memory, no accumulated state, no drift.
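A stateless bundle like that can be sketched as a pure function: nothing is remembered between calls, so every response is grounded in exactly what exists on disk at that moment. All names below are illustrative, not Frame's actual data model.

```python
def build_context_bundle(
    decision_memo: str,
    strategy_context: str,
    lenses: list[str],
    constraints: list[str],
    adaptive_guidance: str = "",
) -> str:
    """Assemble the full context injected into a single assistant turn.

    Called fresh on every turn: no caching, no accumulated chat state,
    so there is nothing to drift.
    """
    sections = [
        "## Decision memo (canon, read-only)\n" + decision_memo,
        "## Strategy context\n" + strategy_context,
        "## Selected lenses\n" + "\n".join(f"- {l}" for l in lenses),
        "## Constraints\n" + "\n".join(f"- {c}" for c in constraints),
    ]
    if adaptive_guidance:
        sections.append("## Adaptive leadership guidance\n" + adaptive_guidance)
    return "\n\n".join(sections)

bundle = build_context_bundle(
    decision_memo="Consolidate the two platform teams by Q3.",
    strategy_context="2025 strategy: reduce coordination overhead.",
    lenses=["organizational design"],
    constraints=["no net headcount change"],
)
```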
So when you apply a lens (organizational design, political dynamics, execution reality), nothing about the decision changes. Only the perspective does. An org design lens might surface that you've given someone responsibility without authority. A political dynamics lens might reveal that your biggest risk isn't the decision itself but how it lands with a stakeholder you haven't been thinking about. Execution reality might show you that the timeline assumes a capacity your team doesn't actually have.
Same decision. Different things become visible.
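In code terms, swapping a lens changes only the framing question paired with the decision; the decision text itself is never touched. A hypothetical sketch (these lens names and questions are mine, not Frame's):

```python
# Hypothetical lens definitions: each is just a different question
# asked of the same, unmodified decision text.
LENSES = {
    "org_design": "Where does responsibility sit without matching authority?",
    "political_dynamics": "Who loses something, and how will this land with them?",
    "execution_reality": "What capacity does the timeline silently assume?",
}

def apply_lens(decision: str, lens: str) -> str:
    """Pair the untouched decision with one lens's framing question."""
    return f"Decision (unchanged):\n{decision}\n\nLens question:\n{LENSES[lens]}"

decision = "Move on-call ownership to the platform team next quarter."
prompts = {name: apply_lens(decision, name) for name in LENSES}
```

Every prompt contains the identical decision; only the question varies, which is why different risks become visible without the decision ever being edited.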
One Change Worth Calling Out
In earlier versions, adaptive leadership was a toggle: something you opted into. That felt wrong, and in this version the toggle has been removed.
If a decision is classified as Adaptive or Mixed, adaptive considerations are automatically active. You don't choose to think about loss. You don't opt into resistance. If the work requires people to change how they operate or what they believe, that reality is there whether you select a lens or not.
This matters because adaptive constraints aren't a perspective you apply. They're part of the terrain. Treating them as optional was creating exactly the kind of false choice Frame is supposed to prevent.
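That rule is easy to state precisely (function and label names are mine, not Frame's): what's active is derived from the decision's classification, never from the user's lens selection.

```python
def active_guidance(classification: str, selected_lenses: set[str]) -> set[str]:
    """Return the guidance that applies to a decision.

    Adaptive considerations are not a lens you pick: if the decision is
    classified Adaptive or Mixed, they are active no matter what was selected.
    """
    active = set(selected_lenses)
    if classification in {"Adaptive", "Mixed"}:
        active.add("adaptive_leadership")  # derived from terrain, not chosen
    return active
```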
Why the UI Is Deliberately Plain
The UI looks simple. That's not a limitation; it's a choice.
There's no "optimize" button. No scoring engine. No ranking of alternatives. Instead, there are small friction points in specific places: you have to save the canon explicitly; you have to rebuild the context bundle after edits; you have to manually promote drafts into the decision memo.
Those constraints are doing real work. They reinforce that clarity isn't something the system produces for you. You earn it by doing the thinking. The design actively resists the temptation to let AI quietly decide on your behalf, because the whole point of Frame is that the human owns the decision, and ownership requires friction.
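Those friction points can be made concrete: after any canon edit the bundle is stale, and nothing downstream runs until the human explicitly rebuilds it. A sketch under hypothetical names (this is my reading of the workflow, not Frame's implementation):

```python
class FrameSession:
    """Deliberate friction: every step that changes the record is explicit."""

    def __init__(self, canon: str):
        self.canon = canon
        self._bundle = None  # no bundle until explicitly built

    def edit_canon(self, text: str) -> None:
        self.canon = text
        self._bundle = None  # editing invalidates the bundle; no auto-rebuild

    def rebuild_bundle(self) -> None:
        # Rebuilding is its own step the human performs after edits.
        self._bundle = f"[bundle built from]\n{self.canon}"

    def ask(self, question: str) -> str:
        if self._bundle is None:
            raise RuntimeError("bundle is stale; rebuild it explicitly")
        return f"answer to {question!r} using {self._bundle!r}"

    def promote_draft(self, draft: str) -> None:
        # Drafts move into the memo only by an explicit human action.
        self.edit_canon(self.canon + "\n" + draft)
```

Nothing here is automated away on purpose: the stale-bundle error is the friction, surfaced as an interruption rather than silently papered over.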
What to Watch For
As you watch the demo, a few things are worth paying attention to:
Missing assumptions surface fast once they're actually written down. This surprised me early on: gaps that were invisible in conversation became obvious the moment the decision packet existed as a document.
Different lenses genuinely reshape the same decision. Organizational structure reveals one set of risks. Political dynamics reveals another. Execution reality reveals a third. The decision doesn't change. Your understanding of it does.
Pressure-testing changes confidence, not by collapsing uncertainty but by bounding it. You don't end up more certain. You end up knowing more precisely what you're uncertain about, which turns out to be much more useful.
And nothing "final" happens in chat. The chat is a thinking surface. The decision file is the record. If adaptive work is relevant, it's derived from the classification, not selected from a menu.
What This Changes Over Time
This version of Frame doesn't promise better outcomes through automation. It does something subtler: it increases the probability that when you commit to a path, you actually understand what you're giving up, what will resist, what can't break, what would make you revisit, and what assumptions you're standing on.
The first decision is useful. The second is better. By the fifth, patterns start to emerge — where authority consistently mismatches responsibility, where strategy erodes through small "reasonable" choices, where execution capacity is systematically underestimated, where adaptive losses get repeatedly ignored.
Frame becomes less about any single decision and more about trajectory. Not just "what did we decide?" but "how do we decide?" and whether the way we decide is making us better or slowly worse.
What Frame Isn't
It's not replacing expertise or eliminating intuition. It's not removing politics. It's not guaranteeing correct answers or accelerating every decision.
It's for decisions where trade-offs are real, constraints are binding, ambiguity can't be eliminated, and the cost of getting it wrong or having to re-decide is high. For people already operating under those conditions, the shift is small but meaningful. Better framing doesn't guarantee success. It materially improves the quality of the choice you make and the resilience of that choice when conditions change.
If You're Trying to Decide Whether This Is Relevant
Ask yourself where decisions are being quietly remade in your organization. Where strategy drifts through small compromises. Where political realities show up late instead of early. Where "we didn't realize that would matter" keeps happening.
If none of that resonates, Frame is probably unnecessary.
If it does, the demo isn't really about AI. It's about building a practice around the decisions that matter most and doing it consistently enough that it changes trajectory, not just outcomes.