My personal framework for human-AI interface design
As I’ve increasingly integrated AI into my daily work & thought processes over the past year, I've been observing how a tool’s interface choices shape how I interact with the AI and how I think while I’m doing it.
In general when I’m using AI interfaces I want to:
feel like I’m in control
know what the AI can do
treat the AI as a thought partner
When I don’t get that, it’s often because of user interface choices — not limitations of the underlying AI models. So I wanted to identify some design goals that I think get a system much closer to the experiences I’m looking for.
This isn't meant to be a comprehensive or general-purpose framework for human-AI interface design (that would be absurd) but rather a personal set of guidelines I'm developing to shape my own work, thinking, and evaluations, particularly for conversational AI tools that function as a blend of thought partner, cognitive extension, and copilot.
The Core Principles
1. Reciprocity
Whatever capabilities the AI has, the human should have equivalent direct access through a user interface.
Why this matters:
Reciprocity prevents the AI from becoming a gatekeeper to functionality and maintains a balance of power. It also creates a shared understanding of possibilities between the user and the AI, so collaboration leads to learning and empowerment rather than dependency.
Plus — by giving the end user options for when to work through the AI and when to work independently — reciprocity lets human users choose the balance of command vs conversation in their inputs and keep a conversational flow.
2. Transparency
The AI's context — including tool specs, wiring, and usage results — should be visible to the human user.
Why this matters:
Quality of outputs depends heavily on what’s happening behind the scenes: what tools the AI knows about, what information it’s storing or retrieving, what intermediate steps it’s taken, and of course how it’s been instructed to behave.
When these elements are hidden, users can’t form accurate mental models of the system and can only probe at it entirely from the outside. That makes it harder to collaborate effectively — and harder to anticipate its limitations, interpret its choices, and correct its misunderstandings.
Opaque tools add to the difficulty because they make even external probing feel risky. In an email client, for example, if I push a sparkle button or chat with a sidebar assistant, will I get a harmless quick summary, a lengthy new document I didn’t want, a complete reorganization of my inbox archives, or an email accidentally sent without my permission?
Exposing the AI’s context and how it’s situated within an application environment makes the system’s behavior more predictable, debuggable, and learnable by making it legible in advance.
3. Inceptability
The user should be able to manipulate the conversation history at any point.
Why this matters:
The ability to edit conversation history — in full, including AI-provided messages — empowers users to refine their thinking process, make efficient use of tokens, and explore the relationship between inputs and outputs. (It also serves to remind users that the AI’s past messages are part of the input, which is too easy to forget.)
On a more concrete level, it allows users to correct misconceptions, inject information that emerged later, or experiment with different framings of the same problem. It recognizes that thought isn't always linear or perfectly articulated on the first try, and that the conversation is a chaotic system. So users should be able to reshape, refine, and retry a conversation as their understanding evolves.
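One way to picture this principle: a conversation history is just a list of messages, and every message — including the AI's — is part of the next input. A minimal sketch, where `Message` and `edit` are hypothetical names for illustration, not any real chat API:

```python
# Hypothetical sketch: conversation history as an editable list of messages.
# Any message, including the assistant's, can be rewritten before continuing.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Message:
    role: str      # "user" or "assistant"
    content: str

history = [
    Message("user", "Summarize this report in three bullets."),
    Message("assistant", "1. Revenue grew. 2. Costs fell. 3. Outlook unclear."),
]

def edit(history, index, new_content):
    """Return a new history with one message rewritten in place."""
    return (history[:index]
            + [replace(history[index], content=new_content)]
            + history[index + 1:])

# Correct the AI's own earlier message before the conversation continues,
# so the fixed version becomes part of all future inputs:
history = edit(history, 1, "1. Revenue grew 12%. 2. Costs fell. 3. Outlook unclear.")
```

The point of the immutable-list framing is that "editing the past" is cheap and non-destructive: each edit produces a new candidate history you can send, compare, or discard.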
4. Conversational atomicity
Users should be able to organize their interactions into well-bounded conversations — each isolated in design & state, fully reversible, and manipulable as a unit.
Why this matters:
Treating conversations as distinct, portable units allows users to organize their thinking, control how the AI accumulates knowledge or adopts styles, and maintain clearer privacy boundaries.
A good conversation should be fully detachable from the system it operates in, able to leave no permanent trace. This kind of reversibility creates a safety net for experimentation: adding to memory, editing a shared doc, or even publishing a website are all actions you can roll back simply by deleting or undoing the conversation that caused them.
I want conversations to behave like floppy disks: you insert one, use it for a specific purpose, then eject it. Each disk stores its own blend of memories and applications. You can label it, duplicate it, hand it off, or load it into a different machine later. And in a perfect world, any changes it causes should be easy to trace back and reverse just by popping the disk back out.
Current systems often pretend to isolate conversations more than they do — state and style leaks across them invisibly, and external side effects are barely even auditable.
And features like splitting, merging, summarizing, annotating, sharing, or passing a transcript into a new conversation would let users treat conversations as modular building blocks: not just chat logs, but workflow primitives.
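The floppy-disk idea can be sketched as a conversation that logs every external side effect alongside the action that reverses it, so "ejecting" the conversation rolls everything back. All names here are illustrative assumptions, not a real system:

```python
# Hypothetical sketch: a conversation as a reversible unit. Every external
# side effect is recorded with an undo action; ejecting the conversation
# replays the undos in reverse order.

class Conversation:
    def __init__(self, label):
        self.label = label
        self.messages = []     # the transcript itself
        self._undo_stack = []  # one (description, undo callable) per side effect

    def record_effect(self, description, undo):
        """Log a side effect together with the action that reverses it."""
        self._undo_stack.append((description, undo))

    def eject(self):
        """Roll back every recorded side effect, most recent first."""
        while self._undo_stack:
            _, undo = self._undo_stack.pop()
            undo()

# Usage: a conversation adds an entry to shared long-term memory,
# then is ejected, leaving no permanent trace.
memory = []
convo = Conversation("trip planning")
memory.append("User prefers window seats")
convo.record_effect("added memory entry", undo=lambda: memory.pop())
convo.eject()  # memory is empty again
```

Real side effects (a sent email, a published page) are obviously harder to reverse than a list append, which is exactly why the undo log has to be designed in from the start rather than bolted on.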
5. Control over persistence
The user should have full visibility into and control over any long-term memory the AI system maintains.
Why this matters:
This one is fairly self-evident: it’s fundamental for user agency and privacy.
It’s also inherently tied to the previous four principles — users should be able to directly manipulate long-term memory, and should easily understand what information is ephemeral and what will persist for future interactions.
6. Sliding-scale autonomy
Users should have fine-grained control over the AI's level of autonomous action based on their own risk assessment.
Why this matters:
An AI that sends unsupervised emails, deploys website edits, and runs database updates without requesting permission every time sounds terrifying — unless that’s exactly why you chose to use it today and intentionally connected it to specific contacts, sites, and databases.
On the flip side, an AI that requests permission every time it wants to search the web, check the weather, or check its long-term memory sounds unusable — unless you’re role-playing or want a completely fresh perspective.
Users should be able to specify which actions the AI may take whenever it judges them appropriate; which actions require explicit review and confirmation from the user; and which actions may only be taken when the user directly requests them.
This principle acknowledges that different users have different comfort levels with AI autonomy, and that the same user might want different levels of control in different contexts.
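The three tiers above amount to a per-action permission policy. A minimal sketch — the action names, tier labels, and `decide` helper are all hypothetical, chosen only to illustrate the shape of such a policy:

```python
# Hypothetical sketch: a sliding-scale autonomy policy mapping each
# action to one of three permission tiers.
from enum import Enum

class Tier(Enum):
    AUTONOMOUS = "act whenever the AI judges it appropriate"
    CONFIRM = "ask the user before acting"
    MANUAL = "act only on a direct user request"

# The user sets this per action, per context:
policy = {
    "web_search": Tier.AUTONOMOUS,
    "send_email": Tier.CONFIRM,
    "deploy_site": Tier.MANUAL,
}

def decide(action, user_requested=False):
    """Resolve what the system should do for a proposed action."""
    tier = policy.get(action, Tier.CONFIRM)  # unknown actions default to asking
    if tier is Tier.AUTONOMOUS:
        return "proceed"
    if tier is Tier.CONFIRM:
        return "ask user"
    return "proceed" if user_requested else "refuse"
```

Defaulting unknown actions to the confirm tier is the conservative choice: new capabilities start out supervised until the user explicitly loosens the leash.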
7. Non-linearity
Interactions should not be artificially constrained to sequential, linear flows.
Why this matters:
The assumption that conversational interfaces must follow a strict linear timeline is an unnecessary constraint inherited from human conversation patterns. The conversation metaphor is helpful, but strict linearity slows down thought and limits provisional exploration.
Users should be able to maintain multiple parallel threads of a conversation with an AI, similar to how we manage working knowledge by juggling & organizing simultaneous browser tabs, or how a database developer uses a SQL workbench with several queries running at once.
On a very practical level: these models are very sensitive to their context, and you can often get dramatically better results by guiding their attention narrowly with one instruction, question, or comment at a time. I often want to fork a conversation by sending three or four completely independent messages and then merge the results back into a single track.
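The fork-and-merge workflow can be sketched as branching one shared history into parallel threads and folding the replies back into a single track. Here `ask` is a hypothetical stand-in that merely echoes its prompt, not a real model call:

```python
# Hypothetical sketch: forking a conversation into parallel branches,
# each carrying one focused question, then merging the results.

def ask(history, prompt):
    """Stand-in for a model call; returns (new_history, reply)."""
    reply = f"answer to: {prompt}"
    return history + [("user", prompt), ("assistant", reply)], reply

base = [("user", "Here is my draft design doc..."),
        ("assistant", "Got it. What would you like to focus on?")]

# Fork: each question gets the model's full attention in its own branch,
# with no interference from the sibling questions.
questions = ["Critique the naming.", "Check the error handling.", "Suggest tests."]
branches = [ask(base, q) for q in questions]

# Merge: fold the independent replies back into a single main track.
summary = "\n".join(reply for _, reply in branches)
merged, _ = ask(base, f"Merging parallel threads:\n{summary}")
```

Because each branch shares `base` but diverges afterward, the model sees one narrow instruction at a time — which is exactly the attention-guiding effect described above.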
Why I think this framework matters
Current conversational interfaces often treat AI as a mix of power tool and oracle, rather than a new kind of thought partner. They prioritize simplicity and immediacy over user control and transparency — and foster misleading illusions of predictability and authority.
As AI systems grow more capable and integrated into our lives, I hope their interfaces grow more human. Not more anthropomorphic: more manipulable, exploratory, and tangible.
In the real world, tools and conversations help us think in our own rhythms, experiment freely, and build new capabilities over time. AI tools and AI conversations should too, rather than locking us into rigid workflows and teaching us learned helplessness.