When AI output outgrew the chat window
Chat is good at answering questions. It’s not built for creating something you need to keep. Responses disappear into scrollback. Edits mean re-prompting. A campaign brief or structured report feels fragile inside a conversation thread.
At the same time, multiple product teams were embedding generative AI into their applications. Each team had different requirements and formats. The challenge: design a workspace that works across all of them without starting from scratch for each.
AI was generating great content. Users couldn’t do much with it.
The gap wasn’t AI quality. It was what happened after. A document arrived in a chat bubble. To use it: copy it, paste it elsewhere, reformat it, share it. Each step broke the connection between what the AI produced and how the user could act on it.
“How do we create a persistent workspace where AI output doesn’t just land, it lives? Where users can iterate on it, own it, and take it somewhere?”
Before: AI responses lived and died in the conversation scroll
The Canvas Resource: a new kind of output
The core concept was the Canvas Resource: a structured, editable block that lives in the AI interface and behaves differently from a chat message. Four properties defined it from the start:
Editable
Full formatting and structure control. The AI generates; the user owns.
Interactive
Give feedback, request changes, iterate inline without leaving the workspace.
Actionable
Canvas content can trigger downstream actions: updating an object, sending a message, launching a workflow.
Composable
Resources fit inside larger flows. A brief or a table isn’t standalone: it’s part of something bigger.
This gave cross-functional teams a shared vocabulary. Alignment happened because everyone agreed on what a Canvas Resource needed to do, wherever it appeared.
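The four properties can be sketched as a contract. This is a hypothetical TypeScript sketch for illustration; every name here (`CanvasResource`, `CanvasAction`, the field names) is an assumption, not the product's actual API:

```typescript
// Hypothetical sketch of the Canvas Resource contract.
// Names and shapes are illustrative, not the product schema.

// Actionable: canvas content can trigger downstream system actions.
type CanvasAction =
  | { kind: "updateObject"; objectId: string }
  | { kind: "sendMessage"; recipient: string }
  | { kind: "launchWorkflow"; workflowId: string };

interface CanvasResource {
  id: string;
  // Editable: the user owns the content after generation.
  content: string;
  edit(next: string): void;
  // Interactive: feedback and change requests happen inline.
  requestChange(feedback: string): void;
  // Actionable: actions this resource can trigger downstream.
  actions: CanvasAction[];
  // Composable: a resource anchors back into a larger flow.
  parentConversationId: string;
}

// Minimal factory showing the contract in use.
function createResource(id: string, conversationId: string): CanvasResource {
  let content = "";
  const feedbackLog: string[] = []; // inline feedback accumulates here
  return {
    id,
    get content() { return content; },
    edit(next) { content = next; },
    requestChange(f) { feedbackLog.push(f); },
    actions: [],
    parentConversationId: conversationId,
  };
}
```

The point of the contract is the shared vocabulary: any surface that implements these four behaviors can host a Canvas Resource.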
A transition that feels earned, not abrupt
When an AI response meets the criteria for a Canvas Resource (long-form, structured, meant to persist), Canvas activates automatically. Users don’t switch modes. They just keep working. Resources link back to the conversation as anchors, so moving between dialogue and document feels like one continuous thought.
AI response in Search becomes an editable Canvas resource
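The activation rule can be sketched as a predicate over the response. The field names and the length threshold below are assumptions for illustration, not the shipped logic:

```typescript
// Hypothetical activation check -- criteria names and the
// threshold are illustrative assumptions, not the shipped rules.
interface AIResponse {
  text: string;
  hasStructure: boolean;   // e.g. headings, tables, or form fields detected
  meantToPersist: boolean; // e.g. the user asked for a brief, plan, or report
}

const LONG_FORM_THRESHOLD = 600; // characters; an assumed cutoff

function shouldActivateCanvas(response: AIResponse): boolean {
  const longForm = response.text.length >= LONG_FORM_THRESHOLD;
  // All three criteria from the design: long-form, structured, persistent.
  return longForm && response.hasStructure && response.meantToPersist;
}
```

Requiring all three criteria is the "deliberate restraint" half of the rule: a long but unstructured answer, or a structured but throwaway one, stays in chat.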
Full-screen editing was non-negotiable. Enterprise users creating briefs or structured plans need space to think. Full-screen removes all chrome and puts content first.
Canvas within the app layout
Full-screen mode for focused work
All Canvas states: Search link, Canvas view, full-screen editing
A text editor became a content system
We started with a rich text editor. Canvas quickly expanded as new needs emerged: tables, email templates, form components. The four properties (editable, interactive, actionable, composable) became the test for whether a new format belonged in Canvas.
The Canvas type system: documents to structured data to email templates
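One way to model a content system like this is a discriminated union, so every consumer must handle every format. The shapes below are an illustrative sketch, not the product schema:

```typescript
// Hypothetical discriminated union over Canvas content types --
// field names are illustrative assumptions, not the product schema.
type CanvasContent =
  | { kind: "document"; richText: string }
  | { kind: "table"; columns: string[]; rows: string[][] }
  | { kind: "emailTemplate"; subject: string; body: string }
  | { kind: "form"; fields: { label: string; inputType: string }[] };

// Behavior branches exhaustively on `kind`, so adding a new format
// forces every consumer of CanvasContent to account for it.
function describe(content: CanvasContent): string {
  switch (content.kind) {
    case "document": return "rich text document";
    case "table": return `table with ${content.columns.length} columns`;
    case "emailTemplate": return `email: ${content.subject}`;
    case "form": return `form with ${content.fields.length} fields`;
  }
}
```

That exhaustiveness is the code-level analogue of the four-property test: a format either satisfies the whole contract or it doesn't belong in Canvas.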
Design as the integrating layer
Multiple PMs had different visions for how AI output should behave in their context. My job was partly to design, partly to mediate. I authored a unified interaction specification in Figma: shared behaviors, edge cases, and system logic across all Canvas implementations. It gave stakeholders something concrete to align around.
The spec also made engineering conversations more grounded. We could point to specific interactions and have real discussions about feasibility instead of negotiating in abstractions.
The unified interaction spec: shared behaviors, edge cases, system logic
The tensions we held
Consistency vs Flexibility
Shared behaviors had to work for a marketing brief and a code output. The framework needed principles, not rules.
Automatic Trigger vs Explicit Control
Canvas activates when output meets criteria. Clear rules for when it should, and deliberate restraint for when it shouldn’t.
AI Continuity vs Clean Workspace
Users needed conversation context and document space. The transition model kept both without forcing a choice.
A workspace that became infrastructure
- Canvas became a shared AI workspace layer across products: one model for how AI output behaves, wherever it appears.
- Eliminated copy-paste workflows for enterprise users. Content goes from generation to editing to system action without leaving the interface.
- The interaction spec reduced ambiguity and became the reference document for engineering implementation across teams.
Enterprise AI design isn’t really about interface styling. It’s about defining how system-aware content behaves across contexts. The hardest problems aren’t visual. Get the mental model right and every subsequent decision gets easier.