Canvas

Giving AI output a place to live,
evolve, and become something real

C3 Generative AI — Canvas Workspace
Company C3.ai
My Role Product Designer
Scope Designed Canvas for structured, long-form AI output across products
Team Multiple PMs, Engineering, Data Science
Tools Figma
01 — Context

When AI output outgrew the chat window

Chat is good at answering questions. It’s not built for creating something you need to keep. Responses disappear into scrollback. Edits mean re-prompting. A campaign brief or structured report feels fragile inside a conversation thread.

At the same time, multiple product teams were embedding generative AI into their applications. Each team had different requirements and formats. The challenge: design a workspace that works across all of them without starting from scratch for each.

02 — The Problem

AI was generating great content. Users couldn’t do much with it.

The gap wasn’t AI quality. It was what happened after. A document arrived in a chat bubble. To use it: copy it, paste it elsewhere, reformat it, share it. Each step broke the connection between what the AI produced and how the user could act on it.

“How do we create a persistent workspace where AI output doesn’t just land, it lives? Where users can iterate on it, own it, and take it somewhere?”

Before Canvas — ephemeral output in chat

Before: AI responses lived and died in the conversation scroll

03 — Concept

The Canvas Resource: a new kind of output

The core concept was the Canvas Resource: a structured, editable block that lives in the AI interface and behaves differently from a chat message. Four properties defined it from the start:

Editable — Full formatting and structure control. The AI generates; the user owns.

Interactive — Give feedback, request changes, iterate inline without leaving the workspace.

Actionable — Canvas content can trigger downstream actions: updating an object, sending a message, launching a workflow.

Composable — Resources fit inside larger flows. A brief or a table isn’t standalone: it’s part of something bigger.

This gave cross-functional teams a shared vocabulary. Alignment happened because everyone agreed on what a Canvas Resource needed to do, wherever it appeared.
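The four properties imply a minimal contract for what any Canvas Resource must expose. A sketch of that contract, with all names (`CanvasResource`, `requestRevision`, `parentFlowId`, etc.) as illustrative assumptions rather than the actual C3 implementation:

```typescript
// Hypothetical sketch of the Canvas Resource contract implied by the
// four properties. Names are assumptions, not the shipped C3 API.
interface CanvasAction {
  label: string;
  run: (content: string) => void; // e.g. update an object, send a message
}

interface CanvasResource {
  id: string;
  content: string;                          // editable: the user owns the body
  requestRevision(feedback: string): void;  // interactive: iterate inline
  actions: CanvasAction[];                  // actionable: downstream triggers
  parentFlowId: string | null;              // composable: part of a larger flow
}

// Minimal factory showing the contract in use.
function createResource(id: string, content: string): CanvasResource {
  return {
    id,
    content,
    requestRevision(feedback: string) {
      // In a real system this would re-prompt the AI; here we just
      // record the feedback inline to show the interaction shape.
      this.content = `${this.content}\n\n[revision requested: ${feedback}]`;
    },
    actions: [],
    parentFlowId: null,
  };
}
```

Framing the resource as an interface rather than a component is what let each product team supply its own rendering while keeping the behavior contract fixed.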

04 — Design

A transition that feels earned, not abrupt

When an AI response meets the criteria for a Canvas Resource (long-form, structured, meant to persist), Canvas activates automatically. Users don’t switch modes. They just keep working. Resources link back to the conversation as anchors, so moving between dialogue and document feels like one continuous thought.
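The activation logic above can be pictured as a small promotion heuristic. This is an illustrative sketch only: the threshold value and field names are assumptions, not the shipped rules.

```typescript
// Illustrative promotion heuristic — assumed threshold and fields,
// not the actual C3 activation logic.
interface AiResponse {
  text: string;
  hasStructure: boolean; // e.g. headings, tables, or lists detected
}

const LONG_FORM_THRESHOLD = 600; // characters; an assumed cutoff

function shouldPromoteToCanvas(response: AiResponse): boolean {
  const longForm = response.text.length >= LONG_FORM_THRESHOLD;
  // Promote only when output is both long-form and structured;
  // short conversational answers stay in the chat thread.
  return longForm && response.hasStructure;
}
```

The deliberate restraint matters as much as the trigger: a long but unstructured answer stays in chat, so Canvas never interrupts a conversation that doesn’t need it.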

Search to Canvas transition flow

AI response in Search becomes an editable Canvas resource

Full-screen editing was non-negotiable. Enterprise users creating briefs or structured plans need space to think. Full-screen removes all chrome and puts content first.

Canvas view (contextual)

Canvas within the app layout

Full-screen editing mode

Full-screen mode for focused work

Canvas full experience — all states

All Canvas states: Search link, Canvas view, full-screen editing

05 — Expansion

A text editor became a content system

We started with a rich text editor. Canvas quickly expanded as new needs emerged: tables, email templates, form components. The four properties (editable, interactive, actionable, composable) became the admission test for whether a new format belonged in Canvas.
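One way to picture the resulting type system is as a discriminated union over formats, with shared behavior living outside the union. Field names here are hypothetical, chosen to show the shape rather than the real schema.

```typescript
// Hypothetical sketch of the Canvas format union; field names are
// assumptions for illustration, not the real schema.
type CanvasResourceType =
  | { kind: "document"; markdown: string }
  | { kind: "table"; columns: string[]; rows: string[][] }
  | { kind: "email"; subject: string; body: string }
  | { kind: "form"; fields: { name: string; required: boolean }[] };

// Shared behavior branches on `kind`, so adding a format means adding
// one union member plus one switch arm — not a new editor.
function describeResource(resource: CanvasResourceType): string {
  switch (resource.kind) {
    case "document":
      return "rich text document";
    case "table":
      return `table with ${resource.columns.length} columns`;
    case "email":
      return `email: ${resource.subject}`;
    case "form":
      return `form with ${resource.fields.length} fields`;
  }
}
```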

Canvas resource types — text, table, email, form

The Canvas type system: documents to structured data to email templates

06 — Alignment

Design as the integrating layer

Multiple PMs had different visions for how AI output should behave in their contexts. My job was partly to design, partly to mediate. I authored a unified interaction specification in Figma: shared behaviors, edge cases, and system logic across all Canvas implementations. It gave stakeholders something concrete to align around.

The spec also made engineering conversations more grounded. We could point to specific interactions and have real discussions about feasibility instead of negotiating in abstractions.

Canvas interaction spec — Figma documentation

The unified interaction spec: shared behaviors, edge cases, system logic

07 — Trade-offs

The tensions we held

Consistency vs Flexibility

Shared behaviors had to work for both a marketing brief and generated code. The framework needed principles, not rules.

Automatic Trigger vs Explicit Control

Canvas activates when output meets criteria. Clear rules for when it should, and deliberate restraint for when it shouldn’t.

AI Continuity vs Clean Workspace

Users needed conversation context and document space. The transition model kept both without forcing a choice.

08 — Outcome

A workspace that became infrastructure

Key learning

Enterprise AI design isn’t really about interface styling. It’s about defining how system-aware content behaves across contexts. The hardest problems aren’t visual. Get the mental model right and every subsequent decision gets easier.

Prototype

Explore the prototype
