
Design Patterns For AI Interfaces

New design patterns for better experiences that drive value to users and to businesses. Never trust people who get only 1 scoop of ice cream.


Friendly note: get a friendly UX bundle for Smart Interface Design Patterns 🍣 and Measuring UX, with live UX training coming up.

So you need to design a new AI feature for your product. How would you start? How do you design flows and interactions? And how do you ensure that the new feature doesn't get abandoned by users after a few runs?

In this newsletter, I'd love to share a very simple but systematic approach to how I think about designing AI experiences. Hopefully it will help you get a bit more clarity about how to get started.


Design Patterns for AI Interfaces, a practical guide by Sharang Sharma.

The Receding Role of AI Chat #

One of the key recent shifts is a slow move away from traditional "chat-like" AI interfaces. As Luke Wroblewski wrote, when agents can use multiple tools, call other agents, and run in the background, users orchestrate AI work more; there's a lot less chatting back and forth.


Messaging UI slowly starts to feel dated, and chat UI fades into the background. By Luke Wroblewski.

In fact, chatbots are rarely a great experience paradigm, mostly because the burden of articulating intent efficiently lies on the user. In practice, that's remarkably difficult to do well, and very time-consuming.

Chat doesn't go away, of course, but it's being complemented with task-oriented UIs (temperature controls, knobs, sliders, buttons, semantic spreadsheets, infinite canvases), with AI providing predefined options, presets, and templates.


Agentic AI design patterns, with more task-oriented UIs, rather than chat. By Luke Wroblewski.


There, AI emphasizes the work, the plan, the tasks: the outcome, rather than the chat input. The result is experiences that truly amplify value by sprinkling a bit of AI in places where it delivers real value to real users.

To design better AI experiences, we need to study the 5 key areas that we can shape.

โœ๏ธ 1. Input UX: Expressing Intent #

Conversational AI is a very slow way of helping users express and articulate their intent. Usability tests show that users often get lost in editing, reviewing, typing, and re-typing. It's painfully slow, often taking 30-60 seconds just for input.

As it turns out, people have a hard time expressing their intent well. So instead of asking them to write prompts manually, it's a good idea to ask AI to write a prompt to feed itself.
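In code, this "meta-prompting" loop is tiny. Here's a minimal TypeScript sketch of the idea; callModel is a hypothetical helper standing in for whichever LLM API you actually use.

```typescript
// Hypothetical helper standing in for whichever LLM API you use.
declare function callModel(prompt: string): Promise<string>;

// Step 1: ask the model to turn a rough user request into a
// well-structured prompt, instead of making the user write one.
async function expandIntent(roughRequest: string): Promise<string> {
  return callModel(
    "Rewrite the following rough request as a detailed, unambiguous " +
      "prompt for a generative model. Make subject, style, and " +
      "constraints explicit.\n\nRequest: " + roughRequest
  );
}

// Step 2: feed the generated prompt back to the model, ideally after
// letting the user review and edit it first.
async function runWithMetaPrompt(roughRequest: string): Promise<string> {
  const refinedPrompt = await expandIntent(roughRequest);
  return callModel(refinedPrompt);
}
```

Showing the generated prompt to users before running it keeps them in control and teaches them, over time, what good prompts look like.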


Flora AI allows you to modify images and videos via nodes.

With Flora AI, users can still write prompts, but they can also express their intent by connecting various sources with nodes. Instead of elaborately explaining to AI how the pipeline should work, we attach nodes and commands on a canvas.


With Krea.ai, users can move abstract shapes (on the left) to explain their goal to AI and study the outcome (on the right).

With input for AI, being precise is slow and challenging. Instead, we can abstract away the object we want to manipulate and give AI precise input by moving that abstracted object on a canvas. That's what Krea.ai does.
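As a rough sketch (not Krea's actual implementation, and with illustrative names and ranges), such canvas input could translate into precise parameters like this:

```typescript
// A draggable proxy object on the canvas: the user manipulates its
// geometry instead of typing a prompt.
interface CanvasObject {
  x: number;     // 0..1, relative horizontal position in the frame
  y: number;     // 0..1, relative vertical position in the frame
  scale: number; // relative size of the object
}

// Translate the object's geometry into precise conditioning parameters,
// so nobody has to verbalize "slightly left of center, fairly large".
function toGenerationParams(obj: CanvasObject) {
  return {
    subjectPosition: { x: obj.x, y: obj.y },
    subjectSize: obj.scale,
  };
}
```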

In summary, we can minimize the burden of typing prompts manually with AI-generated pre-prompts, prompt extensions, query builders, and voice input.

๐Ÿ—‚๏ธ 2. Output UX: Displaying Outcomes #

AI output doesn't have to be merely plain text or a list of bullet points. It must help drive people to insights, faster. For example, we could visualize output with additional explanations based on the user's goals and motivations.


Visualizing outcome through style lenses. By Amelia Wattenberger.

For example, Amelia Wattenberger visualized AI output for her text editor PenPal by adding style lenses through which to explore the content. The output can be visualized by sentence length, or along scales such as Sad-Happy and Concrete-Abstract.


Aino.ai, an AI GIS Analyst for urban planning.

The outcome could also be visualized on a map, which is of course expected for an AI GIS analyst. Users can also access individual data layers, turning them on and off to explore the data on the map.

We can also use forced ranking and prioritization to suggest the best options and avoid choice paralysis, even if a user asks for the top 10 recommendations. And we can think about ways to present results as a data table, a dashboard, a visualization on a map, or a structured JSON file, for example.
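As a sketch, here's how an interface might request a forced ranking as structured JSON instead of prose; callModel is again a hypothetical stand-in for your LLM API.

```typescript
// Hypothetical helper standing in for whichever LLM API you use.
declare function callModel(prompt: string): Promise<string>;

// The structure we want back: a forced ranking instead of a wall of text.
interface RankedOption {
  rank: number;      // 1 = best match for the user's goal
  title: string;
  rationale: string; // a one-sentence "why" to support the insight
}

async function getRankedOptions(query: string): Promise<RankedOption[]> {
  const raw = await callModel(
    `Return the top 5 options for "${query}" as a JSON array of objects ` +
      "with keys rank, title, rationale, ordered best-first. " +
      "Respond with JSON only."
  );
  // The parsed result can feed a data table, a dashboard, or a map layer.
  return JSON.parse(raw) as RankedOption[];
}
```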

๐Ÿ–๏ธ 3. Refinement UX: Tweaking Output #

Users often need to cherry-pick some bits from the AI output and bring them together in a new place; often they need to expand on one section, synthesize bits from another, or just refine the outcome to meet their needs.


Adobe Firefly suggests options and sliders to adjust the outcome.

Refinement is usually the most painful part of the experience, with many fine details left to users to explain elaborately. But we can use good old-fashioned UI controls, like knobs, sliders, and buttons, to improve that experience, similar to how Adobe Firefly does it (image above).
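As a rough sketch (not Firefly's actual implementation), slider values from plain UI controls can simply be folded into the regeneration request:

```typescript
// Slider state from plain UI controls; the ranges are illustrative.
interface RefinementControls {
  brightness: number;  // -1..1
  detail: number;      // 0..1
  stylization: number; // 0..1
}

// Fold the control values into the regeneration request, so users can
// tweak results by dragging sliders instead of re-describing them.
function buildRefinementPrompt(base: string, c: RefinementControls): string {
  const tweaks = [
    c.brightness > 0 ? "brighter lighting"
      : c.brightness < 0 ? "moodier lighting" : "",
    c.detail > 0.5 ? "highly detailed" : "",
    `stylization strength ${Math.round(c.stylization * 100)}%`,
  ].filter((t) => t !== "");
  return `${base}. Adjustments: ${tweaks.join(", ")}.`;
}
```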


Presets living on the side in Elicit, an example by Maggie Appleton.

We can also use presets and bookmarks, and allow users to highlight specific parts of the outcome that they'd like to change, with contextual prompts acting on the highlighted parts of the output rather than global prompts.
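Under the hood, a contextual prompt can be as simple as splicing a rewritten selection back into the output. A minimal sketch, with callModel again as a hypothetical LLM helper:

```typescript
// Hypothetical helper standing in for whichever LLM API you use.
declare function callModel(prompt: string): Promise<string>;

// Apply an instruction only to the user's selection, then splice the
// rewritten span back in, leaving the rest of the output untouched.
async function refineSelection(
  fullText: string,
  start: number,
  end: number,
  instruction: string // e.g. "make this more formal"
): Promise<string> {
  const selected = fullText.slice(start, end);
  const rewritten = await callModel(
    `Rewrite this passage: "${selected}"\nInstruction: ${instruction}\n` +
      "Return only the rewritten passage, with no commentary."
  );
  return fullText.slice(0, start) + rewritten + fullText.slice(end);
}
```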


Tweaking specific parts of the outcome, on Grammarly.

๐Ÿ—“๏ธ 4. AI Actions: Tasks To Complete #

With AI agents, we can now also allow users to initiate tasks that AI performs on their behalf, such as scheduling events, planning, and deep research. Users could also ask AI to sort results or filter them in a specific way.
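A minimal sketch of what such actions might look like under the hood: a small registry the agent can pick from, loosely mirroring common "tool calling" schemas (the names here are illustrative, not any particular vendor's API):

```typescript
// Each entry describes a task the agent may run on the user's behalf:
// a name, a description the model sees when choosing, and a handler.
interface AIAction {
  name: string;
  description: string;
  run: (args: Record<string, unknown>) => Promise<string>;
}

const actions: AIAction[] = [
  {
    name: "schedule_event",
    description: "Create a calendar event with a title, date, and attendees.",
    run: async (args) => `Scheduled: ${JSON.stringify(args)}`, // stub
  },
  {
    name: "sort_results",
    description: "Sort the current result set by a given field.",
    run: async (args) => `Sorted by ${String(args.field)}`, // stub
  },
];
```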


Suggesting actions on Elicit, an example by Maggie Appleton.

But we can also add features that help users put AI output to better use, e.g., by visualizing it, making it shareable, allowing transformations between formats, or posting it to Slack, Jira, and so on.
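For example, posting a finished summary to Slack takes little more than an incoming webhook (the URL below is a placeholder for your own):

```typescript
// Post a finished AI summary to a Slack channel via an incoming webhook.
// Slack's incoming webhooks accept a JSON body with a "text" field.
async function postToSlack(summary: string): Promise<void> {
  const webhookUrl = "https://hooks.slack.com/services/T000/B000/XXXX"; // placeholder
  const res = await fetch(webhookUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text: summary }),
  });
  if (!res.ok) throw new Error(`Slack webhook failed: ${res.status}`);
}
```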

โš™๏ธ 5. AI Integration: Where Work Happens #

Many AI interactions are locked within a specific product, but good AI experiences happen where the actual work happens. It would be quite unusual to expect a dedicated section of a product for Autocomplete, for example, yet that's exactly what we do for AI features.


Dovetail AI integrates with plenty of platforms, from Jira and Notion to Slack and Teams, where the actual work happens.

The actual boost in productivity comes when users rely on AI as a co-pilot or little helper in the tools they use daily for work: seamless integrations into Slack, Teams, Jira, GitHub, and the like. Dia Browser and Dovetail are great examples of this in action.

Wrapping Up #

Across these 5 areas, we can explore ways to minimize the cost of interacting with a text box, allowing users to act on points of interest directly by tapping, clicking, selecting, highlighting, and bookmarking.

Many products are obsessed with being AI-first. But you might be way better off being AI-second instead. The difference is that we focus on user needs and sprinkle a bit of AI across customer journeys where it actually adds value.

And: AI products don't have to be AI-only. There is a lot of value in mapping onto the mental models that people have adopted over the years and enhancing them with AI, similar to how the browser's autofill does it, rather than leaving users in front of a frightening and omnipresent text box.

