At this point, almost every software domain has launched or explored AI features. Despite the wide range of use cases, most of these implementations have been the same ("let's add a chat panel to our app"). So the problems are the same as well.
Capability Awareness
Open-ended interfaces to AI models have the same problem as every "invisible" interface that came before them. Without a clear set of affordances, people don't know what they can do. The vision of these invisible UIs was always something like "Voice interfaces will work when you can ask them anything". Today it's "AI chat interfaces will work because you can tell them to do anything". Sounds great but...
In reality, even extremely capable systems (like extremely capable people) have limitations. They do some things well, some things ok, and other things poorly. How you ask them to do things also matters as different phrasings yield different results. But without affordances, these guideposts are as invisible as the UI.
I'm pretty certain this is the biggest problem in AI product interfaces today: because large-scale AI models can do so many things (but not all things or all things equally well), most people don't know what they can do nor how to best instruct/prompt them.
Some ways to manage capability awareness with product design:
- Make the AI Models do the Prompting: let AI models rewrite and optimize people's initial prompts for better outcomes.
- Suggested Questions in Conversational UI: give people a sense of what capabilities an AI chat interface has.
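The first of these patterns, making the model do the prompting, can be sketched as a two-pass flow: ask the model to rewrite the person's raw request, then answer the rewritten version. A minimal sketch in Python, assuming a generic `call_model` function that takes a prompt string and returns text (the meta-prompt wording below is illustrative, not any product's actual instruction):

```python
# Sketch of the "make the AI model do the prompting" pattern.
# The rewrite instruction and `call_model` interface are assumptions;
# swap in your own model API.

REWRITE_INSTRUCTION = (
    "Rewrite the user's request below into a clear, detailed prompt. "
    "Make the goal, desired format, and constraints explicit. "
    "Return only the rewritten prompt.\n\n"
    "User request: {user_prompt}"
)

def build_rewrite_prompt(user_prompt: str) -> str:
    """Wrap a person's raw request in a meta-prompt that asks the
    model to optimize it before the real task is run."""
    return REWRITE_INSTRUCTION.format(user_prompt=user_prompt)

def answer_with_rewriting(user_prompt: str, call_model) -> str:
    """Two passes: first have the model improve the prompt,
    then answer the improved prompt."""
    improved = call_model(build_rewrite_prompt(user_prompt))
    return call_model(improved)
```

The person never has to learn prompting conventions; the system quietly upgrades their request into the kind of prompt that yields better results.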
Context Awareness
If capability awareness is knowing what an AI product can do, context awareness is knowing how it did it. The fundamental question here is "what information did an AI product use to provide an answer?" But there are lots of potential answers, especially as agents make use of an increasing number and variety of tools. Some examples of what could be in context (considered in an AI model's response):
- Its own training data? If so, when was the cutoff?
- The history of your session with the model? If so, going how far back?
- The history of all your sessions or a user profile? If so, which parts?
- Specific tools like search or browse? If so, which of their results?
- Specific connections to other services or accounts? If so...
You get the idea. There's a lot that could be in context at any given point, but not everything will be in context all the time because models have context limits. So when people get replies, they aren't sure whether or how much to trust them. Was the right information used, or did the model hallucinate?
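That trade-off, more potential sources than the context limit can hold, is easy to see in code. A minimal sketch of packing context under a token budget, assuming a greedy priority order and a crude characters-per-token estimate (both illustrative assumptions, not any product's actual behavior):

```python
# Sketch of assembling a model's context under a token budget.
# The source list, priority order, and token estimate are all
# illustrative assumptions.

def estimate_tokens(text: str) -> int:
    # Rough heuristic: roughly 4 characters per token.
    return max(1, len(text) // 4)

def assemble_context(sources, budget_tokens: int):
    """Greedily pack context items in priority order until the
    budget runs out. `sources` is a list of (priority, label, text)
    tuples; a lower priority number means more important."""
    included, used = [], 0
    for _, label, text in sorted(sources):
        cost = estimate_tokens(text)
        if used + cost > budget_tokens:
            continue  # dropped: why "everything" is never in context
        included.append((label, text))
        used += cost
    return included
```

Surfacing which sources made it in (and which got dropped) is what turns this internal packing step into context awareness for the person reading the reply.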
Some ways to manage context awareness with product design:
- Background Agents Reduce Context Window Issues: background agents encourage people to use a different context window for each of their discrete tasks.
- Enhancing Prompts with Contextual Retrieval: transform people's instructions into optimized prompts by automatically adding useful context.
- Streaming Citations: add citations to the relevant articles, videos, PDFs, etc. being used to answer a question in real time.
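The streaming-citations pattern can be sketched as a small renderer over an event stream. The event shapes below ("text" and "citation") are an assumed protocol for illustration, not any provider's actual streaming API:

```python
# Sketch of rendering streamed text with inline citations.
# The event dictionary shapes are assumptions, not a real API.

def render_stream(events):
    """Consume a stream of events and build the visible answer,
    numbering each cited source the first time it appears and
    inserting an inline [n] marker at the citation point."""
    answer, sources = [], {}
    for event in events:
        if event["type"] == "text":
            answer.append(event["text"])
        elif event["type"] == "citation":
            url = event["url"]
            n = sources.setdefault(url, len(sources) + 1)
            answer.append(f"[{n}]")
    return "".join(answer), sources
```

Because citations arrive in the same stream as the text, people see where each claim came from as the answer appears, not after the fact.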
Walls of Text
While writing has done an enormous amount to enable communication, it's not the only medium for conveying information and, often, it may not be the best. Despite this, most AI products render the streams of text emitting from AI models as their primary output and they render them in a linear "chat-like" interface. Unsurprisingly, people have a hard time extracting and recalling information by scrolling through long blocks of text.
As the novelty of AI models being able to write text wears off, people increasingly ask for visuals, tables, and other formats like slides and spreadsheets as output instead of just walls of text.
Some ways to manage walls of text with product design:
- Usable Chat Interfaces to AI Models: design solutions for managing lengthy AI model responses.
- The Receding Role of AI Chat: reducing the need to chat back and forth with an AI model to get things done.
- Streaming Inline Images: return not only streaming text and citations but inline images as well.
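One way to move past walls of text is to request structured output and render it in a richer format. A minimal sketch, assuming the model can be prompted to return JSON shaped like `{"columns": [...], "rows": [...]}` (a real product would validate the model's output far more defensively):

```python
# Sketch of rendering a model's structured output as a table
# instead of a wall of text. The JSON shape is an assumption.
import json

def render_as_markdown_table(model_json: str) -> str:
    """Turn a response shaped like {"columns": [...], "rows": [[...], ...]}
    into a markdown table."""
    data = json.loads(model_json)
    header = "| " + " | ".join(data["columns"]) + " |"
    divider = "| " + " | ".join("---" for _ in data["columns"]) + " |"
    body = ["| " + " | ".join(str(cell) for cell in row) + " |"
            for row in data["rows"]]
    return "\n".join([header, divider] + body)
```

The same structured payload could just as easily feed a chart or a slide layout; the point is that the model's output stops being a single stream of prose.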
And More...
Yes, there are other issues with AI products. I'm not suggesting this is a complete list, but it is reflective of what I'm currently seeing over and over in user testing and across multiple domains. But it's still early for AI products so... more solutions and issues to come.