What 10 Interviews Taught Me About Desirable AI Products

Learning from users about what makes a product desirable

Rania Bailey
Jun 10, 2024

Introduction

Participants exhibited a broad range of feelings about AI technologies, particularly the recently proliferated large language models (LLMs), which are the specific type of “AI” discussed in these interviews. Reactions to these tools and the features built on them ranged from outright avoidance to enthusiastic embrace, with most participants landing somewhere in the middle. The key themes that distinguished desirable AI products were respectfulness, a focus on user-centric utility, and the trustworthiness of training data. An AI product or feature that accounts for these factors is markedly more desirable.

Image generated using ChatGPT-4o

Respect

Being respected is an essential part of having a positive experience. AI tools that respect user expertise and authority are more likely to be received well than tools that (even accidentally) patronize or condescend to users. As humans, we enjoy exercising our capabilities, and user-centered AI tools must be careful to preserve opportunities to do so even while reducing the work allocated to users (1). It’s essential that the work the AI alleviates is work users did not want to do in the first place.

An example of a tool that respects its users is Notion. Notion invites users to ‘Use Notion AI as a partner in thought to get started, or when you feel stuck on what’s next,’ leaving ample space for users to express and develop their own ideas (2). The implicit message is that the user’s ideas are valuable, that the AI can help develop them further, and that the AI will not replace the user’s creativity and thoughtfulness.

Notion goes even further, citing the limitations of the AI assistant in its current form. It specifies the risks associated with using it (misinformation, bias) and invites the user to learn more about potential bias (2). This shows respect for the user’s practice of critical thinking and gently encourages them to be thoughtful in using the AI, both expressions of respect for how the user already operates.

Utility

Users are often already operating in efficient, thoughtful ways, and do not want to adopt new tools just for the sake of using them, especially tools with a reputation for unreliability. When new tools integrate seamlessly into existing workflows instead of scrambling or re-organizing them, however, users find value in AI assistance. One way to achieve this is by homing in on the small, boring tasks that make up larger endeavors.

Focusing on small tasks within larger efforts provides the kind of utility users find convenient. It also reinforces respect for users’ capabilities by granting them easy oversight of the AI tool’s output and by making it easy to override the AI when appropriate. Small errors are easy to remedy and can be caught before they grow into much larger risks or failures, which increases the resilience of the system and, in turn, its utility.

That said, there are still larger tasks for which AI can prove to be a valuable asset. Analyzing large data sets to identify anomalies or previously unknown correlations takes advantage of AI’s distinct ability to process volumes of information that human minds cannot. This can sometimes lead to “shortcut learning”, in which an AI exploits spurious correlations in its training data (3). If shortcut learning is recognized for what it is, the discovery of correlations, and the inference that those correlations amount to correct classification is successfully withheld (no easy task), then this use case offers a promising tool for making observations that may not have been possible without AI. Extending human capabilities, while encouraging the application of human oversight and skepticism, contributes to the utility of an AI product. This increases its desirability, too.
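To make shortcut learning concrete, here is a minimal, hypothetical sketch (my own toy illustration, not drawn from the interviews or the cited study). A simple classifier is trained on data where a spurious feature happens to track the labels almost perfectly, and its accuracy collapses once that accidental correlation breaks:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# A genuine but weak signal, and labels derived noisily from it.
signal = rng.normal(size=n)
labels = (signal + rng.normal(size=n) > 0).astype(int)

# A spurious "shortcut" feature that tracks the labels almost
# perfectly in the training data (think: a scanner watermark
# that happens to co-occur with one diagnosis).
shortcut = labels + rng.normal(scale=0.1, size=n)

X_train = np.column_stack([signal, shortcut])
model = LogisticRegression(max_iter=1000).fit(X_train, labels)

# At deployment time the accidental correlation is gone: the
# shortcut feature is now just noise, unrelated to the labels.
signal_new = rng.normal(size=n)
labels_new = (signal_new + rng.normal(size=n) > 0).astype(int)
shortcut_new = rng.normal(scale=0.1, size=n)
X_new = np.column_stack([signal_new, shortcut_new])

print("training accuracy:  ", model.score(X_train, labels))
print("deployment accuracy:", model.score(X_new, labels_new))
# Training accuracy is near-perfect; deployment accuracy falls
# sharply, because the model learned the shortcut, not the signal.
```

Under these toy assumptions, the human’s job is to notice that a near-perfect training score is itself suspicious, exactly the kind of oversight and skepticism described above.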

Trustworthiness

Humans desire to be trustworthy, and human oversight and skepticism, consistently applied to AI outputs, increase the trustworthiness of those outputs. When a user leverages those outputs, then, they can be more confident that the information they’re using is trustworthy, and by extension, that they themselves are worthy of being trusted. This is a very desirable feeling. Achieving trustworthiness in a product dramatically enhances its desirability, and nothing contributes more to this than transparent, consensually acquired training data.

In my interviews, users consistently expressed concern over the fair and consensual acquisition of model training data. Some participants said they preferred less powerful models with trusted, legally and consensually acquired training data over more capable models whose initial inputs were less reliable, or even unknown.

Many users expressed frustration with the disrespect shown to fellow humans, such as artists, authors, and other creators, when training data is acquired without freely given consent or creator knowledge. When creators can consent to sharing their data, though, and especially when they’re compensated for it, overall user trust in the system increases. Knowing that training data can be trusted activates a “trust network”, raising the perceived reliability of the AI and sparking users’ curiosity about its possibilities.

Conclusion

There are many opportunities to respond respectfully to user concerns over rapidly developing AI technologies. There may not be a perfect way to balance the drive to explore possibilities, the risks posed to the public, and the utility of AI tools in a business environment. But there are certainly ways to design AI products with respect and safety in mind, and the products that do so tend to be received better by their audiences than those that do not. Respecting users’ expertise, focusing on utility over capability, and using trustworthy data are all significant opportunities to increase the likelihood that an AI product is deemed desirable by its audience. An AI product that addresses these concerns is designed with humans in mind and contributes significantly to developing a healthy human-AI interaction paradigm.

Footnotes

1 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2944661/

2 https://www.notion.so/help/guides/using-notion-ai

3 https://pubmed.ncbi.nlm.nih.gov/38744921
