A Product-Led Approach: Designing AI People Will Trust
How desirability in AI depends on clarity, control, and confidence, not technical sophistication.

In Part 2 of this series, we explored feasibility: whether an AI idea can be built given current data, technology, and constraints. That phase is often misunderstood as a requirement to prove everything up front. In reality, early feasibility is not about certainty; it is about learning.
In our experience, the most effective teams use feasibility to answer a far more pragmatic question: is this idea concrete enough to be tested with real users today? Not optimized. Not production-ready. Simply real enough to expose assumptions and surface friction.
This distinction matters more than it first appears. Across organizations, we repeatedly see technically successful prototypes consume months of effort, only to stall once they encounter real users. The costliest failure in AI is rarely a model that performs poorly; it is a model that performs well and solves the wrong problem, or solves the right problem in a way people do not trust, understand, or adopt.
Desirability, therefore, sits immediately after initial feasibility. Once an idea appears plausible, the next priority is to validate whether it creates meaningful value for people before deeper technical investment begins.
You do not need perfect data, a fully automated pipeline, or production-grade models to learn whether an AI solution will be trusted, adopted, or valued. In fact, waiting for “readiness” often delays the most important insights.
Early desirability testing focuses on experience, not performance. It helps answer questions such as:
- Do people understand what this system is trying to help with?
- Does it fit naturally into how they work or interact?
- Does it increase confidence, reduce effort, or remove friction?
- Where does it create discomfort, hesitation, or resistance?
Organizations can answer these questions through lightweight approaches, such as concierge or human-in-the-loop experiences, or small, safe pilots.
These techniques allow teams to learn quickly and cheaply. If desirability is weak, the organization can pivot or stop before committing to costly infrastructure, data programs, or organizational change.
What does desirability mean in the context of AI?
In product terms, desirability reflects whether a solution solves a problem people actually care about, in a way that fits their context and behavior. For AI, this question is amplified.
AI systems don’t merely support actions; they often influence or mediate decisions. As a result, desirability extends beyond usefulness into trust, clarity, and perceived control.
A technically advanced system that feels opaque or unpredictable can create more friction than benefit. Conversely, a simpler model that explains itself clearly and behaves consistently often gains faster acceptance.
Desirability in AI is therefore not about impressiveness. It is about whether people feel confident enough to rely on it when outcomes matter.
Two lenses on desirability: internal use and customer value
Desirability is always evaluated from the perspective of a specific user. In AI initiatives, the user typically falls into one of two categories: internal users or external customers.
Not every AI solution needs to satisfy both at the same time. What matters is being explicit about which lens applies, because each reveals different risks and requires different forms of validation. Blurring these lenses often leads to solutions that are technically sound but undesirable in practice.
Internal desirability: The gap between deployment and adoption
Internal desirability applies when AI is designed to support employees, teams, or internal decision-makers. In these cases, the central question is not whether the technology works, but whether people will choose to rely on it under real operating pressure.
Internal desirability is shaped by lived experience. It depends on:
- how naturally the system fits into existing workflows,
- whether it reduces effort, uncertainty, or cognitive load,
- how responsibility and accountability are shared between humans and AI,
- and whether incentives make usage feel safe rather than risky.
An internally desirable AI solution is one that people trust enough to use under real pressure, not because they are instructed to, but because it genuinely helps them do their job better.
Customer desirability: The gap between features and perceived value
Customer desirability applies when AI directly affects the customer experience, such as recommendations, personalization, automation, or decision support exposed outside the organization.
Here, desirability is not about adoption mechanics; it is about perceived value. Customers judge the solution based on what they experience:
- does it save time or reduce friction,
- does it feel relevant, fair, and respectful,
- does it behave consistently and predictably,
- and does it align with expectations around control, privacy, and transparency.
A customer-desirable AI solution is one where the value is immediately apparent without explanation, and where the presence of AI actually strengthens trust rather than raising concerns.
When both lenses apply
In some AI initiatives, internal and customer desirability are not separate concerns; they apply simultaneously.
This occurs when an AI system supports internal users, but the effects of its recommendations or actions are directly experienced by customers. In these cases, internal trust and customer confidence are tightly coupled.
A common example is AI-assisted customer service and support.
Consider an AI copilot that suggests responses, summarizes customer issues, or recommends next actions for support agents. From an internal perspective, desirability depends on whether agents feel the system genuinely helps them, reducing cognitive load, improving judgment, and supporting faster resolution without making them feel constrained or monitored.
From the customer’s perspective, the same system is judged entirely differently. Customers do not see the AI; they experience the outcome. They care whether responses feel relevant, empathetic, and consistent, and whether issues are resolved effectively, not merely faster.
If agents do not trust the system, they ignore or override it, leading to inconsistent usage. If customers perceive interactions as scripted, rushed, or impersonal, trust erodes, even if internal efficiency metrics improve. In both cases, the AI may be technically sound yet undesirable in practice.
When both lenses apply, desirability must be validated across the full chain: how the AI supports internal decision-making, and how those decisions translate into customer experience. A useful test is simple: if an internal user hesitates to explain the AI’s influence on an outcome to a customer, desirability has not yet been proven.
How can organizations understand human desirability?
Desirability emerges from observation and empathy, not assumptions.
Organizations should look for moments where people hesitate, double-check, or feel exposed, whether internally or externally. These are often the places where intelligence adds the most value.
Four practices help anchor desirability:
- Observe real behavior, not stated preferences.
- Design for augmentation before automation.
- Make limitations and uncertainty visible.
- Test in real contexts, not controlled demos.
Closing the loop: back to feasibility, then viability
Once desirability is demonstrated through real usage, not mere enthusiasm, the organization can return to feasibility with sharper focus.
At that point, deeper feasibility work becomes purposeful: improving data quality, strengthening models, hardening infrastructure, and addressing edge cases that matter. The effort is guided by evidence of value, not speculation.
Only after feasibility and desirability reinforce each other does it make sense to assess viability: the long-term economics, operating model, and strategic fit of the solution.
In the next part of this series, we will explore viability, examining how organizations determine which AI initiatives are worth sustaining, scaling, and backing over time.