
Why Your Team Doesn't Trust AI and How You Can Change It


4 September, 2025
Sjoerd Pieksma

Imagine hiring a new employee who can process all your order data in seconds and predict your upcoming orders with 95% accuracy. However, no one knows how they do it. Artificial Intelligence promises to revolutionize how we work, yet many teams remain hesitant. In the 6th edition of our Data & AI Monitor, just 28% of professionals say they trust AI outputs as much as human judgment. Almost half are skeptical or outright distrustful.

This hesitation is nothing new. Every major technological shift, from the internet to smartphones to self-driving cars, has been met with skepticism. AI is no exception, especially because AI failures make headlines, job security fears linger, and opaque algorithms leave users in the dark.

But this distrust of AI doesn’t have to be permanent. Companies and organizations can move beyond it by addressing concerns head-on and demonstrating AI’s value thoughtfully. In this way, managers can transform skepticism into confidence.


What’s fueling the distrust of AI

It is easy to imagine being the subject of AI decisions: it feels much like handing your team a magic 8-ball, a black box tasked with making important calls. Nobody can see inside it, and most people don’t understand how it works, yet occasionally someone shakes it and treats the answer as a ‘trusted’ decision. Because the reasoning behind that decision is invisible, the outcome is often confusing and hard to explain. Would everyone trust it from the get-go?

This is how many employees view AI. The technology often operates like an oracle of Delphi: an inscrutable force, powerful yet mysterious. When an AI-powered hiring tool rejects a candidate everyone seems to like, or a diagnostic system suggests an unusual treatment, users will naturally question its judgment. Without transparency about how the tool works and what rules it follows, skepticism spreads.

Naturally, the problem is amplified by users worried about their job security. As AI takes on more automation, employees worry that their expertise may become obsolete. A marketing team may reject AI content tools, fearing their creative skills will be devalued. A financial analyst may distrust predictive models, concerned they’ll eventually replace human judgment entirely.

To make matters worse, AI failures are often reported far more loudly than its successes. From biased recruiting algorithms to chatbots dispensing dangerous medical advice to content farms producing AI slop or copies of copyrighted works, these stories reinforce the perception that AI is often unreliable and, in the wrong hands, dangerous. Such headlines sow distrust even in the savviest teams, so skepticism is understandable.

The main point is this: any tool imposed from above, without guidance from and collaboration with domain experts, is all but doomed to fail. Employees distrust systems that operate in isolation and are asked to make, or contribute to, critical decisions without “human in the loop” oversight. Your employees welcome helpful tools that make their work easier, but they want to work with AI, not be managed by it.

Building trust one brick at a time

The path towards AI acceptance for your employees and colleagues begins with demystification, or in other words, creating awareness and understanding. Instead of treating AI as a puzzling black box, take the time to explain it in simple terms. If you are using a tool that analyzes which customers are buying tickets, at what time and price, simply state that the AI evaluates behavioral patterns to flag potential issues. With understanding comes trust, so even a basic explanation of how a new AI tool functions can be the first step toward trust and adoption in your organization.

Our report indicates that advancing AI maturity requires more than simple, generic training; it demands tailored approaches that meet the specific needs of different industries, organizations, and roles. Interpretable AI models are much easier to work with for people inside and outside your organization, so clarifying which decision tree the tool follows in its reasoning (for example, “flagged due to issues in credit score history”) is a fundamental step. For more complex tools or models, it helps to provide employees with clear audit trails that let them retrace the AI’s logic and see how it reached a particular conclusion. Every step towards transparency turns AI from a mysterious oracle into a clear and predictable collaborator.
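To make that concrete, here is a minimal sketch of what an interpretable model with a basic audit trail could look like. It assumes scikit-learn is available; the feature names and the tiny training set are hypothetical, chosen only to echo the credit-history example above, not taken from any real tool.

```python
# A minimal sketch of an "explainable by default" model, assuming scikit-learn.
# Feature names and training data are hypothetical, purely illustrative.
from sklearn.tree import DecisionTreeClassifier, export_text

FEATURES = ["credit_score", "years_of_history", "missed_payments"]

# Hypothetical historical decisions (label 1 = approve, 0 = reject).
X = [
    [720, 10, 0],
    [680, 4, 1],
    [550, 2, 3],
    [600, 1, 2],
    [710, 8, 0],
    [580, 3, 4],
]
y = [1, 1, 0, 0, 1, 0]

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Plain-text rules employees can read: the "which decision tree is being used"
# part of the transparency step described above.
print(export_text(model, feature_names=FEATURES))

# A simple audit trail: log the input and the outcome, so a reviewer can
# later retrace how the tool reached its conclusion.
applicant = [605, 2, 2]
decision = int(model.predict([applicant])[0])
print({"input": dict(zip(FEATURES, applicant)),
       "decision": "approve" if decision else "reject"})
```

Even a small artifact like the printed rule set gives a team something concrete to discuss and challenge, which is exactly what a black box denies them.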

We have already mentioned tools that are imposed from above. Managers often assume employees will embrace any tool that makes work easier, but positioning matters. AI has to be described as an assistant, not a replacement. As our report indicates, almost half of respondents agree that AI will reduce the need for people, and this is a worry companies need to address. Within a sales team, for example, AI should be positioned so that team members can bring more value by focusing on client relationships while the AI handles the boring, mundane data entry. Medical teams will recognize the importance of a tool that suggests potential treatments but still leaves the final decision to them. Position AI tools as partners that make work easier, and resistance will slowly fade over time.

A good idea is to start with small, mundane tasks. Test AI first in areas where a mistake is not a huge issue but the benefits (in cost or time savings) are still appreciated. Automating expense reports or meeting scheduling proves value without risking major errors. Once those small tasks are established, the tool can gradually be expanded to more critical functions. These small victories matter: when your team sees that AI is positioned as a collaborator and is indeed pulling its weight, they will (slowly) become advocates of change.

Still, human oversight remains essential. AI should inform decisions, not make them unilaterally. A content generator can easily draft emails to customers, but a human should always give them a final edit before they are sent. This four-eyes approach, the “human in the loop”, not only prevents public (and reputationally costly) mistakes but also gives teams the time they need to build confidence in AI’s capabilities.
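As an illustration, the “final human edit” can be enforced in the workflow itself rather than left to habit. The sketch below uses hypothetical `generate_draft` and `send_email` placeholders standing in for a team’s own tooling; the point is simply that nothing goes out without explicit approval.

```python
# A minimal human-in-the-loop sketch: the AI drafts, a person approves or
# rejects, and only then does anything get sent. All functions here are
# hypothetical placeholders, not a specific vendor's API.
from dataclasses import dataclass

@dataclass
class Draft:
    recipient: str
    body: str
    approved: bool = False

def generate_draft(recipient: str, topic: str) -> Draft:
    # Placeholder for the actual content generator.
    return Draft(recipient, f"Hello, here is an update on {topic}.")

def human_review(draft: Draft) -> Draft:
    # The reviewer sees the draft and must explicitly confirm it.
    print(f"Draft to {draft.recipient}:\n{draft.body}")
    answer = input("Send as-is? [y/N] ").strip().lower()
    draft.approved = answer == "y"
    return draft

def send_email(draft: Draft) -> None:
    if not draft.approved:
        raise ValueError("Refusing to send: no human approval recorded.")
    print(f"Sent to {draft.recipient}")

if __name__ == "__main__":
    send_email(human_review(generate_draft("customer@example.com", "your order")))
```

The design choice worth noting is that the approval flag lives in the workflow, so skipping the human step fails loudly instead of silently.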

AI Implementation: the key is better data

Our report identifies data quality as the single biggest barrier to successful AI adoption, with 51% of organizations citing it as their top challenge, more than double the share of any other technical hurdle.

Digging deeper, fewer than half of the companies surveyed report having clear and enforceable data policies, with only 33% achieving proper data ownership structures and 40% describing their data strategy as “clear and actionable”.

These statistics reveal why so many AI initiatives fail to gain traction. Teams quickly lose faith in systems that, fed by fragmented data sources, produce inconsistent results or make decisions based on outdated or incomplete information, and whose outputs cannot be explained because the underlying data lacks lineage.
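As a small illustration of catching such problems before they reach a model, here is a sketch of basic pre-model data checks. It assumes pandas; the column names, the `crm_export_v2` source label, and the 5% missing-data threshold are all hypothetical.

```python
# A minimal sketch of pre-model data checks: completeness, freshness, and a
# simple lineage stamp. Column names and thresholds are hypothetical.
import pandas as pd

def check_quality(df: pd.DataFrame, source: str) -> dict:
    report = {
        "source": source,                                  # basic lineage: where the data came from
        "rows": len(df),
        "missing_ratio": float(df.isna().mean().mean()),   # completeness across all columns
        "oldest_record": str(df["updated_at"].min()),      # freshness signal
    }
    report["ok"] = report["missing_ratio"] < 0.05
    return report

if __name__ == "__main__":
    df = pd.DataFrame({
        "customer_id": [1, 2, 3],
        "order_value": [120.0, None, 80.5],
        "updated_at": pd.to_datetime(["2023-01-10", "2024-06-01", "2025-02-14"]),
    })
    print(check_quality(df, source="crm_export_v2"))
```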

Bad data will poison AI trust

The companies we surveyed highlight several specific data management failures that directly undermine AI credibility:

Legacy Data Trap

The manufacturing and healthcare sectors, where only 21-23% of organizations effectively track AI value, show how legacy systems lead to vicious cycles. When AI models ingest decades of inconsistent operational data, their outputs automatically inherit those flaws. Employees recognize the imperfections immediately, bad results follow, and skepticism grows.

Governance Void

Financial services firms, despite relatively mature AI adoption (40% use generative AI), still struggle with trust: 40% of teams respond neutrally or negatively when asked whether AI is trustworthy. And what is the main culprit? Only a little more than half of respondents actively monitor AI risks, which means models are left to run without proper data quality safeguards.

Ownership Gap

Professional services lead in AI integration (half of respondents report clarity on use cases), but even in this sector a third of the responding companies cite data ownership as unfinished business. Without clear ownership of customer, product, or operational data, AI tools are left to make decisions without accountability. Not only employees will notice; customers will too.

Broken data will also break your trust

These data management failures create tangible patterns of distrust: our report shows only 28% of employees trust AI output as much as human output. Beyond ordinary human wariness of new technology, much of this distrust stems from data opacity.

If an AI tool is fed incomplete profiles or broken data, it will start producing outdated forecasts or decisions that make no sense. In regulated industries such as healthcare, where only 21% of respondents report tracking AI value, poor data governance often forces manual verification of each and every AI decision. This, in effect, nullifies the benefit of AI and irritates employees.

These patterns show why trust cannot come from automation alone; companies have to earn it through positioning and rigorous data management. While 51% of organizations cite poor data quality as their most significant barrier to AI adoption, only 17% rank "setting up required technologies" as the top success factor, making it the lowest-ranked success factor in the report. Treating data quality as an afterthought, focusing on building algorithms rather than stable foundations, sets AI initiatives up for failure before they even begin.

The message from the data is crystal clear. Until organizations fix their data quality and governance issues, AI distrust will remain the norm rather than the exception. The next wave of AI adoption won't be driven by better algorithms, but by better data practices. Those who recognize this first will gain a decisive advantage, not just in technology implementation, but in workforce confidence and organizational buy-in.

The real AI dilemma isn't about capability, since the technology can already handle complicated jobs; it's about credibility. And that starts with the data we feed our systems.

Change through trust and better data

Trust in new technologies can only grow through consistent, positive experiences. When teams see AI making their jobs easier rather than harder, when they understand how it works, and when they retain control over important decisions, skepticism gives way to confidence.

The organizations that succeed with AI won’t be those with the most advanced technology, but those that bring their people along on the journey and that harness the right data to feed their tools. In the end, AI is not out to replace human judgment, but rather to amplify it. The future belongs to those who can harness both artificial intelligence and human confidence.

For more insights into how companies perceive AI and how these tools can effectively improve your organization’s efficiency and performance, be sure to take a look at our Data & AI Monitor, which can be downloaded here.

Sjoerd Pieksma

Data & AI Strategy Consultant

Contact

Let’s discuss how we can support your journey.