Ethics and Trust in the Age of AI: Takeaways from the GoDataFest Panel Discussion

Artificial Intelligence is no longer a futuristic concept; it's woven into the fabric of our daily lives and business operations. From supply chain optimization to medical diagnostics and customer service chatbots, AI's potential is staggering. But with great power comes great responsibility, a sentiment echoed not only by your friendly neighborhood Spider-Man, but by every expert grappling with the ethical implications of this technology.
At GoDataFest 2025, Xebia organized a panel on ethics and trust in the age of AI that tackled some of the central issues in this debate. The panelists were Giovanni Lanzani (Managing Director Data & AI at Xebia), Anindita Misra (Director of AI and Governance at Brenntag), and Chozhan D M (Engineering Manager at Blacklane). The conversation is no longer about whether we can build powerful AI, but about how we can build AI that is responsible, fair, and, above all, trustworthy. So what do we really mean when we talk about ethics and trust in AI, and how can organizations move from abstract principles to concrete action? And who, ultimately, is responsible when things go wrong?
What is Ethics in AI? More Than a Buzzword
At its core, ethics in AI is about ensuring that these powerful systems are used in ways that are right and just. But as one panelist noted, ethics can feel vague: "What is right to you may not be right to me." This subjectivity is precisely why the conversation must expand to encompass responsibility.
Ethics in AI is a multi-dimensional challenge, touching upon:
- Fairness and Bias: Is the AI making equitable decisions, or is it perpetuating and amplifying human biases hidden in its training data?
- Transparency and Explainability: Can we understand why an AI arrived at a particular decision? Nobody will ever trust a "black box".
- Privacy: How is user data being collected, used, and stored? Is consent being respected, or are boundaries being crossed?
- Accountability: When something goes wrong, who is responsible? A clear line of accountability is non-negotiable.
- Safety and Security: Is the system robust against misuse, attacks, or simply making dangerous errors?
Ultimately, as Anindita Misra succinctly put it, ethics is about values. It's about the individual values of the people building the systems and the core company values of the organizations deploying them. "If you built your own business model on manipulating how people feel, what can we expect you are using AI for," Giovanni Lanzani starkly observed, highlighting that AI often amplifies pre-existing business models, for better or for worse.
What, Ultimately, Is Trust in AI?
If ethics is the compass, trust is the destination. Trust is the fundamental ingredient that determines whether an AI system will be adopted and used effectively. But how do we define trust between a human and a machine?
It boils down to a few key expectations:
- Consistency and Reliability: Given the same input, the system must return fair and consistent answers, no matter how many times you ask. We trust what we can rely on.
- Explainable Reasoning: We trust decisions when we understand the reasoning behind them. An AI shouldn't just output a result; it should be able to explain its "thought process" in a way that resonates with the business user, not just the data scientist.
- Safety: Do we feel safe using this system? This encompasses both data security and the physical safety of AI-driven machinery. Trust evaporates if we fear the technology will be misused or cause harm.
A startling statistic from the discussion highlights the current trust gap: only 28% of leaders trust AI outputs as much as human output. We say "only", but as Chozhan D M observed, even that figure may strike some people as surprisingly high right now. A company entrusting its business decisions to a black box? That would be quite risky. Closing this gap is the most critical challenge facing the widespread adoption of AI today.

The Building Blocks: How to Foster Trust in AI Systems
Right now, AI is often viewed with fear, especially by lower-skilled workers, while many companies feel a kind of FOMO if they are not jumping on the AI bandwagon. But how many workers are actually being replaced by AI, and what real value does this new technology bring?
With trust comes value: if nobody trusts your AI and its results or decisions, the investment in the technology is wasted. Building trust is the first step toward building value, and it is not a single action but a cultural and technical commitment. The panelists outlined several concrete building blocks:
1. Transparency by Design
Transparency isn't about revealing complex code. It's about providing clear, business-aligned reasoning. For example, if a supply chain AI recommends "Route B," it must explain why, citing factors like cost, speed, and sustainability that align with the company's strategic objectives. As Giovanni Lanzani put it, change management is essential to evolve culture and link AI to business value (in terms of hard data). This shifts the paradigm from a "black box" to a "glass box," where the inner workings are visible and understandable.
2. Ruthless Accountability
There must always be a "human in the loop." This means establishing clear ownership, "a name, not just a role", for every AI system. Who monitors its health? Who is accountable if it fails? To whom does an employee escalate concerns? A clear, human-centric accountability framework is essential for creating psychological safety and mitigating risk.
3. Widespread AI Literacy
Trust cannot be built in a vacuum. AI literacy is not just for data scientists; it's for everyone in the organization, from the C-suite to the front lines. Everyone who interacts with AI must understand its capabilities, limitations, and the basic principles of how it works. This demystifies the technology and empowers employees to use it critically and effectively.
4. A Foundation of Fairness and Robust Data
An AI is only as good as the data it consumes. If the training data is biased, the AI's decisions will be biased. Actively working to identify and mitigate bias is crucial. Furthermore, the system must be technically robust, delivering repeatable, reliable performance and staying secure against vulnerabilities.
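One way to make "identify bias" concrete is a simple disparity check on model decisions. Below is a minimal sketch of one common probe, demographic parity, computed over hypothetical decision data with a hypothetical tolerance threshold; a real fairness audit would go considerably further:

```python
# A minimal sketch of a demographic parity check: does the model decide
# "approve" at similar rates across groups? The data and the 0.10
# tolerance are hypothetical illustrations, not a complete bias audit.
import pandas as pd

# Hypothetical model decisions, joined with a protected attribute.
results = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Approval rate per group, and the largest gap between any two groups.
rates = results.groupby("group")["approved"].mean()
gap = rates.max() - rates.min()
print(f"approval rates: {rates.to_dict()}, gap: {gap:.2f}")

# Hypothetical tolerance; real thresholds depend on context and regulation.
if gap > 0.10:
    print("Warning: approval rates diverge across groups -- investigate.")
```

Checks like this belong in the same monitoring pipeline as accuracy metrics, so drifting bias is caught as early as a drifting error rate.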
5. Privacy as a Prerequisite
In an age of data-driven learning, respecting user privacy is paramount. This involves practicing data minimization, collecting only the data strictly necessary for the task, and employing techniques like anonymization and pseudonymization. As Chozhan D M warned, the risks extend beyond individual identification to broader societal profiling when sensitive data is pooled in external clouds.
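To make the pseudonymization idea tangible, here is a minimal sketch using keyed hashing (HMAC-SHA256) together with data minimization. The field names are hypothetical, and in a real system the key would come from a secrets manager rather than source code:

```python
# A minimal sketch of pseudonymization via keyed hashing (HMAC-SHA256).
# The record layout is hypothetical; the key must be stored outside the
# data (e.g., in a secrets manager), since whoever holds it can re-link
# tokens by re-hashing known identifiers.
import hashlib
import hmac

SECRET_KEY = b"load-me-from-a-secrets-manager"  # hypothetical placeholder

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a stable, non-reversible token."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"customer_email": "jane@example.com", "order_total": 42.50}

# Data minimization: keep only what the task needs, and tokenize the
# identifier so records can still be joined without exposing identity.
minimized = {
    "customer_id": pseudonymize(record["customer_email"]),
    "order_total": record["order_total"],
}
print(minimized)
```

Because the token is stable, analytics and joins keep working, but the raw identifier never leaves the boundary where the key lives.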
From Theory to Practice: Concrete Steps to Take Now
Before clarifying how and why AI works in your company, ask whether your company needs it at all. As Chozhan D M put it, "don't do AI just for the sake of doing AI, no AI where it is not needed". First find the value in it, then think about how to implement it in your organization.
The path to trustworthy AI is a journey, but here are the first steps your organization can take:
- Start with Explainability: Make it a non-negotiable requirement for any AI system you build or buy. Techniques like SHAP or LIME can visualize which factors drive a decision (see the sketch after this list).
- Establish Clear Governance: Create a cross-functional responsible AI board. Implement a risk-based framework where high-risk systems undergo rigorous audit, while lower-risk innovations can flow more freely. Make sure the escalation process works when something goes wrong.
- Don’t Oversell: Be honest about the capabilities and error rates of your AI. As Giovanni Lanzani advised, "always explain the trade-offs." This builds long-term credibility over short-term hype.
- Invest in Change Management: Trust is earned through experience. Run pilots where AI recommendations can be overridden by human experts, and then measure the results. Showing how often the AI was right, and quantifying the value lost to overrides, is a powerful way to win hearts and minds.
- Ask Tough Questions of Vendors: If you rely on third-party AI, demand transparency. How is our data handled? How does your model ensure fairness? Conduct regular audits to ensure reality matches the contract.
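As a concrete starting point for the explainability step above, here is a minimal sketch using the SHAP library on a hypothetical route-scoring model. The features, data, and model are illustrative stand-ins, not the panel's actual system:

```python
# A minimal sketch of per-decision explainability with SHAP, assuming a
# scikit-learn model that scores shipping routes. The dataset, feature
# names, and scores below are hypothetical.
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

# Hypothetical historical routes: cost, speed, and sustainability features.
X = pd.DataFrame({
    "cost_eur":     [120, 95, 210, 150, 130, 99],
    "transit_days": [3,   5,  2,   4,   3,   5],
    "co2_kg":       [40,  25, 80,  55,  45,  30],
})
y = [0.70, 0.90, 0.30, 0.50, 0.65, 0.85]  # route score assigned by planners

model = RandomForestRegressor(random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to the input features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Contributions for the first route: positive values pushed its score up,
# negative values pushed it down -- a concrete answer to "why Route B?".
for feature, contribution in zip(X.columns, shap_values[0]):
    print(f"{feature}: {contribution:+.3f}")
```

The per-feature contributions are the part a business user can actually read, e.g. "cost pushed the score down, transit time pushed it up", which is the glass-box reasoning described earlier.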
The Human/AI Hybrid Future
The journey toward ethical, trustworthy AI is not a technical challenge alone; it is a human one. It requires a fundamental commitment to literacy, responsibility, and transparency. It forces us to scrutinize our own values and business models. And it reminds us that the goal of AI should not be to replace humanity but to augment it: handling the repetitive so we can focus on the creative, the complex, and the empathetic.
In the end, building trust in AI is about building systems that reflect the best of our humanity: systems that are fair, accountable, and, above all, designed to serve us all. The age of AI is here. Let's ensure it's an age we can all trust.
Watch the full discussion below and mark your calendar: GoDataFest 2026 will take place on October 26-30, 2026.
Written by

Klaudia Wachnio