Closing the Trust Gap in the Era of AI Hallucinations


Artificial intelligence (AI) is becoming an integral part of how organizations innovate, manage risk, and deliver value. Yet the rise of AI hallucinations has widened what many call the “trust gap.” For leaders, this moment is less about deciding whether to use AI and more about determining how to integrate it responsibly while maintaining transparency, accountability, and control.

What Are AI Hallucinations?

AI hallucinations are instances when an artificial intelligence system generates outputs that sound confident and authoritative but are factually incorrect, fabricated, or unsupported by its underlying data. For example, an AI chatbot might confidently cite a research study that does not exist or attribute a well-known quote to the wrong historical figure. These errors often occur because the system is designed to predict patterns rather than validate truth, leading it to fill in gaps with plausible, but inaccurate, information.
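
To see why this happens, consider a deliberately tiny sketch (with an invented probability table, not a real language model): generation is just repeated selection of a statistically plausible next phrase, and no step in the loop checks whether the finished sentence is true.

```python
import random

# Toy illustration only: a hand-built table of "plausible next phrases".
# A real LLM learns billions of such statistical associations; none of
# them encode whether the completed sentence is actually true.
next_phrase = {
    "According to": [("a 2021 study by", 0.6), ("researchers at", 0.4)],
    "a 2021 study by": [("Smith et al.,", 0.7), ("the OECD,", 0.3)],  # fabricated sources
    "Smith et al.,": [("error rates fell 40%.", 1.0)],                # fabricated finding
    "researchers at": [("Stanford,", 1.0)],
    "the OECD,": [("adoption doubled.", 1.0)],
    "Stanford,": [("accuracy improved.", 1.0)],
}

def generate(prompt: str) -> str:
    """Repeatedly pick a statistically 'plausible' continuation.

    There is no truth-checking step anywhere in this loop, which is
    the structural reason hallucinations occur.
    """
    text = prompt
    key = prompt
    while key in next_phrase:
        phrases, weights = zip(*next_phrase[key])
        key = random.choices(phrases, weights=weights)[0]
        text += " " + key
    return text

print(generate("According to"))
# e.g. "According to a 2021 study by Smith et al., error rates fell 40%."
# The sentence is fluent and confident, and the study does not exist.
```

A real model operates at vastly greater scale, but the structural gap is the same: fluency is rewarded at every step, and truth is checked at none.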

How AI Hallucinations Impact the Workplace

In the workplace, AI hallucinations can lead to flawed decision-making with high-stakes consequences. In the case Gauthier v. Goodyear Tire & Rubber Co., a lawyer submitted a legal brief containing citations to two court cases that did not exist and quotations that could not be found in the cited sources. The attorney later admitted to using a generative AI tool to draft the filing without verifying the information. The court sanctioned the lawyer, imposing a financial penalty and requiring continuing legal education on AI use.

Beyond errors themselves, the perception that AI is unreliable can erode employee and stakeholder confidence, slowing adoption and innovation.

Drawing on his expertise in decision-making and AI governance, Professor Adam Mersereau of UNC Kenan-Flagler Business School notes that “The issue of transparency remains very important [because] it is basically impossible to understand exactly how a Gen AI model comes up with its responses…This means that it is hard to pinpoint issues and correct mistakes.” He draws a parallel to workforce management, explaining that “this is…similar to how we are used to managing human workers, who can behave unpredictably, rather than software, which we can debug by finding and correcting specific errors.”

Adam also notes, “There is…the fundamental challenge that we humans have trouble trusting a system we don’t really understand.” Compounding this, he warns of instability in generative models. “You can give a Gen AI model the same prompt repeatedly and get different responses each time. Again, this unreliable behavior erodes trust.”
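
His point about variability follows from how generative models choose words: they typically sample from a probability distribution rather than always taking the single best-scoring option. The sketch below, using made-up scores for three candidate words, shows how the same input can yield different outputs on every run.

```python
import math
import random

def sample_with_temperature(logits: dict[str, float], temperature: float) -> str:
    """Sample one token from softmax(logits / temperature).

    With temperature > 0 the draw is random, so repeated calls with the
    identical input can return different tokens; as temperature approaches
    0 the choice collapses toward the single highest-scoring token.
    """
    scaled = {tok: score / temperature for tok, score in logits.items()}
    max_s = max(scaled.values())  # subtract max for numerical stability
    weights = {tok: math.exp(s - max_s) for tok, s in scaled.items()}
    r = random.random() * sum(weights.values())
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # fallback for floating-point rounding

# Invented scores for three candidate next words after the same prompt.
logits = {"contract": 2.0, "agreement": 1.8, "memo": 0.5}

# Same "prompt", five draws: the answers can differ run to run.
print([sample_with_temperature(logits, temperature=1.0) for _ in range(5)])
```

Lowering the temperature makes outputs more repeatable but more rigid; many deployments keep some randomness in, which is exactly the variability Adam describes.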

Understanding the AI Trust Gap

The trust gap reflects the growing distance between AI’s potential and organizational confidence in its outputs. Recent research shows this gap is widening: while AI adoption continues to rise across industries, trust in its results has not kept pace. A 2025 KPMG study found that 66% of people use AI tools regularly, yet only 46% say they trust them.

These findings present a leadership challenge. Closing the AI trust gap requires reframing the human role in AI systems from passive oversight to active leadership, where judgment, accountability, and governance play central roles.

Adam underscores that even as the technology improves, the trust gap remains a pressing issue. He notes that “the newest large language models are much better about hallucination. At least, their hallucinations are less common and less obvious.” He points to several reasons for this progress. “One is that the latest models have larger ‘context windows’ so they effectively have more memory to draw from.” He adds that “the newest models are also trained to use outside resources (databases, calculators, web browsing) when appropriate,” and that at the enterprise level, “AI increasingly uses RAG (retrieval-augmented generation), which means that it is often using an authoritative set of reference documents that the organization has curated.”
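
For readers unfamiliar with the pattern, the sketch below shows the basic shape of RAG: retrieve the most relevant documents from a curated store, then prompt the model to answer from those documents alone. Everything here (the corpus, the word-overlap retriever, and the call_llm stub) is an illustrative placeholder, not any vendor's API.

```python
# Minimal sketch of retrieval-augmented generation (RAG). The corpus,
# the overlap-based retriever, and call_llm() are illustrative stand-ins;
# production systems use vector embeddings and a real model API.
CURATED_DOCS = [
    "Policy 4.2: Customer data may not be shared with third parties.",
    "Policy 7.1: All AI-assisted reports require human review before release.",
]

def retrieve(question: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank curated documents by crude word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:k]

def answer(question: str) -> str:
    context = retrieve(question, CURATED_DOCS)
    # Grounding the prompt in authoritative documents constrains the model
    # to the organization's curated sources instead of its own recall.
    prompt = f"Answer using ONLY these sources:\n{chr(10).join(context)}\n\nQ: {question}"
    return call_llm(prompt)

def call_llm(prompt: str) -> str:
    # Stub so the sketch runs end to end; a real system calls a model here.
    return f"(model response grounded in: {prompt.splitlines()[1]})"

print(answer("Do AI-assisted reports need human review?"))
```

The design point that matters for trust is traceability: answers are grounded in documents the organization vouches for, so a claim can be checked against its source.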

Yet despite these advances, Adam cautions that better models are no reason to relax vigilance. As he emphasizes, “I believe that the ‘trust gap’ is still a big deal.”

Establishing Guardrails for Ethical AI Use at Work

To use AI responsibly, organizations must put clear guidelines and guardrails in place. These should include:

  • Technical safeguards such as continuous monitoring, diverse training data, and rigorous validation processes (a minimal validation sketch follows this list).
  • Procedural safeguards like bias audits, model testing, and strong data governance policies.
  • Ethical safeguards that prioritize fairness, transparency, and accountability across departments.
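
As one concrete example of the validation safeguards in the first bullet, the hypothetical check below screens an AI-drafted legal filing for citations that do not appear on a verified list, the failure mode in Gauthier. The citation pattern and the verified list are invented for illustration.

```python
import re

# Hypothetical guardrail: before an AI-drafted filing is released, flag
# case citations that cannot be matched against a verified source list.
VERIFIED_CASES = {
    "smith v. jones",
    "gauthier v. goodyear tire & rubber co.",
}

# Matches "<Capitalized Name> v. <Capitalized Name>" pairs.
CITATION_RE = re.compile(
    r"([A-Z][\w.&']*(?: [A-Z][\w.&']*)*) v\. ([A-Z][\w.&']*(?: [A-Z][\w.&']*)*)"
)

def unverified_citations(draft: str) -> list[str]:
    """Return citations in the draft that are absent from the verified list.

    The check narrows a human reviewer's search; it does not replace
    the reviewer, who must resolve every flagged item before release.
    """
    found = [f"{p} v. {d}" for p, d in CITATION_RE.findall(draft)]
    return [c for c in found if c.lower() not in VERIFIED_CASES]

draft = "As held in Smith v. Jones and in Doe v. Acme Corp, the duty applies."
print(unverified_citations(draft))  # -> ['Doe v. Acme Corp']
```

Automated checks like this complement, rather than replace, the human review that the procedural safeguards call for.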

Another essential component of establishing effective guardrails for organizational AI use is recognizing that AI is not the right tool for every project or task. Some use cases inherently lack the transparency or reliability needed for high-stakes decisions, and leaders who push adoption without acknowledging those limitations risk undermining both trust and outcomes. Exercising ethical restraint—knowing when not to use AI—is just as important as pursuing innovation. When technology choices are aligned with the organization’s mission and values, AI adoption strengthens trust rather than weakens it.

Together, these practices form a foundation that can reduce bias, limit edge‑case failures, and build confidence in AI‑enabled decision‑making.

How Leadership Can Close the AI Trust Gap

Organizations can strengthen trust in their AI systems by intentionally redesigning their workflows to clarify where AI offers the greatest value and where human expertise must guide decisions.

As Adam explains, “organizations which are successful at reaping the benefits of AI are thinking through their workflows with an eye towards understanding where AI can bring the most benefits and where human judgment is [still] essential.” He notes that this kind of intentional process mapping ensures that AI is used thoughtfully rather than automatically.

To ensure that AI enhances decision‑making rather than replaces human judgment, organizations can also take the following steps to help close the AI trust gap:

  • Use phased deployment. Starting with low-risk use cases helps leaders test assumptions, identify vulnerabilities, and build organizational confidence. AI can then be gradually scaled and integrated into additional areas of operation.
  • Audit current AI initiatives for trustworthiness. Regularly evaluate where AI is deployed, how it is being monitored, and whether its outputs align with organizational values and compliance requirements.
  • Invest in education and governance frameworks. Equip leaders and teams with the knowledge to identify risks and opportunities. Strong governance structures demonstrate a commitment to ethical use, not just efficiency.
  • Lead with transparency. Communicate openly about the role AI plays in processes and decisions. Acknowledging both capabilities and limitations builds credibility with employees, customers, and partners.
  • Foster cross-functional collaboration. Bring together leaders from technology, compliance, risk, HR, and strategy to shape AI governance. Shared ownership strengthens oversight and aligns AI use with organizational goals.

Reclaiming Trust: Navigating the Future of AI

AI hallucinations are a clear reminder that trust in technology must be earned rather than assumed. Organizations that approach AI with a commitment to transparency, accountability, and thoughtful implementation are best positioned to navigate this evolving landscape. When trust becomes a guiding principle for innovation, AI can support not only greater efficiency but also more durable, long‑term success.
