Lasse Andresen
February 18, 2025

AI’s next crisis: Trust will decide who wins and who fails

AI is advancing at an unprecedented pace - but with every leap forward, new risks emerge. Right now, the biggest threat isn’t flawed AI - it’s blind trust in AI.

Businesses are rapidly adopting AI to automate tasks, make decisions, and integrate with critical systems. But AI is no longer just a productivity tool - it’s evolving into something more. AI agents are quickly becoming popular in enterprises, autonomously retrieving information, analyzing data, executing tasks, and interacting across departments. In their haste not to be left behind, many businesses are handing over access without questioning whether AI should be trusted with it.

AI will soon touch every part of the enterprise, unlocking immense opportunities. But without the right safeguards, businesses risk security breaches, operational failures, and compliance nightmares.

The next phase of AI isn’t just about who builds or implements the smartest AI - it’s about AI that enterprises, consumers and authorities can trust.

The cost of misplaced trust

Recent events have shown us what happens when technology advances faster than safeguards. When we place blind trust in AI, the consequences can be unpredictable - and often costly.

Take Copilot, for example. An employee casually asked it to summarize company emails - only to realize it was pulling in confidential executive messages, HR records, and sensitive internal files.

Then there’s DeepSeek. A user was shocked to find that chat histories weren’t just stored - they were accessible to others. Private conversations, sensitive queries, and even confidential corporate data were suddenly at risk.

These aren’t just isolated cases. They reflect a broader pattern: AI is moving faster than our ability to secure it. If enterprises don’t implement strong access controls, AI could misuse or expose data in ways that businesses can’t predict or control.

The next AI trust failure won’t be a surprise - it’ll be the consequence of failing to set the right guardrails for AI implementations. This raises the question: which companies will learn from these mistakes before they become the next headline?

The danger of blind trust in AI

Blind trust in AI often stems from a lack of understanding of its inherent risks. AI expands the attack surface, introducing new vectors for adversarial manipulation, data poisoning, and unintended exposure. Without rigorous oversight, it can be exploited, compromising security, privacy, and regulatory compliance.

AI is only as reliable as the data it processes. Incomplete, biased, or inconsistent data leads to flawed outputs, reinforcing systemic biases and generating misleading conclusions. This not only undermines decision-making but also erodes confidence in AI-driven systems.

Unchecked data retrieval mechanisms pose additional risks. Poorly configured AI models can access and surface sensitive or confidential information, leading to regulatory breaches and reputational harm. As organizations scale AI adoption, robust governance and control frameworks are critical to ensuring AI remains secure, ethical, and aligned with business objectives.

While some organizations place too much trust in AI, others assume they have it under control - only to realize too late that AI operates in ways they never fully anticipated.

The illusion of control

Many organizations believe they have AI under control because they manage it like many other enterprise systems - setting permissions, enforcing firewalls, and monitoring activity. But AI doesn’t operate like traditional software. It doesn’t just follow predefined workflows; it actively searches, connects, and generates new information in ways that weren’t explicitly programmed.

Once AI is deployed, it could already be interacting with sensitive customer data, financial systems, or proprietary information - without clear oversight or control. For example, a chatbot granted access to customer service records might inadvertently reveal financial insights, while an AI-powered assistant analyzing internal emails could surface confidential information - similar to the Copilot incident.

What feels like controlled access may, in reality, be closer to unrestricted exposure. And without data security mechanisms in place - such as lineage tracking, real-time governance, and trust scoring - businesses also risk making critical decisions based on unreliable or unverifiable AI outputs.
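To make ideas like lineage tracking and trust scoring concrete, here is a minimal sketch in Python. The names and the 0.8 threshold are invented for illustration, not a reference to any particular product: the point is simply that each piece of data carries lineage metadata and a trust score, and anything outside an agent’s scope or below the threshold never reaches the model.

```python
from dataclasses import dataclass

# Hypothetical illustration: each record carries lineage metadata and a
# trust score, and an AI agent only receives data that passes both checks.
@dataclass
class Record:
    content: str
    source: str          # lineage: where the data came from
    classification: str  # e.g. "public", "internal", "confidential"
    trust_score: float   # 0.0-1.0: how verified/reliable the data is

def filter_for_agent(records, allowed_classifications, min_trust=0.8):
    """Return only the records this agent is permitted to use."""
    usable = []
    for r in records:
        if r.classification not in allowed_classifications:
            continue  # out of scope for this agent
        if r.trust_score < min_trust:
            continue  # unverified data should not drive decisions
        usable.append(r)
    return usable

records = [
    Record("Q3 revenue summary", "finance_db", "confidential", 0.95),
    Record("Public product FAQ", "website", "public", 0.90),
    Record("Unverified forum scrape", "web_crawl", "public", 0.40),
]

# A customer-facing chatbot only ever sees public, well-verified data.
print(filter_for_agent(records, allowed_classifications={"public"}))
```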

As AI adoption accelerates, the companies that fail to rethink how they manage AI access and the integrity of the data fueling it will be the ones most exposed to risk.

Who wins and who fails?

The future of AI in the enterprise isn’t just about innovation - it’s about trust, and how deliberately companies build and maintain it.

Winners will be the companies that recognize trust in AI must be proactively designed, not assumed. They will prioritize not only adaptive security models and real-time governance but also focus on data trust - ensuring the data that feeds their AI systems is accurate and reliable. These companies will implement AI-specific access controls to safeguard data integrity, allowing AI to remain a powerful and safe force within their organizations.
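As a rough illustration of what AI-specific access controls could look like in practice (the policy entries and role names below are invented for the example), the check happens per agent and per action at request time, rather than as a one-time standing grant:

```python
# Hypothetical sketch of an AI-specific access control: instead of broad
# standing permissions, every action an agent attempts is checked against
# an explicit policy at request time and logged for audit.
POLICY = {
    # (agent_role, resource) -> actions that role may perform
    ("support_bot", "customer_tickets"): {"read"},
    ("support_bot", "billing_records"): set(),        # no access at all
    ("finance_agent", "billing_records"): {"read"},
}

def authorize(agent_role: str, resource: str, action: str) -> bool:
    allowed = POLICY.get((agent_role, resource), set())
    decision = action in allowed
    # Real-time governance: every decision leaves an audit trail.
    print(f"AUDIT {agent_role} {action} {resource} -> "
          f"{'ALLOW' if decision else 'DENY'}")
    return decision

# The chatbot asks for billing data; the policy blocks it before any data moves.
if not authorize("support_bot", "billing_records", "read"):
    print("Request denied - no data left the system.")
```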

Failures will come from businesses that do not put granular, dynamic guardrails around their AI deployments. These companies could face data leaks, compliance violations, and operational disruptions - not because AI is flawed, but because they failed to govern it responsibly.

AI isn’t just evolving; it’s becoming an integral part of how we do business. Companies that understand this shift and act accordingly can build AI that is trusted, secure, and aligned with their goals. Trust in AI can’t be an afterthought - it has to be designed from the start.
