AI trust: Can causal AI make artificial intelligence reliable?



As artificial intelligence advances, AI trust remains a critical roadblock to widespread adoption.

According to theCUBE Research’s latest analysis, discussed in the “Next Frontiers of AI” podcast, enterprises struggle with AI-driven decisions that often appear as inscrutable “black boxes,” leading to skepticism from business leaders. A lack of AI trust has slowed deployments in approximately 60% of enterprises, according to a McKinsey & Company Inc. study cited by theCUBE Research’s Scott Hebner.

Hebner was joined by Marc Le Maitre, chief technical officer of Scanbuy Inc., a global leader in mobile marketing and advertising technology. A mobile communications and data privacy veteran, Le Maitre has long been a proponent of explainable AI and has seen firsthand how new advancements are making AI-driven decisions more transparent. Their discussion explores how enterprises can build trust in AI systems.

The problem with current AI models

Hebner opened the discussion by highlighting the major problem facing AI today: “Business leaders already lack trust in AI outcomes, which is slowing down deployments,” he said. “With AI agents promising to help make business-critical decisions, the trust issue will become even more pronounced. Are businesses really going to blindly trust AI agents? I seriously doubt it.”

Le Maitre agreed, pointing out that traditional AI models — notably large language models — rely heavily on statistical correlations rather than causal reasoning. “We hired a really clever data scientist fresh out of college, and we spent months trying to understand the inner workings of these black-box machines,” he said. “It quickly became clear that we were never going to get full transparency. That realization spurred me on to find a way to make AI more explainable.”

The emergence of causal inference

After years of experimentation, Le Maitre and his team at Scanbuy stumbled upon causal AI. “We were searching for a term that accurately described what we were doing,” Le Maitre said. “Explainable AI wasn’t quite it because explaining something doesn’t mean someone will understand it. We tried understandable AI, traceable AI and even trustworthy AI. Then, in 2022, we discovered causal AI. [Actually], it found us.”

Unlike conventional AI models that rely on correlation, causal AI helps determine the underlying cause-and-effect relationships that drive decisions. “If you prove the cause, you at once prove the effect,” Le Maitre said, quoting Aristotle. “Conversely, nothing can exist without its cause. That’s the foundation of causal AI. It allows us to ask ‘what if’ questions, evaluate counterfactuals and truly understand the reasoning behind AI-driven outcomes.”
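That distinction is easier to see with a toy example. The sketch below is illustrative only and is not Scanbuy's model: in a made-up system where seasonality drives both discounts and purchases, a purely correlational slope overstates the effect of discounting, while simulating the intervention do(discount) recovers the true cause-and-effect relationship and answers the "what if" question directly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy structural causal model (illustrative only):
#   season -> discount,  season -> purchases,  discount -> purchases
season = rng.normal(size=n)                                      # confounder
discount = 0.8 * season + rng.normal(size=n)                     # treatment
purchases = 0.3 * discount + 1.0 * season + rng.normal(size=n)   # outcome

# Correlational view: slope of purchases on discount (biased by season).
naive_slope = np.cov(discount, purchases)[0, 1] / np.var(discount)

# Causal view: intervene with do(discount = d), which cuts the season -> discount link.
def simulate_do(d):
    season_i = rng.normal(size=n)
    return (0.3 * d + 1.0 * season_i + rng.normal(size=n)).mean()

causal_effect = simulate_do(1.0) - simulate_do(0.0)

print(f"naive (correlational) slope : {naive_slope:.2f}")   # ~0.79, inflated
print(f"interventional effect       : {causal_effect:.2f}") # ~0.30, the true effect
```

In a production system the structural equations would be learned from data rather than hard-coded, but the intervention logic, asking what happens when a variable is set rather than merely observed, is the same.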

Causal AI in advertising technology

For Scanbuy, causal AI has been transformative in programmatic advertising. The industry has long suffered from “signal loss” — the inability to precisely target users due to privacy regulations and data limitations. AI can help address this issue, but businesses need to trust the AI before they can confidently rely on its targeting decisions.

“In advertising, you’re always making decisions about who to target,” Le Maitre said. “Traditionally, we relied on deterministic data — data we knew to be true, like explicit customer opt-ins. But that only gives us a small fraction of the audience we need. Predictive models helped expand our reach, but without causal reasoning, they were often unreliable.”

By implementing causal AI, Scanbuy significantly improved the accuracy of its audience targeting, according to Le Maitre. “With causal AI, we can not only identify the right people to target, but also explain why they were chosen,” he said. “This allows for a level of transparency that traditional AI models simply can’t provide. The results speak for themselves: Our approach has delivered a 10-times return on investment.”
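Le Maitre doesn't walk through Scanbuy's implementation on the podcast, but the idea of producing a target list and the reason for it in the same step can be sketched with a toy uplift calculation. Everything below (column names, segments and numbers) is hypothetical; the point is that an estimated causal uplift per segment tells you both whom to target and why.

```python
import numpy as np
import pandas as pd

# Hypothetical campaign log -- column names and segments are assumptions, not Scanbuy's schema.
rng = np.random.default_rng(1)
n = 50_000
df = pd.DataFrame({
    "segment": rng.choice(["sports_fan", "commuter", "other"], size=n),
    "shown_ad": rng.integers(0, 2, size=n),          # randomized exposure (treatment)
})
base = df["segment"].map({"sports_fan": 0.08, "commuter": 0.04, "other": 0.02})
lift = df["segment"].map({"sports_fan": 0.05, "commuter": 0.01, "other": 0.00})
df["converted"] = rng.random(n) < (base + lift * df["shown_ad"])

# Per-segment causal uplift: P(convert | do(ad)) - P(convert | do(no ad)),
# estimated here from randomized exposure within each segment.
uplift = (
    df.groupby(["segment", "shown_ad"])["converted"].mean()
      .unstack("shown_ad")
      .assign(uplift=lambda t: t[1] - t[0])
      .sort_values("uplift", ascending=False)
)
print(uplift)  # the "why": sports_fan is targeted because its uplift is ~5 points
```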

Beyond advertising: The future of causal AI

While Scanbuy’s use of causal AI in advertising is impressive, the implications of this technology extend far beyond ad tech, according to Hebner. “If the goal is for AI to increasingly mimic how humans think so that they can help humans do things better, humans are causal by nature, so AI will have to become causal by nature,” he said.

The benefits of causal AI are particularly evident in decision-heavy industries such as finance, healthcare and supply chain management, according to Hebner. “Businesses operate on cause and effect,” he said. “Every action taken has consequences. If companies don’t understand the root causes of their challenges, they’ll struggle to make meaningful improvements.”

Le Maitre echoed this sentiment, emphasizing the potential dangers of relying solely on correlation-based AI models. “There’s a concept called ‘model dementia’: As models are trained on the output of other models, they start losing touch with the true distribution of data,” he said. “This creates a spiral where AI systems become increasingly detached from reality. Without causal AI, we risk creating a self-referential feedback loop that leads to unreliable, biased decision-making.”
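What Le Maitre calls “model dementia” is usually described in the research literature as model collapse, and the feedback loop is easy to demonstrate with a deliberately simplified simulation. The toy example below illustrates the general phenomenon, not anything specific to Scanbuy's systems: a Gaussian is repeatedly refit to samples drawn from the previous fit, and the estimate gradually drifts away from the distribution the first model was trained on.

```python
import numpy as np

rng = np.random.default_rng(42)

# True data distribution the very first model is trained on.
true_mean, true_std = 0.0, 1.0
data = rng.normal(true_mean, true_std, size=50)

mean, std = data.mean(), data.std()
for generation in range(1, 31):
    # Each generation trains only on samples produced by the previous model.
    synthetic = rng.normal(mean, std, size=50)
    mean, std = synthetic.mean(), synthetic.std()
    print(f"gen {generation:2d}: mean={mean:+.3f}  std={std:.3f}")

# Over many generations the mean wanders and the spread tends to shrink:
# the chain of models loses touch with the distribution it started from.
```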

Causal AI as the key to AI trust

Trust will be paramount as enterprises move from task automation to goal-driven AI decision-making. According to Hebner, there are three keys to building trust in AI:

  1. Transparent explanations: AI must communicate in a way that users understand.
  2. Comparative insights: Users should be able to see why one decision is better than another.
  3. Interrogation capabilities: AI should allow users to test alternate scenarios.

Causal AI checks all these boxes, making it a foundational component of the next evolution of AI, according to Le Maitre. “With causal AI, we’re not just predicting outcomes; we’re understanding why they happen,” he said. “When you understand the ‘why,’ you gain the ability to intervene, adjust and optimize your decisions in real time.”
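As a rough illustration of what the second and third items could look like in practice, here is a minimal sketch built on a hypothetical causal model of demand. The variable names and coefficients are invented for this example; the point is that the same model that recommends a plan can be interrogated with alternate “what if” scenarios and can report why one option beats another.

```python
# Hypothetical linear causal model: price and ad_spend both drive weekly sales.
# Coefficients are invented for illustration; a real system would estimate them from data.
EFFECTS = {"price": -120.0, "ad_spend": 3.5}
BASELINE_SALES = 1_000.0

def what_if(price: float, ad_spend: float) -> float:
    """Interrogation query: expected sales under do(price, ad_spend)."""
    return BASELINE_SALES + EFFECTS["price"] * price + EFFECTS["ad_spend"] * ad_spend

def compare(option_a: dict, option_b: dict) -> str:
    """Comparative insight: which option wins, and which lever drives the gap."""
    gap = what_if(**option_a) - what_if(**option_b)
    driver = max(EFFECTS, key=lambda k: abs(EFFECTS[k] * (option_a[k] - option_b[k])))
    winner = "A" if gap > 0 else "B"
    return f"Option {winner} by {abs(gap):.0f} units; main driver: {driver}"

plan_a = {"price": 9.0, "ad_spend": 400.0}   # cheaper product, more ads
plan_b = {"price": 11.0, "ad_spend": 250.0}  # premium price, fewer ads
print(compare(plan_a, plan_b))               # "Option A by 765 units; main driver: ad_spend"
```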

Final thoughts: The road ahead

Causal AI adoption is still in its early stages, with only about 10% of enterprises currently leveraging it, according to Hebner. However, as businesses recognize the limitations of traditional AI models and the necessity of explainability, the use of causal AI is expected to expand significantly. Based on Hebner’s analysis of recent market projections, the causal AI market is set to grow at a compound annual growth rate of 41%, approaching the billion-dollar mark. “Trust is the currency of innovation,” Hebner said. “The more people trust AI outcomes, the more they will buy into new investments.”

“We’re on the leading edge of this transformation,” Le Maitre added. “As causal AI continues to evolve, it will fundamentally change how businesses make decisions, ensuring that AI is not only powerful, but also trustworthy.”

With AI rapidly reshaping industries, the need for transparency, AI trust and accountability has never been greater. Causal AI stands at the forefront of this evolution, offering a compelling solution to the black-box problem and paving the way for a future where AI-driven decisions are not just accurate, but explainable.

For a deeper dive into Hebner and Le Maitre’s discussion, part of the “Next Frontiers of AI” podcast series, check out their full conversation:

