ZDNET's key takeaways
Businesses are embracing AI tools faster than they're making them trustworthy.
Governance, skills, and data infrastructure determine whether AI is trustworthy.
This misalignment could be hindering ROI on AI initiatives.
It's no secret by now that many businesses are struggling to achieve tangible ROI from their AI initiatives. A recent study from MIT, in fact, found that as many as 95% of enterprise use cases of the technology have been essentially fruitless.
Also: 43% of workers say they've shared sensitive info with AI - including financial and client data
Why the huge rate of failure?
According to a new study conducted by data analytics company SAS and the International Data Corporation (IDC), one of the causal factors is a widespread gap between the trust businesses place in the AI tools they're deploying internally and the work they've actually done to make those tools trustworthy. That misalignment, the study found, is the primary barrier to ROI.
Why it matters
At first glance, that might seem obvious: Of course, if a technology isn't reliable, you're not going to build it too deeply into your business, and it isn't going to deliver much in return.
But businesses have been adopting AI, and on a massive scale: Nearly two-thirds (65%) of respondents to the SAS-IDC survey said their organizations are currently using AI in some capacity, while another 32% said they plan to begin doing so within the next year. In June, Gartner predicted that up to half of all internal business decision-making processes could be fully automated or at least partially augmented by AI agents.
Also: Too many AI tools? This platform manages them all in one place
The biggest surprise of the new SAS study is that this widespread commercial adoption, and the confidence fueling it, is far outpacing the work those same businesses are doing to make the technology trustworthy.
Based on a global survey of more than 2,300 IT professionals and business leaders, the study found that more than three-quarters of respondents (78%) say they have "complete trust in AI," while far fewer (40%) have actually implemented governance and explainability guardrails to ensure that their internal AI systems are trustworthy.
"This misalignment leaves much of AI's potential untapped, with ROI lower where there is a lack of trustworthiness," Chris Marshall, Vice President of Data, Analytics, AI, Sustainability, and Industry Research at IDC, said in a statement about the new study.
Also: AI is every developer's new reality - 5 ways to make the most of it
The study follows on the heels of recent data showing that many people don't trust the information they receive from Google's AI Overviews feature, even as the company continues to make generative AI an ever more central and conspicuous component of its search engine, Chrome browser, and other consumer-facing tools.
Three major roadblocks
The authors of the new study identified three major factors that undermine the trustworthiness of businesses' internal AI, and therefore hinder their capacity to achieve maximum ROI: weak cloud infrastructure, insufficient governance, and a lack of AI-specific skills among their existing workforce.
While the first two can be largely mitigated through third-party partnerships and more technology, the third could prove to be a bit more complicated -- and also, perhaps, fuel fears of job loss.
Also: No, AI isn't stealing your tech job - it's just transforming it
Luckily for employees, recent data shows that most business leaders are prioritizing training initiatives over layoffs. Adding just one AI-related skill to your resume, moreover, could significantly boost your salary in your next role.
A very human bias
The SAS-IDC study revealed another intriguing phenomenon in the evolving relationship between humans and AI: Survey respondents tended to trust generative AI systems much more than traditional machine learning models, even though the latter are older and more transparent (they're built with fewer parameters, and it's comparatively easy to understand how they arrive at their outputs), while generative models are far more opaque, not to mention prone to the occasional hallucination.
Also: AI magnifies your team's strengths - and weaknesses, Google report finds
According to the study authors, this reflects a psychological quirk: We tend to reflexively place more trust in AI systems that seem human than in those that feel more mechanical.
Generative AI tools like ChatGPT, Gemini, and Claude excel at generating humanlike language, which can create the illusion that they're somehow more than mere algorithms detecting and replicating patterns from troves of training data. In some extreme cases, this illusion can have serious psychological consequences, leading users to form emotional or even romantic bonds with chatbots or a newer category of systems marketed by tech companies as "AI companions."
That humanlike fluency, according to the authors, lends these systems an aura of authority.
"The more 'human' an AI feels, the more we trust it, regardless of its actual reliability," they wrote.