CFOtech Asia - Technology news for CFOs & financial decision-makers

Trust in generative AI rises but safeguards & investment lag

Fri, 3rd Oct 2025

Research commissioned by SAS has revealed that trust in generative AI globally has increased, even as significant gaps in AI safeguard investments remain.

The IDC Data and AI Impact Report: The Trust Imperative surveyed 2,375 respondents from IT and business backgrounds across North America, Latin America, Europe, the Middle East and Africa, and Asia Pacific. Participants consisted of both technology professionals and line-of-business leaders, providing a range of perspectives on current AI use, impact, and trustworthiness in the workplace.

Rise of generative AI

The report notes that IT and business leaders currently express higher trust in generative AI than in other AI forms. Specifically, 48% of respondents reported "complete trust" in generative AI, compared to 33% for agentic AI and just 18% for traditional machine learning-based AI. This trend prevails despite the comparatively recent introduction and explainability challenges of generative AI technologies.

"Our research shows a contradiction: that forms of AI with humanlike interactivity and social familiarity seem to encourage the greatest trust, regardless of actual reliability or accuracy," said Kathy Lange, Research Director of the AI and Automation Practice at IDC. "As AI providers, professionals and personal users, we must ask: GenAI is trusted, but is it always trustworthy? And are leaders applying the necessary guardrails and AI governance practices to this emerging technology?"

According to the study, visibility and use of generative AI now stands at 81%, eclipsing traditional AI at 66%. The emergence of these technologies has also created new risk factors, particularly around responsible deployment and oversight.

Trust gaps and organisational investment

The report identifies a discrepancy between declared trust levels in AI and tangible investments in safeguarding measures. While 78% of organisations report full trust in AI, only 40% have committed resources to formal AI governance, explainability initiatives, or ethical safeguards. Ethical matters and practical risks remain significant, as respondents highlighted ongoing concerns with data privacy (62%), transparency and explainability (57%), and ethical usage (56%).

Quantum AI is also showing growth in trust levels, with 26% of respondents expressing complete trust and nearly one-third familiar with the technology. However, most real-world applications of quantum AI are still at an early development stage.

ROI connected to trust practices

The research shows that organisations prioritising trustworthy AI - defined as those investing significantly in governance frameworks, responsible AI policies, and relevant technologies - are 60% more likely to report doubling the return on investment for AI projects. Only a minority are currently prioritising these measures: just 2% of respondents named the development of an AI governance framework as a top organisational priority, and fewer than 10% have introduced a responsible AI policy.

Respondents were categorised as either trustworthy AI leaders or followers. Those in the leader category, who invest in practices to make AI systems more reliable and transparent, report a 1.6 times higher likelihood of achieving doubled returns on their AI investments compared with followers.

Challenges with data management

Data quality and governance remain crucial for trusted and effective AI. Respondents cited three primary obstacles to successful AI implementation: weak data infrastructure, poor governance processes, and insufficient AI expertise. Noncentralised or suboptimal cloud data environments were the most common hurdle, reported by 49% of organisations, followed by insufficient governance processes (44%) and a shortage of skilled specialists (41%).

Difficulty accessing relevant data sources is the leading problem for AI deployment, according to 58% of survey participants. Additional concerns included data privacy and compliance (49%) and data quality (46%).

"For the good of society, businesses and employees - trust in AI is imperative," said Bryan Harris, Chief Technology Officer at SAS. "In order to achieve this, the AI industry must increase the success rate of implementations, humans must critically review AI results, and leadership must empower the workforce with AI."

The study suggests that as AI systems become more ingrained in critical operations, the foundations and quality of data play a growing role in both maximising returns and reducing potential risks.
