CFOtech Asia - Technology news for CFOs & financial decision-makers

APAC banks struggle with manual compliance amid AI push

Thu, 29th Jan 2026

Compliance teams across banks and asset managers in Singapore, Malaysia and Australia still rely heavily on manual processes, despite growing interest in artificial intelligence, according to a joint survey by Fenergo and Risk.net.

The survey of 110 risk, financial crime and compliance professionals found that 66% described their compliance workload as "heavy" and manual. More than half (54%) reported periodic know-your-customer (KYC) review backlogs. A further 45% cited high false-positive rates across KYC, screening and transaction monitoring.

The research also pointed to a gap between exploration and deployment of AI in compliance functions. Some 54% said they were exploring AI use cases. Only 34% had started implementation. Another 13% said they were not using AI at all.

Manual pressure

The findings highlight workload constraints in areas such as customer reviews and ongoing monitoring. Respondents linked these pressures to backlogs and alerts that require investigation, alongside operational processes that still depend on staff time and judgement.

Fenergo said the region's operating environment added complexity for compliance teams. The company pointed to differences in language and regulation across markets and the effect on data consistency.

"Compliance in APAC continues to feel unusually manual, largely due to the region's linguistic and regulatory complexity," said Bryan Keasberry, APAC Head of Market Development, Fenergo. "Institutions are operating across fragmented regulatory regimes, diverse languages and complex data environments. That makes data consistency and quality difficult to achieve, and without those foundations, AI adoption inevitably slows."

Barriers cited

Respondents ranked operational efficiency as the main driver for AI investment, ahead of cost reduction and task automation. The survey identified data quality as the biggest obstacle to AI adoption. Integration with legacy systems followed. Respondents also raised regulatory compliance concerns.

The results suggest many institutions still see foundational work as a prerequisite for broader deployment. Data issues and system integration challenges can affect how models perform and how results get operationalised in production environments.

Automation limits

The survey found limited familiarity with agentic AI among compliance teams. Only 6% said they were "very familiar" with agentic AI in compliance. Another 53% described themselves as "somewhat familiar", while a further 29% said they were "not familiar at all".

That caution also appeared in attitudes to automation. Some 66% said they would only feel comfortable with partial automation. Another 33% preferred significant automation. No respondents selected full automation.

"The challenges faced by firms today reflect the realities of legacy operating models and the need to balance innovation with regulatory accountability," said Keasberry. "For organisations at earlier stages of AI adoption, keeping human oversight in the loop remains essential. Regulators expect AI systems to be explainable and well governed. The findings suggest institutions want to demonstrate control over how AI models operate and how decisions are reached, particularly in high-risk areas such as KYC, AML and fraud," he added.

Where AI fits

Despite a cautious stance on full automation, respondents signalled growing interest in more advanced uses of AI. The survey found that 44% were considering agentic AI. Respondents cited transaction monitoring, fraud detection and sanctions screening as the main areas of interest.

The results indicate institutions see scope for AI systems to take on parts of investigative and monitoring workflows. They also suggest that adoption decisions still depend on governance models, explainability requirements and the ability to demonstrate accountability over outputs.

Regulatory focus

The research framed adoption in the context of regulatory scrutiny and expectations about controls. Firms in the region operate across several regulators and supervisory approaches. Respondents reported continued emphasis on explainability, governance and oversight as institutions assess AI systems for compliance uses.

"What this means for markets such as Singapore, Malaysia and Australia is that compliance transformation cannot be rushed," said Keasberry.

"Regulators are supportive of innovation, but expectations around governance, explainability and accountability are rising. Institutions need to strengthen data foundations and embed controlled, human-led automation first, before scaling more advanced AI capabilities as confidence and regulatory clarity continue to develop.

"The next phase of compliance transformation in APAC will be shaped by steady progress in data quality, platform integration and trust in automation as institutions move toward scalable, regulator-ready deployment."