Since the European Union's AI Act entered into force in August 2024, companies have been required to be more transparent about their use of artificial intelligence and to manage its risks carefully. Yet Highberg research shows that more than half of Dutch organizations are not yet properly prepared.
Why this law matters
The AI Act aims to make AI systems safe, ethical and transparent. And that is badly needed, especially in sectors where AI is used for customer monitoring, risk assessment or the detection of unusual transactions. Think of algorithms for anti-money laundering checks or customer profiling.
What does this mean concretely for AML institutions?
The AI Act and AML intersect precisely where things get interesting: risk-based decision making. Suppose you use an algorithm for transaction analysis or for monitoring customer behavior. The AI Act then requires not only that you can explain how the system works, but also that you systematically assess whether it produces discriminatory or unlawful outcomes.
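To make that second requirement tangible, here is a minimal sketch of what a recurring outcome check could look like. It is illustrative only: the AI Act does not prescribe a specific fairness metric, the function names and sample data are invented for this example, and the 0.8 cut-off is the "four-fifths" heuristic borrowed from US employment practice, not a statutory threshold.

```python
from collections import defaultdict

def flag_rates_by_group(decisions):
    """decisions: iterable of (group, flagged) pairs from a monitoring model."""
    counts = defaultdict(lambda: [0, 0])  # group -> [flag count, total]
    for group, flagged in decisions:
        counts[group][0] += int(flagged)
        counts[group][1] += 1
    return {g: flags / total for g, (flags, total) in counts.items()}

def disparate_impact_ratio(rates):
    """Lowest flag rate divided by the highest: values far below 1.0
    suggest one group is flagged disproportionately."""
    return min(rates.values()) / max(rates.values())

# Hypothetical model output: (customer group, was the customer flagged?)
decisions = [("A", True), ("A", False), ("A", False),
             ("B", True), ("B", True), ("B", False)]

rates = flag_rates_by_group(decisions)
print(rates)  # flag rate per group
if disparate_impact_ratio(rates) < 0.8:  # four-fifths heuristic (assumption)
    print("Possible disparate impact: escalate for human review.")
```

Run periodically over production decisions, a check like this gives the "structural assessment" a concrete, auditable form.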
Checklist: Is your organization already in compliance mode?
Have you identified which AI systems are active within your organization? (A minimal register sketch follows this checklist.)
Is it clear what level of risk these systems have?
Is human oversight in place, so that people can review or override AI decisions?
Have you documented how your AI systems work and how risks are monitored?
Are your compliance and IT teams aligned?
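For the inventory, risk-level and documentation questions above, even a very simple machine-readable register can be a starting point. The sketch below is an assumption-laden illustration, not a compliance tool: the field names and the example system are invented, though the four risk categories (unacceptable, high, limited, minimal) do come from the AI Act.

```python
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):  # the AI Act's four risk categories
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str               # which AI system is active
    purpose: str            # what it is used for
    risk_level: RiskLevel   # its classification under the AI Act
    human_oversight: bool   # can people review or override its decisions?
    documentation_url: str  # where its workings and risk monitoring are documented

register = [
    AISystemRecord(  # hypothetical example entry
        name="transaction-monitoring-v2",
        purpose="Flag unusual transactions for AML review",
        risk_level=RiskLevel.HIGH,
        human_oversight=True,
        documentation_url="https://intranet.example/ai/tm-v2",
    ),
]

# A simple gap check: high-risk systems must have human oversight in place.
gaps = [r.name for r in register
        if r.risk_level is RiskLevel.HIGH and not r.human_oversight]
print("Systems missing human oversight:", gaps or "none")
```

A register like this also gives compliance and IT teams a shared artifact to align on, rather than separate spreadsheets.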
The AI Act raises the bar for organizations deploying AI in high-risk contexts, and that emphatically includes AML institutions. Those who invest now in transparency, risk assessment and internal collaboration not only avoid fines and reputational damage, but also build trust with regulators and customers.