AI Governance in Financial Entities: Navigating Compliance and Risk under DORA and the AI Act
- Koen Vanderhoydonk

Authors: Camelia Iantuc (senior associate), Raluca Bita (associate)
The financial sector experiences significant transformation through Artificial Intelligence (“AI”) which streamlines financial institution processes like fraud detection and prevention, as well as back-office automation and anti-money laundering compliance. While AI delivers unprecedented efficiencies, it also introduces several risks and vulnerabilities —from cybersecurity threats and data breaches to algorithmic biases and operational failures.
Both Regulation (EU) 2022/2554 on digital operational resilience for the financial sector (“DORA”) and Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (the “AI Act”) address the risks arising from digital services and the use of AI, while also defining responsibilities for the management bodies of financial institutions.
1. AI Governance under DORA
DORA has applied since 17 January 2025, directly across the EU and without requiring implementation into national legislation. DORA aims to enhance the operational resilience of financial entities against Information and Communication Technology (“ICT”) risks, including risks posed by AI-powered systems, provided that the services performed through such systems fall under the definition of ICT services in scope of DORA.
DORA imposes several requirements and obligations on the management bodies of financial institutions in terms of ICT and AI risk management:
· Risk management framework
Financial entities must maintain an internal governance and control framework that ensures effective and prudent management of ICT risks. Financial entities that use AI technologies need to include AI governance as part of this broader risk management framework: AI-specific risks such as bias and adversarial attacks must be incorporated into the entity-wide framework, together with effective AI risk reporting and mitigation strategies. In practice, this means that financial entities will need to carefully scrutinize their internal risk management procedures, taking into account not only the requirements of DORA but also the “modern” risks deriving from the use of AI technologies.
Financial entities could create multidisciplinary teams, including members from IT, Legal, Human Resources and Risk Management, to monitor AI performance, and may choose to appoint dedicated AI governance structures, such as compliance officers or ethics committees, to further ensure a sound risk management framework. A loose sketch of how AI-specific risks might be captured for reporting purposes follows below.
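By way of illustration only, the following Python sketch models a minimal AI risk register of the kind such a team might maintain. All names and fields (AIRiskCategory, AIRiskEntry, the likelihood/impact scale, the reporting threshold) are hypothetical assumptions, not structures prescribed by DORA.

```python
# Minimal sketch of an AI risk register, assuming a hypothetical internal
# schema; none of these names or scales are prescribed by DORA.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class AIRiskCategory(Enum):
    BIAS = "algorithmic bias"
    ADVERSARIAL = "adversarial attack"
    DATA_BREACH = "data breach"
    OPERATIONAL = "operational failure"


@dataclass
class AIRiskEntry:
    system_name: str                 # e.g. "credit-scoring-model-v3"
    category: AIRiskCategory
    owner: str                       # accountable function, e.g. "Risk Management"
    likelihood: int                  # 1 (rare) .. 5 (almost certain)
    impact: int                      # 1 (negligible) .. 5 (severe)
    mitigations: list[str] = field(default_factory=list)
    last_reviewed: date = field(default_factory=date.today)

    @property
    def score(self) -> int:
        """Simple likelihood x impact score used to prioritise reporting."""
        return self.likelihood * self.impact


# Entries above a reporting threshold would feed the management body's
# periodic ICT/AI risk report.
register = [
    AIRiskEntry("credit-scoring-model-v3", AIRiskCategory.BIAS,
                owner="Risk Management", likelihood=3, impact=4,
                mitigations=["quarterly fairness audit",
                             "human review of declines"]),
]
to_report = [entry for entry in register if entry.score >= 9]
```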
· Governance and responsibilities of management bodies
Under DORA, the management bodies of financial entities shall define, approve, oversee and be responsible for the implementation of all arrangements related to the ICT risk management framework. In this regard, management bodies of financial entities must put in place ICT governance policies, strategies and risk tolerance levels.
Furthermore, management bodies of financial entities must oversee the effectiveness of AI controls, security protocols, and transparency measures and ensure AI models align with the financial entities’ risk management strategies.
Complying with these requirements is of the utmost importance for management bodies, considering that, according to DORA, management bears the ultimate responsibility for managing the financial entity’s ICT and AI risk. It is important to note that, so far, it remains unclear exactly which management body will bear this responsibility in financial entities with a more complex management structure: will it stop at upper management or extend to the supervisory bodies as well?
· Third-party risk management
Under DORA, management bodies must approve and periodically review a financial entity’s policy on arrangements regarding the use of ICT services provided by ICT third-party service providers. Thus, management bodies of financial institutions relying on AI-powered systems provided by third parties must verify that third-party AI providers comply with DORA’s cybersecurity and resilience standards.
In addition, financial entities are required to conduct due diligence before entering into contractual relations with AI service providers and to monitor those providers on a continuous basis.
Simply put, outsourcing AI does not transfer responsibility under DORA: financial entities remain responsible for any ICT- and AI-related risks. A rough illustration of a due diligence check is sketched below.
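By way of example only, the following sketch shows how such due diligence criteria could be captured as a simple pre-contract check that is re-run as part of continuous monitoring. The criteria names (incident reporting SLA, audit access, exit strategy, and so on) are illustrative assumptions and not an exhaustive mapping of DORA’s third-party requirements.

```python
# Hypothetical due-diligence checklist for an AI service provider;
# the criteria are illustrative, not an exhaustive DORA mapping.
from dataclasses import dataclass


@dataclass
class ProviderAssessment:
    provider: str
    has_incident_reporting_sla: bool   # supports timely incident notification
    allows_audit_access: bool          # contractual audit and inspection rights
    has_exit_strategy: bool            # documented exit/termination plan
    tested_resilience: bool            # evidence of resilience testing
    subcontractors_disclosed: bool     # visibility into the ICT supply chain


def due_diligence_passed(a: ProviderAssessment) -> bool:
    """All criteria must hold before contracting; re-run periodically
    as part of continuous monitoring, not only at onboarding."""
    return all([
        a.has_incident_reporting_sla,
        a.allows_audit_access,
        a.has_exit_strategy,
        a.tested_resilience,
        a.subcontractors_disclosed,
    ])


vendor = ProviderAssessment(
    provider="acme-ai-scoring",
    has_incident_reporting_sla=True,
    allows_audit_access=True,
    has_exit_strategy=False,   # gap: would block onboarding and be escalated
    tested_resilience=True,
    subcontractors_disclosed=True,
)
assert not due_diligence_passed(vendor)
```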
2. AI Governance under the AI Act
The AI Act will become applicable on 2 August 2026 and requires financial institutions to establish clear governance structures for high-risk AI applications, ensuring transparency, fairness, and human oversight.
The AI applications most commonly used by financial institutions, namely credit scoring, fraud detection and AML compliance, may be considered high-risk AI, requiring strict governance, bias mitigation and compliance with transparency obligations, while chatbots or robo-advisors are generally considered limited-risk AI, for which only compliance with basic transparency obligations is required, as illustrated in the toy mapping below.
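Purely as an illustration of this tiering, the toy mapping below encodes the article’s reading of where these use cases may fall. The classification of any concrete system is a case-by-case legal assessment under the AI Act, not a dictionary lookup; every entry here is an assumption.

```python
# Toy mapping of the use cases above to AI Act risk tiers as this article
# reads them; an actual classification is a case-by-case legal assessment.
from enum import Enum


class RiskTier(Enum):
    HIGH = "high-risk (strict governance, bias mitigation, transparency)"
    LIMITED = "limited-risk (basic transparency obligations)"


USE_CASE_TIERS: dict[str, RiskTier] = {
    "credit scoring": RiskTier.HIGH,
    "fraud detection": RiskTier.HIGH,
    "aml compliance": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "robo-advisor": RiskTier.LIMITED,
}


def required_controls(use_case: str) -> RiskTier:
    # Default to HIGH when unsure: the safer assumption pending legal review.
    return USE_CASE_TIERS.get(use_case.lower(), RiskTier.HIGH)
```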
For financial institutions, the AI Act provides that certain obligations regarding AI governance shall be deemed fulfilled by complying with the rules on internal governance arrangements or processes under the relevant EU financial services legislation, including those provided in DORA.
· Human oversight requirements
Under the AI Act, high-risk AI systems must incorporate human oversight features that serve to protect fundamental rights and ensure fairness in financial decisions. AI systems must not operate autonomously in essential financial activities such as credit scoring, credit approvals and risk assessments. Financial institutions therefore need to implement active human supervision of AI-generated decisions in order to maintain accountability and prevent unchecked errors or biases; a minimal illustration follows below. Essentially, AI should enhance human decision-making rather than replace it.
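The sketch below shows one minimal way such a human-in-the-loop gate could be wired: the model only recommends, and a human reviewer always issues the final decision. The function and field names (CreditDecision, decide, human_review) are hypothetical and not mandated by the AI Act.

```python
# Minimal human-in-the-loop gate: the model proposes, a human disposes.
# Names are illustrative, not mandated by the AI Act.
from dataclasses import dataclass
from typing import Callable


@dataclass
class CreditDecision:
    applicant_id: str
    model_recommendation: str   # "approve" or "decline"
    model_confidence: float     # 0.0 .. 1.0


def decide(decision: CreditDecision,
           human_review: Callable[[CreditDecision], str]) -> str:
    """The AI never finalises the outcome on its own: every recommendation
    passes through a human reviewer who can confirm or override it."""
    final = human_review(decision)          # human remains the decision-maker
    # Log both outputs so overrides (and their frequency) stay auditable.
    print(f"{decision.applicant_id}: model={decision.model_recommendation} "
          f"({decision.model_confidence:.2f}), final={final}")
    return final


# Example: a reviewer overrides a low-confidence decline.
outcome = decide(
    CreditDecision("APP-1042", "decline", 0.58),
    human_review=lambda d: "approve",
)
```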
· Bias audits for AI models
In addition to human oversight, the AI Act provides that high-risk AI systems must be trained on quality datasets that are sufficiently representative, as part of the Act’s measures to reduce algorithmic bias and discrimination. Financial institutions must perform regular fairness audits to detect and correct biases in AI systems used for credit scoring, loan approvals and fraud detection; one common screening metric is sketched below. Through these audits, financial institutions verify that their AI systems do not discriminate unintentionally.
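As one hedged example of what such an audit could compute, the sketch below applies the widely used “four-fifths” disparate impact ratio to a model’s approval decisions. Both the metric choice and the 0.8 threshold are assumptions borrowed from general fairness practice, not thresholds set by the AI Act.

```python
# Sketch of a periodic fairness audit using the "four-fifths" disparate
# impact ratio; metric and threshold are assumptions, not AI Act rules.
from collections import defaultdict


def disparate_impact(outcomes: list[tuple[str, bool]],
                     threshold: float = 0.8) -> dict[str, float]:
    """outcomes: (group, approved) pairs from a scoring model's decisions.
    Returns each group's approval rate relative to the best-served group;
    ratios below the threshold get flagged for investigation."""
    approved: dict[str, int] = defaultdict(int)
    total: dict[str, int] = defaultdict(int)
    for group, ok in outcomes:
        total[group] += 1
        approved[group] += ok
    rates = {g: approved[g] / total[g] for g in total}
    best = max(rates.values())
    ratios = {g: rate / best for g, rate in rates.items()}
    for g, ratio in ratios.items():
        if ratio < threshold:
            print(f"flag: group '{g}' ratio {ratio:.2f} below {threshold}")
    return ratios


# Group B's approval rate is half of group A's, so it gets flagged.
disparate_impact([("A", True), ("A", True), ("A", False),
                  ("B", True), ("B", False), ("B", False)])
```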
In conclusion, financial entities must develop comprehensive legal and regulatory structures to address AI-related risks concerning bias, transparency, cybersecurity and ethical decision-making. Financial institutions must comply with the requirements under DORA and the AI Act to ensure that AI-driven processes follow risk management standards and governance oversight controls.
In this context, AI governance is not merely a compliance exercise but a legal imperative that safeguards the institution against regulatory scrutiny and liability. By aligning AI governance structures with DORA’s ICT risk management provisions and the AI Act’s high-risk AI requirements, financial institutions can demonstrate regulatory compliance, operational resilience, and due diligence in the deployment of AI-driven financial services.