Thursday, November 14, 2024

European AI Act: Implications for the financial services industry

The European Commission’s forthcoming Artificial Intelligence Act (‘AI Act’) is set to regulate the use of artificial intelligence in the European Union. Once approved, the AI Act will become the world’s first comprehensive AI law. Experts from FiSer Consulting outline the key pillars of the new regulation and its expected impact on the financial services sector.

As part of its digital strategy, the EU wants to regulate artificial intelligence to ensure better conditions for the development and use of the technology. Under the AI Act, proposed in April 2021, AI systems that can be used in different applications are analysed and classified according to the risk they pose to users; the different risk levels then determine how much regulation applies.

In June 2023, members of the European Parliament agreed their negotiating position on the AI Act, a significant step towards its adoption. EU member state officials have now begun discussions on the final form of the law.


Purpose of the EU AI Act and its key provisions

Supported by two objectives – fostering AI adoption and mitigating technology-induced risks – the EU AI Act sets forth the European vision of trustworthy AI. The Act’s purpose includes:
• Protection of European citizens against AI misuse
• Guaranteeing transparency and trust
• Catalysing innovation without sidelining safety and privacy

Adopting a pragmatic, risk-adjusted approach, the AI Act groups AI applications into four categories:
1) Unacceptable risk: AI applications perceived as harmful to fundamental rights face an outright ban
2) High risk: an extensive review precedes the deployment of these AI applications, with a suite of legal prerequisites in place
3) Limited risk: organizations would face transparency obligations
4) Low and minimal risk: these applications largely remain unregulated, ensuring fluid innovation

The AI Act regulates AI practices and systems by setting strict requirements for transparency, risk management and governance.
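To make the tiered approach concrete, the sketch below shows how an institution might map individual AI use cases to risk tiers and the obligations attached to each. It is a minimal illustration in Python: the use cases, tier assignments and obligation lists are simplified examples based on the categories described above, not the Act’s authoritative annexes.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # banned outright
    HIGH = "high"                   # legal prerequisites before and after deployment
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # largely unregulated

# Illustrative, non-exhaustive tier assignments; the authoritative
# categories are those in the Act's final text and annexes.
EXAMPLE_USE_CASES = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "credit scoring of customers": RiskTier.HIGH,
    "cv screening for recruitment": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited: do not deploy"],
    RiskTier.HIGH: [
        "risk management process",
        "data governance and bias controls",
        "technical documentation and logging",
        "human oversight",
        "conformity assessment before deployment",
    ],
    RiskTier.LIMITED: ["inform users they are interacting with an AI system"],
    RiskTier.MINIMAL: ["no specific obligations (voluntary codes of conduct)"],
}

def obligations_for(use_case: str) -> list:
    """Return the illustrative obligations attached to a use case's tier."""
    tier = EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)
    return OBLIGATIONS[tier]

for case, tier in EXAMPLE_USE_CASES.items():
    print(f"{case} ({tier.value}): {obligations_for(case)}")
```

In practice such a mapping would be maintained as part of an institution’s AI inventory and reviewed against the final legal text rather than hard-coded.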

Scope and implementation timelines

The AI Act’s jurisdiction extends beyond where an AI system is developed: it covers any AI system introduced, marketed, or operated within the European Union.

The EU AI Act’s journey started with its proposal in April 2021. By the end of 2022, the Council had adopted its general approach to the legislation, and the European Parliament agreed its negotiating position in June 2023. The pathway to implementation involves harmonising the final text and translating its overarching requirements into tangible technical standards.

The expected start of the implementation phase is now set for the beginning of 2025.

Enforcement and challenges

Though visionary, the EU AI Act is not without challenges. Its enforcement demands precise monitoring and continuous review, and potential inflexibility and carve-out exceptions are possible shortcomings.

In theory, the classification of AI-based systems is straightforward; in practice, however, determining the specific risk level of a system is a complex, layered process. The success of the AI Act will also rely on global regulatory harmonization, encouraging an international AI ecosystem grounded in shared ethical principles.

Specific impact on the Financial Services sector

In line with the global influence of the General Data Protection Regulation (GDPR), the AI Act is poised to emerge as a universal benchmark for defining AI’s ethical use, irrespective of geographical borders. For the financial sector, a domain closely intertwined with AI for fraud detection, algorithmic trading, risk analysis, and enhanced customer experiences, the Act presents a double-edged sword.

Financial institutions must adjust their AI systems to ensure they comply with the Act’s directives, especially around high-risk systems such as credit scoring. The AI Act stresses the necessity of transparent, interpretable AI models and mandates the use of unbiased, high-quality data. Non-compliance may result in substantial financial penalties.
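As an illustration of the kind of data check this points to, the sketch below computes a simple approval-rate ratio across two groups for a hypothetical credit-scoring model. The group labels, decision data and the 0.8 threshold are illustrative assumptions: the threshold is the common ‘four-fifths’ rule of thumb from fairness practice, not a figure taken from the AI Act.

```python
def disparate_impact_ratio(outcomes):
    """Approval-rate ratio between the least- and most-favoured groups.

    `outcomes` is a list of (group_label, approved) pairs, e.g. decisions
    produced by a credit-scoring model on a test set.
    """
    approved, totals = {}, {}
    for group, is_approved in outcomes:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(is_approved)
    rates = {g: approved[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values())

# Hypothetical decisions split by an illustrative attribute;
# real checks would use production or test data.
decisions = (
    [("group_a", True)] * 80 + [("group_a", False)] * 20
    + [("group_b", True)] * 55 + [("group_b", False)] * 45
)

ratio = disparate_impact_ratio(decisions)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # 'four-fifths' rule of thumb, not an AI Act threshold
    print("Potential bias: review features, training data and decision thresholds.")
```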

At the same time, aligning with the AI Act gives institutions the chance to win consumer trust, ensure ethical AI operations, and potentially achieve competitive differentiation in the market. This will require a commitment of resources (money and time) from organizations and investment in specific regulatory knowledge.

A ‘wait and see’ approach is not recommended. Financial institutions should proactively evaluate their AI systems to determine which ones fall under the Act’s high-risk scenarios. Conducting a comprehensive gap analysis against the essential requirements outlined in the Act would be advisable.
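One way to start such a gap analysis is to keep an inventory of AI systems and track evidence against a checklist of high-risk requirements. The sketch below is a minimal illustration: the requirement names, system names and evidence sets are hypothetical, and the authoritative list of requirements is the Act’s own text.

```python
from dataclasses import dataclass, field

# Illustrative checklist distilled from the high-risk requirements discussed
# above; not the Act's authoritative wording.
REQUIREMENTS = [
    "risk management process documented",
    "training data quality and bias assessed",
    "technical documentation maintained",
    "automatic logging of system activity",
    "human oversight defined",
    "accuracy and robustness tested",
]

@dataclass
class AISystem:
    name: str
    high_risk: bool
    evidence: set = field(default_factory=set)  # requirements already evidenced

    def gaps(self):
        """Return the requirements not yet evidenced for a high-risk system."""
        if not self.high_risk:
            return []
        return [r for r in REQUIREMENTS if r not in self.evidence]

inventory = [
    AISystem("credit scoring model", high_risk=True,
             evidence={"technical documentation maintained", "human oversight defined"}),
    AISystem("marketing chatbot", high_risk=False),
]

for system in inventory:
    print(system.name, "->", system.gaps() or "no gaps identified")
```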

Conclusion

Europe’s forthcoming AI Act stands as evidence of the continent’s proactive approach to AI governance, striving for a balance between innovation and ethics. For the financial sector, this Act isn’t just about compliance; it’s an opportunity to redefine the ethos of financial services in the age of artificial intelligence.
