Message from the CEO, Nina Schindler
Artificial Intelligence (AI) is driving a societal and economic transformation, sparking discussions on its long-term effects, ethical implications, and risks across various domains, including banking. The landmark EU AI Act, first proposed in 2021, represents a pivotal stride in global regulation, aiming to make algorithms fairer, more transparent, and compliant with legal and ethical standards. Operating on a ‘risk-based approach’, the Act establishes a common framework that adapts horizontal rules to the level of risk an AI system is assessed to pose to users.
Many cooperative banks are increasingly experimenting with AI in various operational and business spheres to enhance efficiency and enrich customer experiences, from the adoption of AI technologies in automated back-office processes to contract review, KYC data collection, and customer service chatbots. Given their long-term commitment to retail clients, cooperative banks have been actively advocating for the Act, endorsing its pursuit to make the technology safe and advantageous for users. The EACB has called for aligning the AI Act obligations with existing EU financial regulations to prevent burdening banks with additional requirements.
With the Plenary vote having taken place this month and Council adoption expected later in April, we are now entering the Act's decisive implementation phase. The Commission is expected to issue various related delegated acts and guidelines and to oversee the required standardisation process. Cooperative banks must be involved in their drafting to ensure that AI standards integrate seamlessly with established financial sector practices and fit the specificities and requirements of cooperative banking. While leveraging AI technologies, cooperative banks remain dedicated to upholding human interaction. Striking a delicate balance between technological advancement and personal touch, they are taking proactive steps to shape responsible AI practices.
3 Questions to Mr Axel Voss, Member of the European Parliament
Mr Axel Voss studied law at the Universities of Trier, Freiburg and Munich. He has been working as a lawyer since 1994. He became a Member of the European Parliament in 2009 and is EPP coordinator for the Committee on Legal Affairs as well as a deputy member of the Committee on Civil Liberties, Justice and Home Affairs. From 2020 to 2022 he was a member and rapporteur in the Special Committee on Artificial Intelligence. Besides questions of European law, his main area of expertise is the digitisation of our daily life.
________________________________________
- From your perspective, what are the key provisions within the AI Act that you believe are the most significant?
The AI Act seeks to regulate the deployment of artificial intelligence (AI) systems within ethical and legal boundaries while fostering innovation. It adopts a risk-based approach with stricter requirements for higher-risk systems.

The Act emphasises the importance of using high-quality data for AI training to mitigate biases and ensure fairness, while also requiring transparency regarding data sources and quality. Users have the right to understand the logic behind automated decisions affecting them, promoting transparency and trust in AI technologies. Human oversight is mandated for certain high-risk AI systems to ensure intervention in critical decisions. Additionally, high-risk AI systems must undergo conformity assessment procedures before being placed on the market, ensuring compliance with legal requirements and ethical standards.

National supervisory authorities will enforce the AI Act, cooperating with the European Commission to ensure consistent application across member states, streamlining regulatory efforts and facilitating cross-border cooperation in addressing AI-related challenges.
- Considering the global nature of AI development, how does the AI Act aim to position European businesses competitively, and what measures are in place to ensure fair competition with non-EU entities?
Indeed, it remains key within the regulation of AI to ensure openness to innovation. We cannot successfully regulate the risks of AI if the technologies are not developed in the EU. In order to safeguard European competitiveness, we introduced a research exemption and space for regulatory sandboxes. Nevertheless, proper implementation of the AI Act will play a huge role in determining whether we remain competitive in AI. In the implementation phase, we need to simplify compliance with the AI Act for EU providers and deployers, in particular by avoiding unnecessary bureaucracy, clarifying legal uncertainties and better supporting innovation in AI.
- With the AI Act’s implications on various sectors, particularly finance, how do you envision its influence on shaping financial practices related to AI adoption in the banking and financial services industry?
The Act's emphasis on transparency and accountability will require banks and financial services firms to provide clear explanations of their AI algorithms' decision-making processes.
Moreover, the Act's ethical guidelines will influence the types of AI applications adopted by financial institutions, prioritising those that align with ethical standards and societal values. This could lead to a shift towards AI solutions that prioritise fairness, inclusivity, and non-discrimination.
Overall, the EU AI Act should serve as a catalyst for responsible AI adoption in all sectors, including the banking and financial services sector.
Second Opinion from Mr Gilles Saint-Romain, Head of Digital Public Affairs, Groupe BPCE
Mr Gilles Saint-Romain is Head of Digital Public Affairs at Groupe BPCE, the second-largest banking group in France, and Chair of the EACB Working Group on Digitalisation and the Use of Data (DUD WG). Gilles works daily to maintain an open and fruitful dialogue with the European institutions throughout the process of drafting, adopting, implementing and enforcing digital developments in EU financial legislation. He is also a member of the Expert Group on European Financial Data Space, set up by the European Commission to provide it with advice and expertise in the field of data sharing in the financial sector.
______________________________________
Looking ahead to the new landscape shaped by the recent formal adoption of the Artificial Intelligence (AI) Act, it is essential to keep a keen eye on the ticking clock of its implementation phase. The Commission has already begun work on various aspects of implementation, marking a step forward in regulating the deployment of AI systems within ethical and legal boundaries.
One crucial aspect of implementation revolves around standardisation efforts led by the European Committee for Standardisation (CEN) and the European Committee for Electrotechnical Standardisation (CENELEC). These bodies are tasked with delivering European standards by April 2025, including standards on risk management systems for AI systems. For entities like banks, this will necessitate complementing current procedures and practices, requiring additional effort to ensure compliance with the new regulations.
Furthermore, the Commission is set to develop guidelines on the practical implementation of the AI Act, particularly focusing on defining AI systems and delineating high-risk use cases. Notably, creditworthiness assessment and credit scoring are among the identified high-risk AI use cases specific to the banking sector. Commissioner McGuinness, speaking at the 8th FinTech and Regulation Conference, emphasised the need for guidance specific to the finance sector, hinting at forthcoming collaboration with the European Supervisory Authorities.
Implementation has to be done right. Ensuring that horizontal workstreams arising from the AI Act are functional for the banking sector is vital. Cooperative banks must be involved in the development of guidelines to foster inclusivity and representation. Equally imperative is ensuring that standards and guidance account for the financial sector’s specificities and requirements, aligning with existing, robust risk management and supervisory processes. Compliance with AI horizontal standards should seamlessly integrate with established financial sector practices.
As part of the implementation of the AI Act, it is crucial to address the implications for competition within the EU and at global level. The EU has long striven to bolster its competitiveness on the global stage, particularly in emerging technologies such as AI. A delicate balance must be struck between regulation and competitiveness, especially that of European companies, including banks, both within Europe and in the global arena. As AI continues to reshape industries worldwide, the EU must ensure that its regulatory framework enables companies to compete effectively and contribute to European growth and innovation.
We can’t discuss AI without addressing data, which is essential for training artificial intelligence systems. Open Finance represents a natural evolution of the market that, when combined with AI, holds significant potential for innovation. It is important that the regulatory framework currently under discussion (FIDA) does not unbalance existing ecosystems and instead supports the emergence of strong European market players.
As we embark on the journey of AI regulation, collaboration, inclusivity and alignment with sector-specific needs will be key to realising the full potential of AI while safeguarding ethical and legal principles.