Message from the CEO, Nina Schindler
"The topic of Artificial Intelligence (AI) has been high on the EU’s agenda for a number of years now, and rightly so. AI has great potential to extract more value from all the data circulating in our economy, including in the financial sector. Indeed, looking at its commercial capability, the different economic blocs around the world are carefully observing each other’s progress. The European Council took a clear stance on where it wants Europe to be in this context by concluding in October 2020 that “the EU needs to be a global leader in the development of secure, trustworthy and ethical Artificial Intelligence”. However, certain reservations cannot be swept under the carpet. Different stakeholder communities, for example, have concerns about the possible bias that applications relying on AI can generate. It is no surprise that the European Commission submitted a proposal for a Regulation laying down harmonised rules on AI. Co-operative banks see the potential of AI to improve their services to customers and members but are also aware of its possible downsides. Thus, EACB members look at the Commission’s proposal with an open mind, as they subscribe to its ambition to ensure that Europeans can trust what AI has to offer."
3 Questions to Mattias Levin, European Commission, DG FISMA, Deputy Head of Unit - Digital Finance
Mattias Levin is the Deputy Head of Unit of the Digital Finance Unit of the European Commission’s Financial Stability, Financial Services and Capital Markets Union DG ("FISMA"). At DG FISMA, he has previously worked on regulation related to banks, investment firms, conglomerates and financial market infrastructures. Before joining DG FISMA, he was a member of the Bureau of European Policy Advisers (BEPA), a think tank attached to the President of the European Commission. Prior to joining the Commission, Mattias was a Research Fellow at the Centre for European Policy Studies (CEPS). Mattias studied at the London School of Economics, Lund University and the Institut d’Etudes Politiques of Strasbourg.
_____________________________
- Three years after the announcement by European Commission President Ursula von der Leyen, the long-awaited legislative proposal on Artificial Intelligence (AI) is on the table, ready to be discussed by the European Parliament, the Council, and the many stakeholders willing to have their say on this very interesting proposal. Could you please elaborate on its key aspects impacting the financial services sector?
The proposal for a regulation on Artificial Intelligence is another manifestation of one of the key priorities of the European Commission, as set out, for example, in the digital finance strategy: to ensure that Europe leads the way on digital, making the benefits of digital finance available to European consumers and firms, based on European values and a sound regulation of risks.
Artificial Intelligence can bring benefits to consumers and firms, but if it is not designed or used properly, risks in terms of safety, security and fundamental rights may arise. The Commission has therefore in recent years elaborated policies using the broad range of instruments at its disposal – including funding – with the aim of helping AI develop and ensuring that AI technologies work for people.
The proposal for a regulation fits into this broader context. It lays down the rules that would apply to AI systems placed or put into service in the internal market. While horizontal in nature, it applies a risk-based approach. Some AI systems exhibit unacceptable risk and are hence prohibited (e.g. social scoring). Others exhibit very limited risks and are hence permitted without any restrictions. In between lie AI systems that will be permitted subject to compliance with AI requirements and an ex-ante conformity assessment (“high-risk systems” listed in Annexes II and III of the proposal).
One of the categories of high-risk AI systems is those that may affect people’s access to essential public or private services. Within this category, the regulation identifies one AI system in the area of finance, namely AI systems intended to be used to evaluate the creditworthiness of persons or establish their credit score, with the exception of AI systems put into service by small-scale providers for their own use.
This list is not set in stone. Other AI systems, including in financial services, may over time be captured by the regulation should risks evolve, in line with a methodology and process set out in the regulation to keep pace with fast technological and market developments in this area.
- Co-operative banks have been exploring the possibilities offered by AI systems with the GDPR in mind. When we look at the proposal, there are a couple of elements relevant to us, in particular in the area of credit scoring and creditworthiness assessment. Banks are among the earliest adopters of AI systems. Combining the definition of an AI system with the techniques and approaches of Annex I of the proposal, we observe that the scope of the Regulation becomes quite wide, as it also includes rule-based or traditional statistical approaches. What is the rationale behind this approach?
The aim is to have a neutral and future-proof definition of AI: to capture techniques that are not yet known or developed while at the same time covering approaches that are already known. This includes traditional symbolic AI, machine learning, as well as mixed or hybrid approaches. These techniques, however, should not be seen in isolation: the other conditions listed in the definition must also be fulfilled. These are aligned with an internationally recognised definition developed by the OECD.
As regards credit scoring models in particular, these indeed often use statistical approaches. However, if we focus on high-risk scenarios (the scope of the AI regulation), some statistical models have the same properties and pose the same risks to fundamental rights as other complex machine learning models when widely scaled and applied in real-world scenarios. They can also suffer from bias and be complex and unpredictable in their outcomes, so proper documentation and management of all these risks is needed.
- The proposal foresees a conformity assessment for AI systems used for creditworthiness assessments and credit scoring by credit institutions. Would you envisage the development of a new dedicated conformity assessment process or could credit institutions rely on the conformity assessment analysis already foreseen under the present supervisory review and evaluation framework?
The conformity assessment procedure for compliance with the new AI requirements is stipulated in the AI regulation itself. For credit scoring and creditworthiness assessment, this is a self-assessment by the provider (see Annex VI of the proposal), which will be fully integrated into the existing supervisory review and evaluation process carried out by the financial supervisory authorities under the Capital Requirements Directive 2013/36/EU (CRD). This will help financial supervisory authorities evaluate conformity with the new AI requirements as part of the issues checked during the regular evaluations and reviews of the regulated credit institutions.
Second Opinion from Gilles Saint-Romain, Head of Digital European Public Affairs at Groupe BPCE
After 10 years in charge of retail public affairs, Gilles has been in charge of digital public affairs at Groupe BPCE, the second-largest banking group in France, for five years now.
Gilles works daily to maintain an open and fruitful dialogue with the European Institutions throughout the process of drafting, adopting, implementing and enforcing digital developments in EU financial legislation. This mission builds on Groupe BPCE’s internal coordination, which brings together experts from the main business lines to analyse new digital-related regulatory initiatives and to define Groupe BPCE’s positions, including representation in different committees and working groups of the banking industry in France, in Europe and at the international level. Gilles has an educational background in Marketing and European Studies.
_____________________________
Groupe BPCE operates in the retail banking and insurance fields in France via its two large co-operative networks, Banque Populaire and Caisse d’Epargne, along with Banque Palatine. Thanks to our co-operative model, we enjoy a long-term vision of banking relationships and an approach that gives priority to human relations. It embodies the very meaning of the commitment we make to our customers, co-operative shareholders, partners, and employees.
As a longstanding and active member of the EACB, I am proud to lead the work of the EACB Digitalisation and Use of Data Working Group (DUD WG), which follows the EU’s data- and cybersecurity-related policy discussions with a view to analysing their impact on co-operative banks. The DUD WG has been looking at the topic of AI for a while now. It has examined policy documents focused on the finance sector in particular, such as the ESAs’ 2016 and 2017 Discussion Papers on automation in financial advice and on the use of Big Data by financial institutions, as well as on FinTech in 2017, and the European Commission’s 2017 consultation ‘FinTech: a more competitive and innovative European financial sector’, and more recently responded to the White Paper on AI in 2020. The EACB DUD WG is presently scrutinising the AI proposal in more detail. BPCE’s input to the group will be the following:
Artificial Intelligence is not an end in itself for Groupe BPCE. It is first and foremost a well-considered trade-off between meeting market needs, regulatory contingencies (data protection) and the search for performance (the potential for innovation). Our group has been working for many years on data-related topics and continues to work towards including more data at every stage of its strategy, whether to help our advisors better identify and meet their clients’ needs, to improve our operational efficiency or to better anticipate and manage our risks. We have strong convictions and high ambitions related to data and welcome the AI proposal both for its proposed technology-neutral and future-proof definition of AI and for its risk-based approach enabling proportionate regulation.
We particularly support the recognition that EU legislation on financial services already includes internal governance and risk management rules and requirements which are applicable to regulated financial institutions in providing those services, including when they make use of AI systems. We also welcome the decision to designate the authorities responsible for the supervision and enforcement of the financial services legislation (including where applicable the European Central Bank) as competent authorities for the purpose of supervising the implementation of this Regulation. Ensuring coherent application and enforcement of the obligations under the new AI Regulation and relevant rules and requirements of the Union financial services legislation is of paramount importance.
We share the European Commission's view that it is appropriate to integrate the conformity assessment procedure and some of the providers’ procedural obligations in relation to risk management, post-market monitoring and documentation into the existing obligations and procedures under the CRD.
Regarding the European Commission’s wish to encourage providers of non-high-risk AI systems to create codes of conduct intended to foster the voluntary application of the mandatory requirements applicable to high-risk AI systems, and to apply additional requirements on a voluntary basis, we agree with the principle and concur that it could generate virtuous behaviour on the market. Nevertheless, this may lead to a multiplication of different voluntary codes with very different levels of commitment, which may ultimately cause confusion among users and consumers. Moreover, those codes of conduct could represent a new regulatory layer that could hinder innovation and, in the end, go against the European Commission’s original goal of a proportionate approach.
Finally, Groupe BPCE considers it important that the EU succeeds in its ambition to spearhead the development of new global norms to make sure AI can be trusted both for the benefit of consumer privacy protection and for a level playing field globally.