The EACB welcomes the opportunity to respond to the European AI Office consultation on the Commission guidelines on the application of the definition of an AI system and of the prohibited AI practices established in the AI Act.
Regarding the definition of an AI system, we would like to clarify the distinction between traditional risk models and AI systems. Companies often provide systems implementing various risk models, such as credit scoring and creditworthiness assessments, built with statistical techniques such as regression, other mathematical models, or decision trees. While these models may derive their parameters through statistical analysis prior to deployment, the deployed model can be a deterministic, rule-based solution defined by human decision-making.
We believe that a machine-based system that generates, for example, credit score recommendations for human credit decision-makers is not an AI system. If a statistical model is implemented by human developers as a set of fully deterministic and transparent rules, it should no longer be considered AI. The same rationale should apply to decision tree models, which are statistical in origin but are often coded as transparent, human-confirmable rules.
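To illustrate the kind of system we have in mind, the minimal sketch below shows a hypothetical credit-scoring rule set: its thresholds and point weights stand in for parameters estimated statistically before deployment, while the deployed form consists solely of fixed, human-readable rules that a credit decision-maker can verify. All names and cut-off values are invented for illustration only.

```python
# Minimal sketch of a deployed credit-scoring rule set (hypothetical values).
# The thresholds and point weights below stand in for parameters that were
# estimated statistically before deployment; the deployed logic itself is
# fully deterministic and reviewable by a human credit decision-maker.

def credit_score(income: float, debt_ratio: float, missed_payments: int) -> int:
    """Return an illustrative score; higher means lower estimated risk."""
    score = 0
    if income > 30_000:          # income threshold fixed at development time
        score += 40
    if debt_ratio < 0.35:        # debt-to-income cut-off chosen by analysts
        score += 35
    if missed_payments == 0:     # payment-history rule
        score += 25
    return score

def recommendation(score: int) -> str:
    """Map the score to a recommendation for a human decision-maker."""
    return "refer for approval" if score >= 70 else "refer for manual review"

if __name__ == "__main__":
    s = credit_score(income=42_000, debt_ratio=0.28, missed_payments=0)
    print(s, recommendation(s))
```

In such a deployment, the human developers define and can inspect every rule; the system merely evaluates them and leaves the credit decision to a human.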
Concerning prohibited AI practices, we call for guidance in several areas:
1. Harmful manipulation and deception: We request clear definitions of what constitutes ‘significant harm’ to the financial interests of individuals and groups, along with thresholds or criteria for determining financial harm. Similarly, in the context of exploiting vulnerabilities, the guidelines should define ‘significant harm’ and explicitly exempt AI systems used to comply with regulatory requirements for protecting vulnerable clients in the financial sector. This would prevent unintended restrictions on systems designed to support such individuals.
2. Unacceptable social scoring: We seek guidance to ensure that the use of transaction data to construct payment behaviour scores does not fall under prohibited practices. These systems are essential for assessing the financial health of customers and should be distinguished from harmful social scoring activities. In relation to emotion recognition, guidance is needed on the boundaries of permissible systems, particularly those drawing emotional inferences from text. Additionally, the definition of ‘workplace’ should be narrowed, and concrete examples of safety exceptions, such as detecting and preventing fraudulent activities, should be provided.
3. Biometric categorisation: There is ambiguity regarding which systems are prohibited and which are classified as high-risk. Clarification on whether systems inferring characteristics not explicitly listed in Article 5(1)(g) are automatically classified as high-risk is crucial for ethical and compliant deployment.
4. Crime risk assessment: Anti-fraud and AML/CTF systems that rely on objective data should be explicitly exempted from the prohibited practices, as they play a vital role in ensuring financial security and preventing illegal activities.
These clarifications will help align AI applications with the objectives of the AI Act while maintaining protections for fundamental rights and ethical considerations.