Desafíos actuales de la Inteligencia Artificial

4.1. Algorithmic credit scoring systems and their discriminatory outcomes

In the financial services sector, creditworthiness AI systems play a crucial role in assessing the credit risk of individuals and businesses, evaluating the likelihood of a borrower defaulting on a loan (Aggarwal, 2021). These systems analyse a wide range of data, including financial history, employment status, and other personal information, to generate credit scores and risk assessments (Langenbucher, 2020). The accuracy and fairness of these assessments are critical, as they directly impact individuals' access to financial services and credit. In this sense, creditworthiness is a field that has attracted considerable attention concerning claims of algorithmic discrimination (Bellucci et al., 2010; Clarke and Rothenberg, 2018; Bicharra Garcia, 2023).

In the EU context, the recent Consumer Credit Directive 7 excludes certain data categories from the information that can be used to assess the creditworthiness of an individual, including special categories of personal data, as detailed in its Article 18(3). This exclusion aims to prevent discriminatory outcomes, as illustrated by the decision of the Finnish National Non-Discrimination and Equality Tribunal, which concluded that an algorithmic credit scoring system that based the refusal to grant a credit on the combination of characteristics such as gender, age and place of residence was discriminatory. 8

Although special categories of personal data cannot be used during the deployment of these AI systems, they may play a fundamental role in the training phase to detect and correct biases. Otherwise, we risk building an AI system around a picture of the individual that is not fully representative of the current societal landscape.
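To make this distinction concrete, the following is a minimal, purely illustrative sketch of how a special category of data (here, a hypothetical binary gender attribute on synthetic applicants) can be used to audit a credit model for disparate outcomes without ever being an input feature at deployment time. All names, data, weights and the approval threshold are invented for illustration; the fairness measure shown is the demographic parity gap, one of several metrics used in practice.

```python
# Hypothetical sketch: the protected attribute ("gender") is used only to
# audit decisions for bias, never as an input to the scoring function.

def approval_rate(decisions):
    """Share of positive (approval) decisions."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions, groups):
    """Absolute difference in approval rates between groups 'A' and 'B'."""
    a = [d for d, g in zip(decisions, groups) if g == "A"]
    b = [d for d, g in zip(decisions, groups) if g == "B"]
    return abs(approval_rate(a) - approval_rate(b))

def score(applicant):
    # Invented linear model: only non-protected features enter the score.
    return 0.5 * applicant["income"] + 0.5 * applicant["repayment_history"]

# Synthetic applicants; the protected attribute is kept only for auditing.
applicants = [
    {"income": 0.9, "repayment_history": 0.8, "gender": "A"},
    {"income": 0.4, "repayment_history": 0.9, "gender": "A"},
    {"income": 0.8, "repayment_history": 0.7, "gender": "B"},
    {"income": 0.3, "repayment_history": 0.4, "gender": "B"},
]

THRESHOLD = 0.6  # invented cut-off for granting credit
decisions = [1 if score(a) >= THRESHOLD else 0 for a in applicants]
groups = [a["gender"] for a in applicants]

gap = demographic_parity_gap(decisions, groups)
print(f"Demographic parity gap: {gap:.2f}")
```

A large gap would signal that the model, although formally blind to the protected attribute, produces unequal outcomes across groups (for instance through proxy variables), which is precisely the kind of bias the training-phase processing discussed above is meant to detect and correct.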
As noted by Langenbucher (2022), before the deployment of these AI systems it is necessary to train them; by doing so, AI developers create a picture of who is and who is not creditworthy at a given moment. If the societal configuration that supported that model changes, it is only reasonable to expect the model to be modified as well. If not, the AI system might be biased towards certain groups of individuals whose situation has changed. For example, it has been highlighted that women have been discriminated against because AI systems were trained on an ideal debtor who was a man (Kelley et al., 2023). As such, to avoid having the AI system treat certain people more unfavourably, it is necessary to debias it, even if that implies processing special categories of personal data.

4.2. Algorithmic credit scoring systems and the FiDAR proposal

One of the two regulatory instruments that will compose the European Financial Data Space is the FiDAR (now in the proposal stage). Expanding on the success of open banking,

7 Directive (EU) 2023/2225 of the European Parliament and of the Council of 18 October 2023 on credit agreements for consumers and repealing Directive 2008/48/EC, PE/22/2023/REV/1, OJ L, 2023/2225, 30.10.2023.
8 See, for a summary of the decision: https://www.yvtltk.fi/en/index/opinionsanddecisions/decisions.html#