Desafíos actuales de la Inteligencia Artificial
Articles 9 to 15. In this context, Article 10 is dedicated to the data and data governance requirements for high-risk AI systems.

One of the concerns behind the use of AI is its potential discriminatory effect. In this regard, the literature has warned that machine learning techniques can deduce special categories of data from innocuous proxy variables, rejecting the idea of fairness through unawareness (Williams et al., 2018; Hoffmann et al., 2022). Consequently, data plays a crucial part in ensuring that AI systems do not cause such situations. It has thus been highlighted that, to effectively avoid discriminatory outputs in machine learning, protected attributes must be available during model training and evaluation in order to account for subtle correlations and to test and optimise fairness metrics (Deck et al., 2024). Protection against discrimination is explicitly included in several recitals of the AI Act, e.g. Recitals 31, 56, 57, 58, 59 and 67.

Given the conflict between fairness through unawareness and the potential of AI systems to cause discrimination, Article 10(5) AI Act allows for the processing of personal data that constitutes special categories of personal data under the GDPR.⁶ However, Article 10(5) does not provide carte blanche for the processing of these special categories of data; rather, it establishes certain conditions for this to occur. First, it requires that the detection and correction of bias cannot be achieved in any other way; in this sense, it requires a concrete necessity for such data. Second, the data in question are subject to limitations on their re-use and to state-of-the-art security and privacy-preserving measures. Third, the data are subject to measures ensuring that they are secured and protected, subject to suitable safeguards, including strict controls and the keeping of access records.
Fourth, the data are not to be transmitted, transferred or otherwise accessed by parties other than the entity addressing the bias in the AI system. Fifth, the data used are to be deleted once the bias has been detected and corrected, or at the end of the applicable retention period. Sixth and finally, the records of processing activities required under the GDPR must include a justification of the necessity of such processing and of why it was not possible to achieve the objective by processing other data.

One of the main questions emerging from this preliminary analysis of Article 10(5) is where AI developers will obtain access to these data in order to determine the possible existence of bias and, if so, to tackle the bias identified. While the Article does not answer this question, Recital 68 hints at the use of data available through data spaces. However, Recital 68 merely addresses access to such data and does not explain how to comply with the strict requirements set by Article 10(5). In this respect, several questions can be identified. Firstly, if information in a data space is used to mitigate biases, does this limit the possibility of transmitting, transferring or otherwise providing access to the data in the data space to other parties, as listed in Article 10(5)? Secondly, if data from the data spaces are used, the deletion obligation affects only the dataset created to

⁶ Since it goes beyond the scope of this paper, we will merely mention that this provision has been discussed in the literature with regard to whether it creates a new legal basis for the processing of special categories of personal data beyond those contemplated under Article 9(2) GDPR (van Bekkum and Zuiderveen Borgesius, 2022). As an argument against the creation of a new exception, Recital 70 of the AI Act adds that this processing can be done '(…) as a matter of substantial public interest within the meaning of Art. 9(2)(g) [GDPR]'.
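The fairness-metric testing mentioned above, which requires protected attributes to be available at evaluation time, can be illustrated with a minimal sketch. This is a simplified example, not an implementation prescribed by the AI Act or the cited literature: the metric shown (demographic parity difference), the function name and all data values are hypothetical illustrations.

```python
# Minimal sketch: testing a fairness metric requires access to the
# protected attribute, which is why Article 10(5)-style access matters.
# The metric, function name and data below are hypothetical examples.

def demographic_parity_difference(predictions, protected):
    """Absolute difference in positive-prediction rates between the
    protected group (protected == 1) and the remaining group."""
    group_a = [p for p, a in zip(predictions, protected) if a == 1]
    group_b = [p for p, a in zip(predictions, protected) if a == 0]
    rate_a = sum(group_a) / len(group_a)
    rate_b = sum(group_b) / len(group_b)
    return abs(rate_a - rate_b)

# Hypothetical model outputs (1 = favourable decision) and group labels.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = [1, 1, 1, 1, 0, 0, 0, 0]
gap = demographic_parity_difference(preds, groups)
print(f"demographic parity difference: {gap:.2f}")  # 0.75 - 0.25 = 0.50
```

Without the `groups` column — i.e. under fairness through unawareness — this disparity could not be measured at all, which is the practical point the literature cited above makes.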