Desafíos actuales de la Inteligencia Artificial

Enforcing AI regulation in France: a legal framework beyond the AI Act

[…] March 2023. 59 The suspected violations include illicit personal data collection via web scraping without user consent or proper legal basis.

Privacy concerns have also arisen regarding the training of AI models based on user-generated content on social media. Meta's announcement in May 2024 that it was updating its privacy policy to allow for the training of its AI models with user data sparked controversy over GDPR compliance. In its updated policy, Meta claimed that it had legitimate interests to train its AI models on the content users generated on Facebook and Instagram, including personal data, thus justifying bypassing user consent for this type of data processing. However, following several GDPR complaints – including before the CNIL – Meta postponed its AI features in Europe. 60 Putting pressure on regulators, Meta cited concerns over innovation and competitiveness on the continent. 61

Further privacy issues relate to the use of generative AI systems when they have already been deployed. In its investigations into OpenAI, the Italian data protection authority is concerned that ChatGPT's lack of an age verification system exposes minors to inappropriate content. 62 In April 2024, the Austrian data protection authority received a complaint directed at OpenAI for the inherent inaccuracies and lack of transparency in data generated by ChatGPT. 63 Despite requests for data access and rectification, OpenAI contends that it cannot rectify generated data, thus potentially infringing upon GDPR provisions. 64

These recent developments in regulatory scrutiny over generative AI highlight the need for a common European approach in AI regulation. The CNIL and other data protection authorities have indeed started cooperating in this space, both for the adoption of legislation 65 and in enforcement actions. 66

59 Garante per la protezione dei dati personali, Provvedimento del 30 marzo 2023 [9870832].
60 META, “Building AI Technology for Europeans in a Transparent and Responsible Way”, 10 June 2024, https://about.fb.com/news/2024/06/building-ai-technology-for-europeans-in-a-transparent-and-responsible-way/.

61 Ibid.

62 Other concerns relate to ChatGPT’s tendency to produce false information (“hallucinating”), particularly regarding individuals, raising significant implications for GDPR’s obligations on personal data accuracy and user rights in this regard.

63 NOYB, “ChatGPT provides false information about people, and OpenAI can’t correct it”, 29 April 2024, https://noyb.eu/en/chatgpt-provides-false-information-about-people-and-openai-cant-correct-it.

64 Ibid.

65 In 2018, the CNIL already highlighted that it was actively involved in shaping international guidelines and ethical standards for AI development (CNIL, “Rapport d’activité 2018”, 15 April 2019, https://www.cnil.fr/sites/cnil/files/atoms/files/cnil-39e_rapport_annuel_2018.pdf, p. 31). A few years later, it was involved in shaping the AI Act based on the European Commission’s proposal of 2021. Indeed, the authority collaborated with its European peers within the European Data Protection Board (EDPB) to assess the proposal and make recommendations. This cooperation resulted in the publication of an opinion in which data protection authorities highlighted the important overlaps between AI and data protection regulation and the challenges in aligning the AI Act with the GDPR (EDPB-EDPS, “Joint Opinion 5/2021 on the proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)”, 18 June 2021, https://www.edpb.europa.eu/system/files/2021-06/edpb-edps_joint_opinion_ai_regulation_en.pdf).

66 Given the many privacy issues arising from the development and deployment of generative AI systems, various data protection authorities within the EDPB have established a taskforce dedicated to interpreting the GDPR