Desafíos actuales de la Inteligencia Artificial
2.1. Opinions that deny the application of the DSA to Generative AI applications

A segment of legal literature summarily rejects the notion that Generative AI applications might fall under the DSA.

For example, while Hacker, Engel, and List acknowledge that the DSA might regulate AI-generated content on traditional social media platforms, they take a firm stance against applying the DSA to Generative AI applications themselves [16]. They recognise that the risk profile of Generative AI applications raises content misuse issues similar to those of traditional digital platforms, potentially elevating misinformation, disinformation, fake news, and hate speech to unprecedented levels. Nonetheless, they argue that the DSA, in its current form, is inadequate for regulating content produced by Generative AI.

Their primary argument is that Generative AI applications do not fit within any established intermediary category under the DSA. They quickly dismiss the possibility of classifying Generative AI tools as “mere conduit” or “caching” providers and focus on whether they could be considered “hosting” providers. However, they conclude that this is not feasible, as the legislative definition in art. 3(g) of the DSA confines hosting intermediaries to storing information provided by, and at the request of, service recipients. They argue that the relevant content is generated by the AI applications themselves, not by the service recipients [17].

In a follow-up paper, Hacker, Engel, and Mauer (replacing List) maintain their original stance, asserting that Generative AI applications cannot be regulated under the DSA because they do not fit into any of its intermediary categories [18]. They add that the DSA was not designed to address content produced via Generative AI applications [19].
They reiterate their core argument: Generative AI applications fall outside the DSA’s scope because they do not align with the definitions of the existing intermediaries. In this follow-up, they seek to reinforce their position by pointing to the CJEU decision in L’Oréal [20]. They assert that, as per the Court’s decision in L’Oréal, service providers lose DSA immunities if they “provide assistance” in terms of content management and presentation, thus leaving their neutral position.

[16] See HACKER, Philipp; ENGEL, Andreas; LIST, Theresa, “Understanding and Regulating ChatGPT, and Other Large Generative AI Models”, available at https://verfassungsblog.de/chatgpt/ (last access 30.07.2024).
[17] Ibid., where they note: “... The trick with LGAIMs, however, is that the relevant content is decidedly not provided by the user, but by the LGAIM itself, having been prompted by the user via the insertion of certain query terms (e.g., “write an essay about content moderation in EU law in a lawyerly style”)...”.
[18] See HACKER, Philipp; ENGEL, Andreas; MAUER, Marco, “Regulating ChatGPT and other Large Generative AI Models”, FAccT ’23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (2024), pp. 1112-1123.
[19] Ibid., p. 1118.
[20] Case C-324/09 L’Oréal SA and Others v eBay International AG and Others [2011] ECR I-6011, ECLI:EU:C:2011:474.