Current Challenges of Artificial Intelligence

They argue that Generative AI applications exceed this threshold, making them ineligible for inclusion in the DSA 21. It is important to note that, in both publications, the authors do not deny the usefulness of the DSA’s content moderation tools; in their regulatory recommendations, they essentially advocate for a DSA-style content moderation system for Generative AI applications. In a similar vein, Lilian Edwards, Igor Szpotakowski, Gabriele Cifrodelli, Joséphine Sangaré, and James Stewart see little prospect for the application of the DSA to Generative AI applications 22. Although the starting point of their paper is not the problem of content moderation but a very insightful analysis of the terms and conditions of Generative AI providers, they take a more holistic approach to the regulation of such applications. They are, in general, sceptical of a phenomenon they call “private ordering”. With this eloquent term, they describe what they see as the effort of Generative AI providers to present themselves as neutral intermediaries, shifting all the risks of their operations to their end users via an elaborate crafting of their terms and conditions and other self-regulatory tools. As they neatly summarise their key findings: “Model providers are seeking all the benefits of neutrality in terms of deferring liability and responsibility to users, while still gaining all the advantages of their position in terms of profit and power. This suggestion is bolstered by the way all or most of the providers in our sample behaved as if they were indeed platforms under the ECD (now DSA) and the DMCA in terms of content moderation; accepting DMCA notices for takedown, removing repeat infringers etc, as if this would provide them with safe harbours like any other ‘platform’” 23.
It is in this context that they review whether the private ordering devised by Generative AI applications can find support under existing law and, more specifically, the DSA. On that front, they deny any such possibility. They assert that the DSA still aligns with the original policy framework of the Electronic Commerce Directive, which assumed platforms merely stored and shared user-generated content, granting them immunity to avoid unlimited liability for the actions of their users 24. Initially, EU electronic commerce intermediaries were simply “messengers” facilitating information exchange and were only liable when alerted to illegal use of their services. However, like Hacker et al., they argue

21 HACKER, Philipp; ENGEL, Andreas; MAUER, Marco, “Regulating ChatGPT and other Large Generative AI Models”, FAccT ’23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, (2024), p. 1118: “... To the contrary, CJEU jurisprudence shows that even platforms merely storing user-generated content may easily lose their status as hosting providers, and concomitant liability privileges under the DSA (and its predecessor in this respect, the E-Commerce Directive), if they ‘provide assistance’ and thus leave their ‘neutral position’, which may even mean merely promoting user-generated content (CJEU, Case C-324/09, L’Oréal, para. 116). A fortiori, systems generating the content themselves cannot reasonably be qualified as hosting service providers. Hence, the DSA does not apply...”.

22 See EDWARDS, Lilian; SZPOTAKOWSKI, Igor; CIFRODELLI, Gabriele; SANGARÉ, Joséphine; STEWART, James, “Private Ordering and Generative AI: What Can We Learn From Model Terms and Conditions?”, CREATe Working Paper (2024), available at https://zenodo.org/records/11276105 (last access 30.07.2024).

23 Ibid., p. 20.

24 Ibid.