Desafíos actuales de la Inteligencia Artificial
this does not apply to Generative AI tools, which create content rather than merely hosting user content25. They question the inclusion of Generative AI under the immunity regime, noting that these tools are not just passive victims of the risks of user-created content but are themselves content creators26. Despite this, they recognise that the DSA provides valuable tools for transparency and content moderation and suggest it should be amended to address Generative AI applications27.

2.2. Intermediate positions: Generative AI tools as search engines

Other commentators take a more intermediate approach. While they acknowledge the difficulty of classifying Generative AI applications under the DSA and their imperfect fit within the existing intermediary categories, they search for a more flexible solution by exploring whether Generative AI tools could be regulated as search engines.

Sophie Stalla-Bourdillon views the intermediary categories under the DSA as flexible concepts28. She uses search engines as an example, noting that they also do not fit perfectly within the three basic intermediary types (mere conduit, caching, hosting) but are still regulated by the DSA. She explores the possibility of classifying Generative AI applications as hosting providers but questions whether, under art. 3(g) of the DSA, it is the users or the AI that provides the content29. She then focuses on search engines, drawing

25 Ibid, p. 21.
26 Ibid, p. 21: “... In policy terms, relieving a model provider of liability is inappropriate because they are not a mere hapless victim of risks deriving from user-created content, but the creator of the content themselves by allowing users to query the model …”.
27 Ibid, p. 22: “... The problem then is that the DSA does not, as the ECD did, just provide liability exemptions; it also demands positive steps of hosts, platforms and VLOPs of varying natures. And these are steps that, judging by our research above, are exactly what are needed to protect the B2C users of the generative AI sector … These provisions would be extraordinarily useful in meeting the procedural vices identified above in model T&Cs and would transform their generally hostile and unfair governance approach to disempowered users. There seems no good policy reason why these rules should not be applied to foundation models. At present, model providers have their cake and eat it; they assert exemption from liability by passing risk via their terms and conditions to users, but evade the new positive obligations of the DSA. This is unjust. We suggest therefore that the DSA is already not fit for purpose and should be amended to bring foundation models within its scope as soon as possible …”.
28 STALLA-BOURDILLON, Sophie, “What if ChatGPT was much more than a chatbox? What if LLM-as-a-service was a search engine?”, available at https://peepbeep.blog/2023/04/03/what-if-chatgpt-was-much-more-than-a-chatbox-what-if-llm-as-a-service-was-a-search-engine/ (last access 30.07.2024).
29 Unlike Hacker et al. and Edwards et al., nonetheless, she views this issue as more open to debate, noting: “... Although the category of hosting services is probably the closest one to LLMaaS, it is not a perfect fit either. With LLMaaS, the stored information, i.e., the model output, is not [strictly speaking] provided [could provided also mean triggered?] by the recipient of the service, although its storage is performed at the request of the recipient of the service. With this said, the model input is provided by the recipient of the service and stored at the request of the recipient of the service. Would considering the model input sufficient to make LLMaaS a