Current Challenges of Artificial Intelligence
Generative AI content misuse and the DSA

Indeed, over the subsequent years, the proliferation of AI-generated content has expanded significantly 6, both in terms of the diversity of its applications 7 and in terms of the sophistication 8 of the techniques employed 9. It is evident that malicious actors could employ the same techniques to disseminate fake, misleading, or outright false information to the public. Such incidents have already been recorded. For example, in the initial weeks following the outbreak of the conflict between Russia and Ukraine, a deepfake video surfaced featuring Ukraine’s President, Volodymyr Zelenskyy 10. In this manipulated footage, he appeared to concede that Ukraine had been defeated and urged Ukrainian soldiers to lay down their weapons. More recently, amidst an increasingly polarised political climate in the West, various social media platforms have been inundated with AI-generated photographs purportedly depicting the dramatic pursuit and arrest of former President and current presidential candidate Donald Trump 11.

6 For an overview and a timeline of the early period of AI-generated content see Homeland Security, “Increasing Threats of Deepfake Identities”, available at https://www.dhs.gov/sites/default/files/publications/increasing_threats_of_deepfake_identities_0.pdf (last access 30.07.2024).

7 For instance, in 2018, Jordan Peele, in collaboration with Buzzfeed, utilised the likeness of former President of the United States, Barack Obama, to create a public service announcement (PSA) aimed at highlighting the issue of untrustworthy news sources online. This was a particularly timely experiment given the emerging threat posed by deepfakes. Peele’s video was an exemplary demonstration of the so-called lip-sync technique, which involves several steps to materialise. At the conclusion of his experiment, Jordan Peele revealed his own face while manipulating the likeness of Barack Obama. This move was intended to demystify the process and to alert internet users to this potential threat. For an overview of Jordan Peele’s deepfake project see SILVERMAN, Craig, “How To Spot A Deepfake Like The Barack Obama–Jordan Peele Video”, available at https://www.buzzfeed.com/craigsilverman/obama-jordan-peele-deepfake-video-debunk-buzzfeed (last access 30.07.2024) and ROMANO, Aja, “Jordan Peele’s simulated Obama PSA is a double-edged warning against fake news”, available at https://www.vox.com/2018/4/18/17252410/jordan-peele-obama-deepfake-buzzfeed (last access 30.07.2024).

8 Another example utilising a more sophisticated technology is the deepfake video of Richard Nixon delivering a fake moon landing speech. Beginning in 2019, media artists Francesca Panetta and Halsey Burgund, affiliated with the Massachusetts Institute of Technology, embarked on a collaborative project with two artificial intelligence companies, Canny AI and Respeecher. Their ambitious endeavour aimed to create a posthumous deepfake video. The resulting synthetic footage portrays former President Richard Nixon delivering a speech that he had never intended to give, a full fifty years after the Apollo 11 mission. In this case, the creators deployed the more sophisticated technique called “puppet deepfake” or “puppet master”, a process by virtue of which head movements and facial expressions are transferred in real time to the likeness of another individual. For a more detailed outlook on this incident see DELVISCIO, Jeffery, “A Nixon Deepfake, a ‘Moon Disaster’ Speech and an Information Ecosystem at Risk”, available at https://www.scientificamerican.com/video/a-nixon-deepfake-a-moon-disaster-speech-and-an-information-ecosystem-at-risk1/ (last access 30.07.2024).

9 For a detailed analysis of deepfake technology, techniques, potential use cases and detection see FARID, Hany, “Creating, Using, Misusing, and Detecting Deep Fakes”, Journal of Online Trust and Safety, (2022) 1(4), pp. 1-33.

10 For this particular incident, but also for the deployment of deepfakes in warfare more generally, see TWOMEY, John Joseph; LINEHAN, Conor; MURPHY, Gillian, “Deepfakes in warfare: new concerns emerge from their use around the Russian invasion of Ukraine”, available at https://theconversation.com/deepfakes-in-warfare-new-concerns-emerge-from-their-use-around-the-russian-invasion-of-ukraine-216393 (last access 30.07.2024).

11 For the wider dimensions of this incident see GARBER, Megan, “The Trump AI Deepfakes Had an Unintended Side Effect”, available at https://www.theatlantic.com/culture/archive/2023/03/fake-trump-arrest-images-ai-generated-deepfakes/673510/ (last access 30.07.2024).