Desafíos actuales de la Inteligencia Artificial
While the examples mentioned above primarily involve AI manipulation orchestrated by organised experts, requiring a certain level of technical expertise and access to substantial resources, the increasing availability of Generative AI applications to a broader spectrum of internet users significantly lowers the barrier to entry. This expanding accessibility paves the way for a more extensive deployment of synthetic content by individuals who may lack considerable technical skills or resources. Consequently, the accessibility of these user-friendly AI tools "democratises" the creation of synthetic media, enabling a vast array of users to generate and disseminate manipulated content with relative ease. Recent research confirms that the availability of Generative AI tools and their use by non-institutional, average users should not be underestimated; on the contrary, the diffusion of such productivity tools enhances the potential for the production and distribution of disinformation and of fake and misleading content 12.

The problem of AI-driven content misuse is not solely related to human geographies and tool accessibility. It also concerns the functioning of democratic institutions and the way users engage with digital information. As already pointed out in the literature 13, the risks posed by AI-powered misuse of content can be clustered into four major categories:

a) Manipulation of elections. Deepfakes can distort democratic processes by spreading false material to influence election outcomes. Both foreign states and individuals with basic technical skills pose a risk of deploying such deepfakes.

b) Exacerbation of social divisions. In a politically charged environment, deepfakes can deepen social divides by presenting manipulated videos that reinforce polarising views on issues such as economic inequality, race, and sexuality. This can lead to increased societal discord and, in some cases, incite harmful actions.

c) Erosion of trust in institutions. Deepfakes can undermine trust in public institutions by depicting fake scenarios of misconduct by officials, such as police brutality or judicial corruption. This erosion of trust poses significant risks to democracy and public safety.

d) Undermining of journalism and user engagement with media. The growing difficulty of distinguishing real from fake content undermines trust in the media. Deepfakes make it harder for journalists to verify information quickly, leading to public scepticism and potentially causing news outlets to hesitate to publish stories. Lesser-known deepfakes that do not attract immediate attention can still cause long-term harm by subtly influencing public opinion.

This new mode of engagement with digital applications, and the risks associated with it, raises several critical issues concerning content supervision, curation, moderation, and control.

12 See HASSOUN, Amelia; BORENSTEIN, Gabrielle; OSBORN, Katy; McAULIFFE, Jacob; GOLDBERG, Beth, "Sowing 'seeds of doubt': Cottage industries of election and medical misinformation in Brazil and the United States", New Media & Society, 0(0) (2024) (Online First, https://journals.sagepub.com/doi/10.1177/14614448241255379, last access 30.07.2024).

13 WALDEMARSSON, Christoffer, "Disinformation, Deepfakes & Democracy", available at https://www.allianceofdemocracies.org/wp-content/uploads/2020/04/Disinformation-Deepfakes-Democracy-Waldemarsson-2020.pdf (last access 30.07.2024), pp. 10-11.