Current challenges of Artificial Intelligence

High-risk artificial intelligence (AI) systems assessing social security eligibility across 259 nationalities have left thousands with minimal or no benefits. The opacity of how these algorithms actually work has added a layer of distress for those affected, who had no way of finding out. It is yet another example of government drifting away from its core premise, the social protection function, in which delivering an end-to-end solution seems to matter more than getting the processes right.

In Brazil, a project called "Meu INSS" (My Social Security), part of the country's goal of digitising all public services, showed the downsides of heavy automation. The project drove a spike in automatic benefit denials: over 869,000 applications were rejected by machine last year, more than double the previous figure. These rising rejection rates, often based on inflexible or erroneous eligibility criteria and compounded by wider digital divides in society, illustrate the injustice that automation can engender. According to United Nations Development Programme data, 25% of Brazil's inhabitants live without access to the internet.

Similar cases are emerging in these countries, underlining a trend of the state withdrawing from the provision of social protection. While automation assists in this task, it often places the burden on the most vulnerable to navigate difficult and opaque systems designed to force people to prove their eligibility for benefits. This shift of responsibility can slow the resolution of wrongful denials, as systems prioritise process correctness over individual needs. In addition, the consequences of corrupted databases cannot be overstated: as the Australian and Brazilian cases show, AI systems acting on incorrect or insufficient data can produce damaging and unfair outcomes.
Social protection thus risks becoming an exact science dependent on AI, when it should be a social science attentive to human needs. These systems were operating long before the European regulation requiring a fundamental rights impact assessment was published. Such assessments could have prevented some of these harms, had principles such as fairness, transparency and human dignity been applied before AI systems found their way into social protection.

As shown in the introduction, the European regulation imposes a high-risk classification on systems responsible for implementing rights to social security and social assistance benefits, mainly because they are not only gateways to public assistance but also supports for people facing situations of social risk: old age, pregnancy, illness, accidents at work, unemployment, homelessness. The aim of these systems is to implement legislative decisions (since the social risk to be protected is provided for by law) in highly critical situations that affect the life, health and, above all, the livelihood of people in vulnerable circumstances. However, without explicitly drawing this specific distinction, Annex III of the European regulation states more generally that these systems are high risk, while requiring that such deployments be monitored so their operation can be reviewed. As we can see, the European regulation on AI is indeed comprehensive and aims to strictly regulate AI systems determining eligibility for state-supported social services and benefits, which have been classified and treated as high-risk AI systems. However, it is clear from the detailed requirements that the emphasis lies on compliance with the process, with other relevant issues
