Frank Pasquale: “Platforms treat workers like data, not people”

Artificial Intelligence (AI) is already part of our lives and affects us in ways we cannot imagine. Frank Pasquale is a professor at Brooklyn Law School, a member of the United States National Artificial Intelligence Advisory Committee, and an expert in algorithms who has been researching these systems for decades.

Author of the books ‘The Black Box Society’ and ‘New Laws of Robotics’, the academic closed the series of debates on the technological future this Tuesday with a lecture organized by the Observatori Social de Fundació ”la Caixa” at CaixaForum Macaya.

Public administrations use algorithms to determine subsidies, and financial companies use them to set interest rates. Are these systems amplifying discrimination against the poorest?

Yes, in many cases there is a profound amplification of discrimination. There is evidence that in the public sector the automation of certain procedures has had a negative impact. The private sector raises even more concerns, because companies can know so many private things about people, such as their health or their salary, and that can cause discrimination.

Algorithms draw conclusions from data about past behaviors. This means that if, for example, the classes with the worst socioeconomic conditions statistically tend to have more health problems, the AI used by health insurers penalizes them. How can that be considered progress?

Exactly. The problem with many algorithms is that they repeat the patterns of the past and project them into the future. These systems also present themselves as scientific methods when they are really historical: they are based so deeply on past data that they repeat it. This penalization of certain groups is a common practice in policing, which uses racial profiles that carry class biases. It is also seen in the world of finance. There is a lot of talk about inclusion, but certain people can be included in a predatory or subservient way. A bank can identify customers who pay late-payment fees and target them, which directly harms those people.

“Google's AI is not conscious, but it knows how to find the right words to fake it”

An engineer at Google claimed that the company's AI system was experiencing “new feelings”. Is it likely that a computer system will begin to have consciousness?

I have been following this debate for some time, and it is something that cannot be scientifically proven; it is a human perception. The problem is that people who believe AI can have feelings and other human attributes are mistaking a perception for evidence. The Google system is not conscious, but it uses all the words it can to emulate having consciousness. This is a faking of human emotions rather than the emotions themselves.

In 2015 you published ‘The Black Box Society’. Have there been any improvements in the oversight of algorithms since then?

The world can be divided into three blocks. In Europe the concept behind the General Data Protection Regulation (GDPR) is extremely strong and wise, but enforcement of the law has been slow and not very effective. In the United States there are some efforts to hold algorithms accountable. And in China there are many efforts to regulate Big Tech, but that regulation serves to preserve the power and control of the government, not to protect privacy.

How do you rate the AI regulation that the European Union is expected to approve this summer?

I think it is a wise, forward-looking law in that it categorizes AI systems according to their risk. Even so, I think not enough money is being put into ensuring compliance, and these systems should be risk-assessed and licensed before they are deployed, not after.

However, that law excludes military uses of AI, which is already being deployed on European borders. Is there a risk of it targeting vulnerable groups like migrants?

Yes, using these systems on migrants, or to distinguish legitimate refugees, carries very great risks, and that should be included in the law the EU is preparing. There is so much evidence of discrimination at borders that it is hard to imagine AI performing such a task without supervision.

It has been shown that facial recognition systems fail especially with darker skin tones.

Spain is preparing to use facial recognition on its border with Morocco. What dangers does this technology entail?

There are three great dangers. The first is that inaccurate data helps to normalize repression, such as when systems fail to recognize women after they have cut their hair. The second is discrimination, such as when they misidentify or fail to recognize dark-skinned or trans people. And the third has to do with the alienation produced by being identified wherever you go, and the political power that this entails. People deserve privacy even in public, yet there are already airlines that manage boarding by identifying passengers with facial recognition. And even so, it doesn't seem like the most efficient option.

Russia has reportedly used smart drones in Ukraine. Is there a need for a total ban on autonomous weapons, or for regulation that allows their defensive use?

International regulation would have to be reached to make the use of these weapons safe. However, the Russian invasion of Ukraine has heightened fears that, in a wartime context, over-regulating military AI could disadvantage countries that comply with the rules and give an advantage to those that don't.

How can there be an international agreement if countries like the US block it because they are investing in the development of military AI?

The problem is that the EU cannot get a credible commitment from Russia that it won't invest in those weapons, just as the US cannot obtain a credible commitment that China will do the same. That lack of trust works in both directions and explains why the arms race is already under way. The best thing would be to reach a consensus like the one established against the proliferation of nuclear weapons.

We see the US and China competing in a race for AI hegemony…

The great powers must agree to a de-escalation and stop investing in weapons of mass destruction in order to deal with climate change, inflation, scarcity and the welfare state. It is terrifying to think not only of the damage that killer robots can cause, but also of the resources they absorb that could go to other things.

Automation is affecting the world of work. Have robots come to steal our jobs or to complement us?

They will not cause a jobless future, but they will change the type of work, and that can mean great job opportunities. Rather than worrying about whether robots will end many people's jobs, I think the key is to focus on how to make the transition to a more productive economy.

Algorithmic management is having a growing impact on workers. Last week, the Spanish Government presented a measure to help employees demand greater transparency about those algorithms.

Yes, it is a major change in which there is greater surveillance, and that is of great concern. The first step is to give more power to workers. The second is to empower unions and civil society groups to help workers better understand how those systems work.

“The most advanced and modern business model is a return to the piecework of the 18th century”

Is platform capitalism causing us to accept a labor model that, in the name of innovation, treats workers like machines?

It is. The logic of the platform economy is to treat workers not as people but as streams of data, as entities that maximize profits for the company by doing the greatest volume of work in the cheapest way and in the shortest time possible. One way to do this is to train them to be more responsive to the algorithms, and there are many examples of such manipulation. We are seeing how the most advanced and modern business model represents a return to the piecework of the 18th century. Phrases such as ‘labor flexibility’ obscure this; what matters is to ensure that there is dignity at work and autonomy for the worker.
