EU: AI Regulation Must Prohibit Social Scoring

Human Rights Watch

Last week, Human Rights Watch, La Quadrature du Net and EDRi shared a proposal with the Council of the European Union and the European Parliament to strengthen the regulation's prohibition on social scoring. Investigations in France, the Netherlands, Austria, Poland and Ireland have revealed that AI-based social scoring systems are disrupting people's access to social security support, compromising their privacy, and profiling them in discriminatory ways based on stereotypes about poverty. This is making it harder for people to afford housing, buy food, and make a living. The joint proposal urges the EU to adopt critical amendments to the regulation that would address these harms and halt the spread of AI-based social scoring systems.

Proposed amendments to the proposal for an EU AI Act - Social scoring ban

Although the AI Act proposals introduced by EU policymakers seek to restrict AI-based social scoring, their current wording could allow practices and systems that facilitate the spread of AI-based social scoring across the EU.

Current proposals would classify most AI systems used to determine and allocate public assistance benefits and services as "high-risk." But many of these systems are social scoring systems that draw on a wide range of personal and sensitive data to assess whether beneficiaries pose a fraud "risk" and should be investigated and ultimately sanctioned. The AI Act should ban these systems because they lead to disproportionate and detrimental treatment of people based on their socio-economic status, and unduly restrict their rights to social security, privacy, and non-discrimination.

The undersigned groups urge the Council of the European Union and the European Parliament to adopt the following amendments to the European Commission's proposal for an AI Act:

Recital 17

AI systems providing social scoring of natural persons for general purpose by public authorities or on their behalf that evaluate, classify, rate or score the trustworthiness or social standing of natural persons may lead to discriminatory outcomes and the exclusion of certain groups. They may violate the right to dignity and non-discrimination and the values of equality and justice. They can also infringe on the rights to social security and an adequate standard of living.

Such AI systems evaluate or classify the trustworthiness of natural persons based on multiple data points and time occurrences related to their social behaviour in multiple contexts or known or predicted personal or personality characteristics. The social score obtained from such AI systems may lead to the detrimental or unfavourable treatment of natural persons or whole groups thereof in social contexts, which are unrelated to the context in which the data was originally generated or collected or to a detrimental treatment that is disproportionate or unjustified to the gravity of their social behaviour. Such AI systems should be therefore prohibited.

Recital 37

Another area in which the use of AI systems deserves special consideration is the access to and enjoyment of certain essential private and public services, including healthcare services, and essential services, including but not limited to housing, electricity, heating/cooling and internet, and benefits necessary for people to fully participate in society or to improve one's standard of living. In particular, AI systems used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems, since they determine those persons' access to financial resources or essential services such as housing, electricity, and telecommunication services. AI systems used for this purpose may lead to discrimination of persons or groups and perpetuate historical patterns of discrimination, for example based on racial or ethnic origins, disabilities, age, sexual orientation, or create new forms of discriminatory impacts. Considering the very limited scale of the impact and the available alternatives on the market, it is appropriate to exempt AI systems for the purpose of creditworthiness assessment and credit scoring when put into service by small-scale providers for their own use. AI systems provided for by Union law for the purpose of detecting fraud in the offering of financial services should also be considered as high-risk under this Regulation.

Natural persons applying for or receiving public assistance benefits and services from public authorities are typically dependent on those benefits and services and in a vulnerable position in relation to the responsible authorities. If AI systems are used for determining whether such benefits and services should be denied, reduced, revoked or reclaimed by authorities, they may have a significant impact on persons' livelihood and may infringe their fundamental rights, such as the rights to social security, non-discrimination, human dignity or an effective remedy. Those systems should therefore be classified as high-risk. Nonetheless, this Regulation should not hamper the development and use of innovative approaches in the public administration, which would stand to benefit from a wider use of compliant and safe AI systems, provided that those systems do not entail a high risk to legal and natural persons. Similarly, AI systems intended to be used to make decisions or materially influence decisions on the eligibility of natural persons for health and life insurance also have a significant impact on persons' livelihood and infringe their fundamental rights such as by limiting access to healthcare or by perpetuating discrimination based on personal characteristics. These systems should therefore be prohibited if they involve the evaluation, classification, rating or scoring of the trustworthiness or social standing of natural persons which potentially lead to detrimental or unfavourable treatment or unnecessary or disproportionate restriction of their fundamental rights.

Finally, AI systems used to dispatch or establish priority in the dispatching of emergency first response services should also be classified as high-risk since they make decisions in very critical situations for the life and health of persons and their property.

Article 3

(2) 'provider' means a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge;

'development' of an AI system means the collection, labeling and analysis of data in connection with the training and fine tuning of the system, and any testing and trialing of the system prior to placing it on the market or putting it into service.

Article 5

1. The following artificial intelligence practices shall be prohibited:

[…]

(c) the development, placing on the market, putting into service or use of AI systems by public authorities or on their behalf for the evaluation or classification of the trustworthiness of natural persons or groups thereof over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score potentially leading to detrimental or unfavourable treatment of certain natural persons or whole groups, or unnecessary or disproportionate restriction of their fundamental rights.

Evaluations, classifications or scores covered by Article 5(1)(c) include but are not limited to those relating to the person or group's education, employment, housing, public assistance benefits, health, and socio-economic situation.

Such detrimental or unfavourable treatment includes either or both of the following:

(i) detrimental or unfavourable treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected;

(ii) detrimental or unfavourable treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behaviour or its gravity.

ANNEX III

HIGH-RISK AI SYSTEMS REFERRED TO IN ARTICLE 6(2)

[…]

5. Access to and enjoyment of essential private services and public services and benefits:

(a) AI systems intended to be used by public authorities or on behalf of public authorities to evaluate the eligibility of natural persons for public assistance benefits and services, to facilitate means-testing and other eligibility-related determinations, and to grant, reduce, revoke, or reclaim such benefits and services, provided that these systems are not prohibited under Article 5(1)(c);

(b) AI systems intended to be used for making decisions or materially influencing decisions on the eligibility of natural persons for health and life insurance, provided that these systems are not prohibited under Article 5(1)(c);

(c) AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems put into service by small scale providers for their own use or used for the purpose of detecting financial fraud;

(d) AI systems intended to classify emergency calls by natural persons or to be used to dispatch, or to establish priority in the dispatching of emergency first response services, including by police and law enforcement, firefighters and medical aid, as well as of emergency healthcare patient triage systems.

Justification

The proposed amendment to Article 5 addresses the unacceptable risks posed by automated, wide-ranging social scoring systems deployed by public administrations and private companies in Europe.

These systems are already operational in several countries. In France, La Quadrature du Net has repeatedly denounced and documented a scoring algorithm used by France's family benefits funds, known as CAF. CAF has entrusted an algorithm with the task of predicting which recipients of social benefits should be considered "untrustworthy" and subjected to further controls by its services. The system gives each recipient a score assessing the "risk" that he or she supposedly represents to the social assistance system. Constructed from the hundreds of data points that CAF holds on each claimant, this automated score is then used to select those who will be investigated, and then sanctioned.
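To make the mechanism concrete, the sketch below shows the generic pattern this paragraph describes: personal data points are reduced to numeric features, combined into a single "risk" score, and the highest-scoring recipients are queued for investigation. It is not CAF's actual algorithm, which is not public; every feature, weight, and quota here is a hypothetical stand-in.

```python
# Hypothetical sketch of a benefits "risk scoring" pipeline of the kind
# described above. This is NOT the CAF algorithm: the features, weights,
# and quota below are invented purely for illustration.
from dataclasses import dataclass

@dataclass
class Claimant:
    claimant_id: str
    months_unemployed: int    # socio-economic attributes like these are
    household_size: int       # what ties the score to poverty and family situation
    income_volatility: float  # 0.0 (stable income) .. 1.0 (highly irregular)
    file_updates: int         # frequent corrections are often read as "suspicious"

# Stand-ins for a trained model's coefficients (assumed values).
WEIGHTS = {
    "months_unemployed": 0.04,
    "household_size": 0.10,
    "income_volatility": 0.50,
    "file_updates": 0.08,
}

def risk_score(c: Claimant) -> float:
    """Collapse a claimant's personal data into a single 'fraud risk' number."""
    return (
        WEIGHTS["months_unemployed"] * c.months_unemployed
        + WEIGHTS["household_size"] * c.household_size
        + WEIGHTS["income_volatility"] * c.income_volatility
        + WEIGHTS["file_updates"] * c.file_updates
    )

def select_for_investigation(claimants, quota):
    """Rank all claimants by score and flag the top `quota` for controls."""
    ranked = sorted(claimants, key=risk_score, reverse=True)
    return [c.claimant_id for c in ranked[:quota]]

if __name__ == "__main__":
    population = [
        Claimant("A", months_unemployed=18, household_size=4,
                 income_volatility=0.8, file_updates=6),
        Claimant("B", months_unemployed=0, household_size=1,
                 income_volatility=0.1, file_updates=1),
    ]
    print(select_for_investigation(population, quota=1))  # -> ['A']
```

Even in this toy version, the score is driven entirely by socio-economic attributes, so the people selected for controls are, by construction, those in the most precarious situations.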

La Quadrature du Net has characterized this scoring algorithm as "a policy of institutional harassment" of people based on their socio-economic status. Although the system is a clear example of social control based on a police logic of generalized suspicion, with continuous sorting and evaluation of people's movements and activities, it is alarming that it would not be covered by the European Commission's or the Parliament's proposed social scoring bans under the AI Act.

CAF's scoring algorithm is part of a broader trend. In France, other public institutions, funds, and tax authorities are developing their own rating algorithms. In the Netherlands, SyRI, a now-defunct risk assessment tool developed by the Dutch government, tapped into employment and housing records, benefits information, personal debt reports and other sensitive data held by government agencies to flag people for fraud investigations. In Austria, the government uses an employment profiling algorithm that controls a person's ability to access job support services and replicates the discriminatory realities of the labor market. All of these systems create an unacceptable risk to people's rights to social security, privacy, and non-discrimination.

These examples underline the growing deployment of AI scoring systems involving the large-scale, unstructured, and automated linking of files pertaining to large groups of citizens, coupled with the processing of their personal data. Because these systems are born of mass data collection and discriminatory profiling, their harms cannot be effectively mitigated or prevented through procedural safeguards; as a result, they should be banned.

These AI systems unduly restrict people's access to social benefits, leading to violations of their right to social security. AI-based techniques that evaluate or classify individuals as trustworthy or risky do not have a place in a democratic society. As a number of MEPs have already stated in the context of the negotiations on the EP report on the draft AI Act, it is important to acknowledge that if the outcome of an automated AI evaluation is beneficial for one person, it is unfavorable to others. As a result, we should prohibit social scoring by AI in all circumstances, as it inherently creates harms and unfavorable treatment.

The proposed amendment to Annex III would ensure that AI systems that do not amount to social scoring but are used to evaluate public benefits, related public services, and health and life insurance benefits would still be classified as "high risk." These amendments would also classify all credit scoring systems as "high risk." These systems, such as Germany's SCHUFA scoring, draw on information that is plausibly connected to a person's finances - such as their history of unpaid bills, loans, and fines - to generate a score that estimates the person's likelihood of meeting their payment obligations. This score can in turn interfere with a person's ability to obtain a lease, a credit card, or an internet contract.
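For comparison, here is a minimal sketch of the generic credit-scoring pattern described above. It is not SCHUFA's proprietary model; the inputs, coefficients, and cut-off are assumptions chosen only to show how payment history becomes a score and how a single threshold then gates access to a lease or contract.

```python
# Hypothetical illustration of a generic credit score. This is NOT SCHUFA's
# model: the inputs, coefficients, and cut-off are assumed for illustration.
import math

def default_probability(unpaid_bills: int, open_loans: int, unpaid_fines: int) -> float:
    """Logistic estimate of the likelihood of missing future payments."""
    z = -3.0 + 0.9 * unpaid_bills + 0.3 * open_loans + 0.6 * unpaid_fines
    return 1.0 / (1.0 + math.exp(-z))

def credit_score(unpaid_bills: int, open_loans: int, unpaid_fines: int) -> int:
    """Rescale to a 0-1000 score where higher means 'more creditworthy'."""
    return round(1000 * (1.0 - default_probability(unpaid_bills, open_loans, unpaid_fines)))

def lease_approved(score: int, cutoff: int = 900) -> bool:
    """A single private cut-off decides access to a lease, credit card, or contract."""
    return score >= cutoff

if __name__ == "__main__":
    clean_history = credit_score(unpaid_bills=0, open_loans=1, unpaid_fines=0)
    rough_history = credit_score(unpaid_bills=3, open_loans=2, unpaid_fines=1)
    print(clean_history, lease_approved(clean_history))  # high score, approved
    print(rough_history, lease_approved(rough_history))  # low score, refused
```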

Signatories

La Quadrature du Net

Human Rights Watch

EDRi
