AI under special surveillance. Artificial intelligence and human rights

Given the latest developments in artificial intelligence, it’s time to ask an important ethical question: how should we design and implement AI systems so that they truly serve people, protect their rights and strengthen the principles of democracy?


Article 5 of the AI Act, which bans certain AI practices, became applicable on February 2, 2025. These practices include manipulating people, exploiting their vulnerabilities, social scoring, predicting crimes based on profiling, creating biometric databases, analyzing emotions in workplaces and schools, categorizing people based on sensitive traits, and real-time remote biometric identification in public spaces, subject to narrow exceptions. Is this enough in terms of human rights, including the rights to dignity, equal treatment, privacy, freedom and a fair trial, and the right to protect particularly vulnerable persons?

Subliminal or manipulative techniques

Research shows that content generated by large language models (LLMs) is just as persuasive as content created by humans. Generative AI is often used to influence public opinion by distorting facts, impersonating others or evoking negative emotions such as fear or anger. The internet is awash with fake news, for example about presidential elections, the effectiveness of vaccines or actions taken by governments. AI model developers and system creators make effective use of tried-and-true elements of traditional persuasion. Additionally, these systems often exhibit cognitive biases similar to humans’, which can lead to discrimination and perpetuate harmful stereotypes.

Article 5(1)(a) of the AI Act introduces a ban on using subliminal and manipulative techniques that impair the ability to make informed decisions and can cause significant harm. The ban does not, however, cover deepfakes or content manipulation as such, and it excludes situations in which no significant harm is caused to a person or group of people. The concept of “significant harm” is difficult to define, and assessing the seriousness of harm depends on various factors such as context, intentions, the scale of impact and the victim’s susceptibility to specific manipulative actions. The lack of clear criteria in the AI Act in this area can lead to difficulties in interpreting and enforcing the regulations, leaving room for subjective judgments in specific cases.

Subliminal techniques are always a form of manipulation, which entails the risk of taking advantage of someone without their consent and awareness. In such cases, the extent of harm shouldn’t matter. Actions like this go against human rights standards, which is why in many countries, for example in Poland, subliminal advertising is prohibited. Unfortunately, software known as “brain spyware” (connecting the brain to a computer) is also being used increasingly often, albeit for now only in the context of scientific experiments. Such technologies can be used to gather private information based on the user’s brain reactions.

Taking advantage of vulnerable groups

Mustafa Suleyman, co-author of the popular book The Coming Wave: Technology, Power, and the Twenty-first Century’s Greatest Dilemma, believes that artificial intelligence will be able to exploit not only the loopholes of financial, legal or communication systems, but also the weaknesses of the human psyche and our biases. Drawing on historical data, algorithms can reinforce discrimination as well as create new forms of exclusion.

The ban established in Article 5(1)(b) of the AI Act applies to AI systems that exploit vulnerabilities related to the age, disability or socio-economic situation of a person or group of people, where the objective or effect is to materially distort their behavior in a way that causes or is reasonably likely to cause significant harm to them or to other individuals.

Using AI to exploit someone’s vulnerabilities based on their mental capabilities, physical disabilities or age should be prohibited because it’s immoral and violates basic human rights, regardless of whether any harm results from it.

Dangers related to exploiting weaknesses don’t only affect vulnerable groups. Depending on many factors, such as quantity, intensity, frequency and quality of exposure to stimuli, everyone is potentially susceptible to manipulation and exploitation, especially consumers and voters.

Scoring or categorization of people

Social scoring systems, such as the Chinese Social Credit System, illustrate the risks of using AI to classify people based on their behavior. Initially, this system was supposed to support the development of the banking sector and build trust in financial institutions, but over time it turned into a tool of social control that affects citizens’ lives through sanctions and privileges based on assessments of their behavior. Such solutions raise serious concerns about violations of basic human rights, including the rights to dignity, freedom and equal treatment.

In response to these threats, Article 5(1)(c) of the AI Act prohibits the use of AI systems for scoring or categorizing individuals where this leads to discriminatory or disproportionate treatment. The effectiveness of this ban, however, depends on how it is interpreted and whether it can be enforced. It is crucial to precisely define what constitutes “unfair treatment” and to identify the contexts in which assessments are permissible.

From the perspective of human rights protection, the ban contained in this provision should be strictly enforced and should be interpreted in a way that ensures the full protection of citizens. Every form of social scoring, regardless of the purpose, should be treated with the utmost caution. It should also undergo regular audits. Monitoring and control mechanisms, as well as transparency and accountability in the assessment and categorization process, are essential for AI technology to support human actions without creating mechanisms that lead to “digital totalitarianism.”

Predicting crime

The EU has banned the use of AI to predict the risk of committing a crime based solely on profiling or assessments of personality traits. A technology advanced enough to predict such risks brings to mind the vision from Philip K. Dick’s Minority Report. This ban, laid down in Article 5(1)(d) of the AI Act, reflects the aim of protecting basic human rights.

Predictive systems can bring potential benefits, such as preventing crimes or protecting victims, but they carry a risk of mistakes. Errors can lead to stigmatizing individuals as “potential criminals”, violating their dignity and the presumption of innocence. The prediction itself can become a form of punishment without due process. There is also a risk of abuse related to access to such systems, which raises the question: who watches the watchmen? The ban on these kinds of AI systems underlines the need to limit technologies that may threaten the principle of equality before the law and the right to a defence.

Databases and face recognition

AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage are also prohibited, under Article 5(1)(e) of the AI Act. This provision aims to prevent the spread of a mass surveillance culture and practices that violate basic human rights, with a special focus on the right to privacy. It is a response to concerns about uncontrolled data collection and privacy abuses such as those involving Clearview AI, which was fined 30.5 million euros in September 2024 by the Dutch data protection authority for creating an “illegal database.”

This regulation highlights the importance of protecting fundamental rights also in the context of CCTV systems. These systems, based on networks of cameras transmitting signals to monitors, are often used for surveillance and security purposes. However, they should not be used in a way that violates the privacy of the individuals recorded in the footage. This ban strengthens the regulations contained in the EU General Data Protection Regulation (GDPR).

Emotion inference

The clear ban on using AI systems to infer a person’s emotions, introduced by Article 5(1)(f) of the AI Act, applies to workplaces and educational institutions, except in cases justified by medical or safety reasons. This ban is aimed at protecting the privacy and mental integrity of individuals in these areas, as well as preventing discrimination, stigmatization and violations of basic rights.

In practice, this means that AI systems will not be allowed to be used to recognize emotions in workplaces and educational institutions, even with the consent of the person concerned, unless there is a significant justification related to protecting their health or safety. This could mean that it won’t be possible to analyze a job candidate’s emotions during an interview, and that parents won’t get a sneak peek into the classroom to learn about their children’s stress levels during a dictation test.

Article 5(1)(g) of the AI Act forbids categorizing individuals based on their biometric data in order to infer their race, political views, trade union membership, religious or philosophical beliefs, sex life or sexual orientation. This ban is crucial to prevent discriminatory practices that could affect decisions in areas such as employment or housing. Exceptions include lawful labeling of biometric data, for example during border controls or in identity documents. This exception can be seen as an attempt to balance the protection of individual rights with practical needs, especially in the context of public safety and law enforcement activities.

Remote biometric identification

In 2020, Robert Williams was arrested in the US based on a mistaken facial recognition match from surveillance footage. Another case is that of a pregnant woman arrested in 2023 even though the perpetrator captured in the photo wasn’t pregnant. The preventive mechanisms that should avert such errors failed. In real-time identification systems, where decisions have to be made in a flash, the risk of mistakes and their tragic consequences is even greater.

Facial recognition systems in public spaces are among the most invasive technologies and threaten basic human rights. They can lead to erosion of privacy through a sense of constant surveillance, mistaken identifications resulting in wrongful accusations, as well as excessive surveillance limiting freedom of assembly and other rights of a democratic society.

To counter these threats, Article 5(1)(h) of the AI Act allows real-time remote biometric identification to be used only in “strictly necessary” situations, such as searching for missing persons, preventing serious threats (like terrorist attacks) or prosecuting the most serious offences.

Although these goals are understandable, there is a lack of precise control mechanisms that would ensure the proportionality of such applications. The exceptions should be applied only in special circumstances, respecting the principle of proportionality and under strict supervision. Moreover, appropriate national regulations will be necessary for their application.

Towards ethics

The purpose of these bans is to protect human rights, such as dignity, privacy, equal treatment and freedom, while also limiting the risk of abuse when exceptions are applied. These exceptions, although justified in some respects, require precise supervision and enforcement mechanisms. Ultimately, it’s essential to design ethical AI systems that serve the people without exploiting their vulnerabilities.

Attorney-at-law, founder of the law firm Gabriela Bar Law & AI, an experienced expert in the law of new technologies and in AI law and ethics.

PhD in ethics, philosopher, sociologist, TEDx speaker. She works on the ethics of new technologies and loves teaching. Triathlete and marathon runner who has fallen in love with golf.

Artificial intelligence researcher specializing in the analysis of human interactions with large language models. She is also an advocate of explainable AI (XAI), promoting the development of algorithms that are not only effective but also understandable and ethical.

Project manager with many years of experience in areas such as education, finance, chemistry, biotechnology and medicine. She specializes in managing AI-related projects, particularly in finance and accounting. She is actively involved in creating AI product development strategies for the accounting sector and coordinates innovative projects in this area. She is a member of the AI&More and GRAI communities. She completed the AI for Managers program as well as numerous AI-related trainings and courses.

Founder and CEO/CTO of the SOC TECH LAB Foundation. Green Digital Designer, Social & UX/UI Researcher. He designs and implements green digital products that use artificial intelligence. He conducts digital maturity and digital needs audits as well as digital transformation training for non-profit organizations and businesses. He studied and completed research internships in Poland, the United Kingdom, Iceland, Norway and Estonia. He has been involved in the non-profit sector since 2001 and in the IT sector since 2019.

Experienced IT systems security expert, certified CISSP and ISSAP, ISO27001 lead auditor. Member of ISC2 and ISSA Polska, speaker at security and AI conferences, author of scientific publications, trainer. Works with Krypton Polska as a systems architect.

Social activist, expert in new technologies and in higher education. Founder and president of the board of the Fundacja Przyszłości Prawa, president of the New Technologies Law Student Research Group at the Faculty of Law and Administration of the University of Warsaw. Member of the Working Group on AI at the Ministry of Digital Affairs and member of the international expert group ESU Task Force on AI and Digitalisation. Member of the Academic Cyber Council at the Commander of the Cyberspace Defence Forces Component. Student affairs expert of the Polish Accreditation Committee and international quality assurance expert at the European Students’ Union. He conducts training and lectures at the national and international level on the ethical use of technology, in particular AI.

Attorney-at-law, political scientist, Data Protection Expert at Allegro. He has more than a dozen years of experience in technology law, personal data protection and the legal aspects of e-commerce platforms.

Responsible AI & Behaviour Advisor. She supports organizations in creating responsible and ethical AI, drawing on behavioral economics and user experience. Creator of the Ethical AI & Human Behaviour Framework. She gained her knowledge at, among others, Harvard Business School. Member of the working group of the first EU General-Purpose AI Code of Practice.

AI sociologist, independent researcher and local government activist. Member of GRAI, the human rights and democracy section at the Ministry of Digital Affairs. Doctoral candidate specializing in research on the consequences and impact of generative artificial intelligence on social communication processes.

Technology Leader at Orange Innovation Polska. Psychologist and data scientist. AI researcher in the CURIE program, working among other things on building AI-based models for personalized digital user experiences. Author of talks and scientific publications on responsible AI. Winner of first place in the competition for the best doctoral dissertation in business informatics in Poland (2022), recognized in the Perspektywy rankings Top 100 Women in AI Poland and Top 100 Women in Data Science Poland.

Lawyer, graduate of the Faculty of Law and Administration of the Jagiellonian University, as well as of postgraduate studies at the University of Wrocław, Kozminski University (Law of Modern Technologies) and the Medical University of Warsaw.
