Hungary’s new biometric surveillance laws violate the AI Act

This blog post is a legal analysis of new Hungarian legislation that expands the use of facial recognition technology in a manner that violates the EU Artificial Intelligence Act.

In March 2025, three amendments aimed at criminalising LGBTQIA+ demonstrations and increasing biometric surveillance were rushed through the Hungarian Parliament within 24 hours and without any public debate. These amendments, which entered into force on 15 April, dramatically expand the use of facial recognition technology (FRT) in Hungary, including in the context of minor infractions and peaceful assemblies, such as Budapest Pride.

The Civil Liberties Union for Europe, EDRi, the European Center for Not-for-Profit Law and the Hungarian Civil Liberties Union believe that this broadened application of FRT to track and identify individuals attending banned Pride events and committing even minor infractions violates the EU AI Act and the Charter of Fundamental Rights of the EU.

What has changed in Hungary?

The amendments adopted in March allow the Hungarian police to use facial recognition technology in all types of infraction procedures, not just serious ones. Before the new legislation, facial recognition was only permissible in cases where an infraction was punishable by a custodial sentence (a use which was itself already problematic from a human rights perspective).

Now, however, the use of FRT has been widened to include all infractions – for example, the police can now use FRT to identify people attending a banned Pride march or even for minor violations like jaywalking. This expanded use of surveillance is based on video footage, often recorded at public demonstrations.

What is real-time biometric identification and why is it regulated?

The EU’s Artificial Intelligence Act (AI Act), adopted in 2024, limits the use of real-time remote biometric identification (RBI) in public spaces by law enforcement. RBI involves identifying people as they move through public spaces using biometric data (like face scans), often without their knowledge or consent.

It is regulated because it’s deeply intrusive. It can make people feel like they’re under constant surveillance and discourage them from exercising their rights, such as attending protests or other public demonstrations.

Under Article 5(1)(h) of the AI Act, such real-time biometric surveillance is prohibited except in a few narrowly defined cases, such as finding victims of serious crimes or preventing imminent threats. Even in these cases, strict procedures for its authorisation and use must be followed.

How does Hungary’s law violate the AI Act?

Although the Hungarian system uses still images (such as those from CCTV), it allows automatic comparisons with a government database to identify individuals as part of infraction proceedings – in real- or near-real-time. Hungarian police now have direct connections to the system which – based on our analysis – will enable rapid identification during protests.

According to the AI Act, even systems that work with slight delays count as “real-time” if the identification happens fast enough to still impact people’s behaviour during public events. The Hungarian system, especially in protest contexts, clearly fits this description. The automated facial comparison system now used in Hungary is clearly designed to process newly or recently generated material, and to automatically identify people in that material through a direct connection to a system run by the Hungarian Institute for Forensic Sciences.

Therefore, the RBI system in question meets the criteria laid down in the AI Act for a “real-time” system, and is distinct from a “post” system, where the input material is generated independently of the system’s use. This distinction is crucial: “real-time” biometric surveillance is already prohibited by the AI Act, whereas retrospective facial recognition is merely classified as “high-risk”, with rules regulating such use coming into effect in 2026.

How will this infringe rights and freedoms?

The use of FRT in Hungary risks discouraging people from exercising their fundamental rights, particularly freedom of assembly and freedom of expression. When people know they might be scanned, identified, and punished for participating in a peaceful protest, many will decide not to attend.

This “chilling effect” is something that the AI Act and the EU Charter aim to prevent. By introducing real-time biometric surveillance for low-level infractions, Hungary is violating both the spirit and letter of EU law.

What should be done now?

Hungary’s new legislation allows for the surveillance of people engaging in peaceful protest or committing minor infractions in a manner that is clearly at odds with the AI Act. Allowing this form of AI use undermines free speech, public participation and, ultimately, people’s trust in democracy.

The EU must urgently scrutinise this legislation. The European Commission’s new AI Office, which has among its responsibilities the task of protecting people against AI risks, must ensure that the Act’s safeguards are not ignored. This isn’t just a domestic issue. Politicians in capitals across the Union – and people around the world – will be watching to see how the EU reacts.

It’s a test case for how seriously the EU will enforce its own AI rules – and protect people’s rights.

