Work on implementing regulations to protect citizens from deepfakes should be reopened
According to Mirosław Wróblewski, President of the Personal Data Protection Office, the solutions proposed by the Ministry of Digitisation regarding deepfakes are a step in the right direction, but they do not provide full protection. It is worth doing more, as other countries are already doing.
The President of the Personal Data Protection Office responded to a letter from Dariusz Standerski, Secretary of State at the Ministry of Digitisation. That letter was itself a response to the President of the Personal Data Protection Office's statement on the need to introduce solutions for effective protection against the negative impact of deepfakes. In the opinion of the President of the Personal Data Protection Office, the current legal order of the European Union, whose framework is defined, among others, by the Artificial Intelligence Act (AI Act) and the Digital Services Act (DSA), together with national regulations such as the Civil Code, the Criminal Code, and copyright law, provides only a selective and insufficient response to the multifaceted threats posed by deepfake technology. However, this can be changed. One opportunity to improve the situation is to resume work on the regulations implementing the DSA, following the recent veto by the President of Poland.
The President of the Personal Data Protection Office notes that the vetoed bill did not contain any provisions directly related to counteracting deepfakes. Now, this issue can be addressed even more effectively, possible solutions can be analysed, and a more coherent legal framework can be developed to provide effective protection against this technology and its effects.
The letter from the Ministry of Digitisation also mentions measures taken at the EU level:
- Under the DSA, the European Commission is responsible for designating a given provider as a very large online platform (VLOP) and a very large online search engine (VLOSE). Such platforms and search engines are required, among other things, to label AI-generated content that may mislead users (deepfakes). (This follows from the EC guidelines of March 26, 2025, on mitigating systemic risks to electoral processes.)
- The AI Act (in Article 50(4)) requires entities generating images, audio, or video content that constitutes deepfake content to disclose that such content has been artificially generated or manipulated.
- In the draft Digital Omnibus Regulation on AI, COM(2025) 836, the European Commission proposed that the European Artificial Intelligence Office (AI Office) at the European Commission be given powers to facilitate the effective implementation of obligations to detect and label artificially generated or manipulated content.
In the opinion of the President of the Personal Data Protection Office, the above-mentioned legal regulations and actions of the European Commission do not provide tools for full and effective protection against the consequences of abuse related to deepfake technology. The European Union's legal system lacks comprehensive provisions on combating illegal content generated by this technology. Therefore, many EU member states (e.g., France, Denmark, Italy, and Germany) and non-EU countries (e.g., the US) are deciding to introduce specific national legal solutions to guarantee protection against deepfakes.
The Italian legislator has assumed that artificial intelligence is not a neutral technological tool, but a system capable of having real effects on the rights and freedoms of individuals. It therefore requires special supervision. Consequently, the Italian law on artificial intelligence has been designed as part of a broader regulatory system that covers personal data protection, defines civil liability, and affects criminal liability and consumer rights protection. One of these measures was an amendment to the Italian Criminal Code, introducing a definition of the crime of illegally disseminating falsified or modified digital content generated using artificial intelligence.
The Court of Justice of the European Union has drawn attention to the need to provide data subjects with the widest possible protection of their personal data, in particular their image. In its judgment of December 2, 2025, in Case C-492/23, concerning the Russmedia website, the Court ruled that the operator of an online trading platform is obliged to verify whether the personal data provided in an advertisement is that of the advertiser or of another person. If the data concerns another person, the person posting the advertisement on the platform must demonstrate that they have the consent of the data subject.
The interpretation adopted by the Court protects not only famous people but also other internet users from the unlawful use of their image. This is a particularly important ruling given the lack of provisions in the European Union's legal system to combat deepfakes.
It is necessary to introduce solutions into Polish law that are adapted to the specific nature of deepfake technology and that define and directly sanction the dissemination of harmful content created with its help. It is also a matter of implementing solutions that will systematically regulate counteracting this technology and protecting citizens from its negative effects, at the administrative, criminal, fiscal, and judicial levels, according to the President of the Personal Data Protection Office.
Only in this way can real protection be ensured, not only for the privacy of individuals and their personal data, but also for the interests of citizens and the state itself, and in particular for the guarantee of our security.
DPNT.413.31.2025