GDPR.pl – personal data protection in the EU, the GDPR and DPOs
Portal on the EU General Data Protection Regulation

Artificial Intelligence (AI) through the eyes of supervisory authorities

Authors: Tomasz Osiej (attorney-at-law), Piotr Andruszaniec (attorney-at-law)

We asked supervisory authorities about their perceptions of and experience with artificial intelligence – and we have the answers!


In today’s world, artificial intelligence is becoming an integral part of our lives, revolutionising the way we function, work and communicate. From simple everyday tasks to complex industrial processes, artificial intelligence is transforming our society and economy on many levels.

Do you like that introduction? Well, it was written by ChatGPT, a tool based on artificial intelligence (AI). Don't like it? In a matter of seconds we could produce three alternative introductions, or even the entire article. This small test shows how far artificial intelligence has developed in recent years and, consequently, how rapidly tools based on various types of algorithms, including the aforementioned ChatGPT, are evolving. AI-based tools can undoubtedly be very useful in everyday life, from support with small tasks to complex processes such as advanced disease diagnosis. Unfortunately, AI presents not only opportunities but also serious risks, including to the protection of privacy and personal data. For example, AI tools can be used to create so-called deepfakes, while the lack of control over data processed in an automated manner by AI algorithms is a concern in itself.

It is worth mentioning that in March this year the European Parliament adopted a regulation on artificial intelligence (the AI Act). With that in mind, we at the GDPR.PL portal asked supervisory authorities from various countries about their perspective on the development of AI and the risks associated with it.

Questions for supervisory authorities

We addressed the following questions to the supervisory authorities:

  1. Does the supervisory authority see AI as a threat or an opportunity for data protection? If so, in which area in particular, and why?
  2. Independently of EU regulations (the AI Act), is the supervisory authority planning any local action – an awareness campaign, national legislation, guidelines, training or anything else?
  3. Has the supervisory authority already issued any decisions regarding artificial intelligence or machine learning?

The majority of the authorities we approached shared their insights with us, including the Polish supervisory authority; we summarise their responses below.

AI – an opportunity or a threat?

Most of the supervisory authorities we received responses from indicate that AI tools may represent both an opportunity and a threat. According to the authorities, the use of AI tools is undoubtedly a major challenge in the context of data protection. On the other hand, some regulators (e.g. the German supervisory authority from Hamburg) submit that AI-based tools can even be used to strengthen the protection of personal data. For example, that authority noted that it is currently supporting the Hamburg Senate in introducing an AI-based tool that erases personal data from court judgments prior to their publication.
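The internals of the Hamburg tool have not been published. Purely as an illustration of what automated judgment anonymisation involves, here is a minimal sketch; the patterns, placeholder labels and sample text are our own illustrative assumptions, and a real tool would rely on trained named-entity recognition models rather than simple regular expressions:

```python
import re

# Illustrative patterns only -- a production anonymisation tool would use
# trained NER models and cover far more categories of personal data.
PATTERNS = {
    "name": re.compile(r"\b(?:Mr|Ms|Mrs|Dr)\.?\s+[A-Z][a-z]+(?:\s+[A-Z][a-z]+)?"),
    "date_of_birth": re.compile(r"\bborn on \d{1,2} \w+ \d{4}"),
    "address": re.compile(r"\bresiding at [^,.]+"),
}

def redact_judgment(text: str) -> str:
    """Replace recognisable personal data with labelled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

judgment = ("The claimant, Mr. Jan Kowalski, born on 4 May 1980, "
            "residing at 12 Example Street, appealed the ruling.")
print(redact_judgment(judgment))
```

Even this toy version shows why the Hamburg authority's point cuts both ways: the tool itself must process personal data in order to remove it, so its deployment still has to comply with the GDPR.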

Supervisory authorities (e.g. the Spanish supervisory authority) indicate that, without questioning the benefits that could come from the use of AI tools, the protection of individual rights and freedoms must always be the first consideration. The Spanish regulator has indicated that it constantly monitors AI-related technologies in order to be able to react in real time to any threats related to them.

Regulators (e.g. the Austrian supervisory authority) also noted that, as a general rule, the GDPR is technologically neutral. This means that the GDPR should not inhibit the development of new technologies, including those related to AI. At the same time, neither the AI Act nor any other legal regulations in this area should limit the application of data protection legislation or the powers of supervisory authorities.

Some examples identified by the German supervisory authority for Schleswig-Holstein are worth noting, as they illustrate both the risks and the possible benefits of processing personal data using AI. Among the potential benefits, the regulator pointed to support for supervisory authorities in exercising their investigatory powers and AI-supported defence against cyberattacks on IT systems that process personal data. Among the potential threats, it listed AI-supported cyberattacks, unlawful collection of training data containing personal data, deepfakes, difficulties in exercising data subject rights, difficulties in fulfilling the accountability principle and proving compliance with the GDPR, and, for example, collecting too much data in order to prove that a given person is a human and not an AI bot.

Local actions in regard to AI

The responses we received show a clear trend: supervisory authorities largely recognise the importance of the processing of personal data via AI tools. Some regulators indicated that separate authorities are being established in their countries to supervise the use of AI tools. Notwithstanding the above, the supervisory authorities are undertaking many initiatives related to the processing of personal data via AI, including issuing various types of guidance and guidelines. In this context, the activities of the Spanish supervisory authority are worth noting. It has published a number of materials in this area (e.g. recommendations for users of AI chatbots – https://www.aepd.es/infographics/info-recommendations-chatbots-ai.pdf – and a reference map concerning personal data processing that embeds AI – https://www.aepd.es/infographics/personal-data-processing-that-embed-ai.pdf).

Numerous AI initiatives are also being undertaken by the ICO (Information Commissioner’s Office – the UK supervisory authority). For example, in early 2024 the ICO launched a consultation series on generative AI (https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2024/01/information-commissioner-s-office-launches-consultation-series-on-generative-ai/).

The initiatives of the German supervisory authorities are also worth mentioning. For example, the regulator from Hamburg published a checklist on the use of AI-based chatbots, including ChatGPT (https://datenschutz-hamburg.de/fileadmin/user_upload/HmbBfDI/Datenschutz/Informationen/20231113_Checklist_LLM_Chatbots_EN.pdf).

Several of the supervisory authorities we received responses from have announced further educational and informational activities regarding the use of AI tools. For example, the Norwegian supervisory authority announced the preparation of guidelines on the relationship between the AI Act and the GDPR, while an authority from Germany (Berlin) is planning an event dedicated to AI issues in the second half of the year.

Local rulings

Most supervisory authorities indicated that they have not yet issued local rulings (decisions) concerning the processing of personal data via AI tools. Particular attention is therefore due to a decision of February 2022 by the Hungarian supervisory authority, in which the regulator imposed a fine of EUR 650,000 on a controller. The case concerned the use of AI-based software to analyse recorded telephone calls automatically. Data subjects were not informed of this processing of their personal data and could not object to it in any way. The Hungarian regulator emphasised in its decision that it is difficult to implement AI in a transparent and safe manner without applying additional safeguards (https://edpb.europa.eu/news/national-news/2022/data-protection-issues-arising-connection-use-artificial-intelligence_en). Meanwhile, the German supervisory authority from Mecklenburg indicated that it expects to issue a decision regarding ChatGPT within the next few months.

The standpoint of the Polish supervisory authority

The Polish supervisory authority (UODO) indicated in its standpoint that AI has been of interest to it for a long time. At the same time, it emphasised that the novel design of AI-based systems does not exempt controllers and processors from complying with the GDPR, including, among others, with regard to the legal basis for processing, the processing principles and ensuring security.

The Polish regulator also noted that it has repeatedly pointed out that AI-based systems which use personal data to make decisions must be transparent and accountable, to ensure that they do not make unjust or biased decisions. In addition, in the opinion of the UODO, such systems should be subject to human oversight so that potential errors can be detected and, if necessary, corrected.
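The UODO's human-oversight point can be sketched as a simple routing rule: automated decisions are finalised only above a confidence threshold, and everything else is queued for a human reviewer. The threshold value, field names and outcomes below are purely illustrative assumptions, not anything prescribed by the UODO or the GDPR:

```python
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.90  # illustrative cut-off: below this, a human must decide

@dataclass
class Decision:
    outcome: str       # the model's proposed outcome
    confidence: float  # the model's self-reported confidence, 0..1
    final: bool        # True only once the rule (or a human) signs off

def route(outcome: str, confidence: float) -> Decision:
    """Finalise high-confidence automated decisions; queue the rest
    for human review, in the spirit of human oversight of AI systems."""
    if confidence >= REVIEW_THRESHOLD:
        return Decision(outcome, confidence, final=True)
    return Decision(outcome, confidence, final=False)  # goes to a human reviewer

print(route("approve", 0.97))  # finalised automatically
print(route("reject", 0.62))   # routed to a human reviewer
```

A real system would also log each routing decision, since the accountability principle requires the controller to demonstrate how and why a decision was made.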

The UODO noted that it has long been involved in educational activities related to new technologies, including AI. For example, the regulator organised the “Forum for New Technologies” conference and discusses new technologies and AI as part of “Your Data – Your Matter”, an educational project addressed to children. The supervisory authority also noted that it participates in the work of the EDPB (European Data Protection Board), including its ChatGPT taskforce and the Technology Expert Subgroup.

Interestingly, the UODO informed us that it is currently dealing with a complaint in which the complainant accuses the developer of ChatGPT (OpenAI) of, among other things, processing personal data in an unlawful, unreliable and non-transparent manner.

Beyond the standpoint described above, the President of the UODO, Mirosław Wróblewski, recently pointed out during a session of the Polish Parliament’s Permanent Subcommittee on Artificial Intelligence and Transparency of Algorithms that “the personal data protection issue is an element of the fundamental rights that are implemented in the Artificial Intelligence Act”. He stressed that a large part of the solutions provided by this act relate to the processing of personal data. The President of the UODO said that “this is due to the nature of the operation of artificial intelligence algorithms, which, among others, must be trained on data. The Artificial Intelligence Act ensures that the right to personal data protection is respected, but actually guaranteeing it will be a major challenge for data protection authorities, including the UODO” (see more – https://uodo.gov.pl/pl/138/3058).

An opportunity and a threat

In summary, all the authorities that responded to our survey indicated that AI is an opportunity but also a threat (or vice versa). Each of them consistently pointed out that AI cannot function without supervision in the context of data protection and privacy. Despite country-specific differences – some of our respondents highlighted particular risks (e.g. gender-based discrimination, in the case of Spain) while others focused on other challenges – each of them confirmed that AI will change, or in fact is already irreversibly changing, the world, and that the supervisory authorities are trying not to leave these changes unsupervised.
