Machine translations by Deepl

Concerns over ChatGPT rise - politics gasping after it

Concerns about generative AI, such as ChatGPT, are rising sharply. One problem has not yet been identified and another is already emerging. What do the current (privacy) concerns consist of and how are politicians - national and European - responding to them? Below is the current state of affairs. But tomorrow it may already be different. Particularly because of yet another new development, in the form of plugins, which can potentially be even riskier than ChatGPT itself.

By Henk Boeke, Privacy First advisor 

Earlier this year, the Italian privacy authority threw a spanner in the works, banning ChatGPT. That ban was based on two objections. First, that the processing of personal data (at initial training dates, and via the prompts, with which you feed the system with questions) was inconsistent with the requirements of the GDPR (our AVG). And secondly, that the system did not apply age verification. Which was also inconsistent with GDPR requirements.

At the initial training dates is about the personal data that - possibly as a by-catch - were included in the training of the system, which involved consulting all kinds of online sources. So those personal data may be in the system.

The problem of the prompts, or feeding the system with questions, we noted earlier, in our article Privacy issues surrounding ChatGPT and other generative AI. There we wrote: "Suppose you feed the system with the transcript of a meeting. With the request to make a summary (minutes) of this. So then the system has knowledge of individuals, and what they said. Plus potentially sensitive company information. What does ChatGPT do with that?"

The problem of the age verification is already very old, and has been in play - with us - since the introduction of the AVG. At the time, eminent experts, including Prof Simone van der Hof, noted that actually ALL websites would have to apply age verification, from Nu.nl to Buienradar, in order to comply with the AVG (which says that underage internet users must have parental consent before using websites or apps). Which, of course, is highly problematic in practice. But for the Italian privacy authority, this was a stick to beat the dog.

In the end, it was a storm in a teacup. On 28 April, ChatGPT was made available again in Italy after OpenAI, the maker of ChatGPT, promised "measures and improvements".

What these "measures and improvements" mean in practice remains to be seen. In any case, when creating a new account, it does now warn: "Please don't share any sensitive information in your conversations". But of course, the initial training dates cannot be reversed. Those are just in there.

Also, Open AI now has a age tool added. That sounds impressive, but simply means: asking you to enter your date of birth if you want to open an account. Which is otherwise not verified.

Privacy First comments: We warmly welcome the Italian signal. Especially as a signal. But beyond that, of course, much more is needed, to ensure users' privacy and the stability of our rule of law. More on that later.

Parliamentary questions

On 1 May, the Party for the Animals (PvdD) argued Parliamentary questions, following the Italian ban on ChatGPT. Did the government already know about this, and what would it want to do with it itself?

Yes, we know about that, the minister replied, but we shift all responsibility to the Personal Data Authority (AP). See: Answers Parliamentary questions on the curbing of ChatGPT in Italy due to privacy concerns and its implications for the Netherlands.

Other notable point of the responses: that the Personal Data Authority (AP) should go public with 'education'. Good plan! But what turns out? Searching 'ChatGPT' on the AP's site only gives: AP seeks clarification on ChatGPT. Well... questions but no answers. However, the minister had added: that the AP should go public with information "when the occasion arises". Apparently, for the AP, there is no such reason yet.

Privacy First comments: as far as we are concerned, there is currently cause for the AP to go public with good information. For example: by explicitly pointing out the risks of sharing personal data when formulating prompts.

But much preferably: a clear position from the government itself, with direction and policy, without reference to the AP. In this respect, we endorse the fire letter Politics, bring artificial intelligence (AI) under ónly control this year, by Kees Verhoeven and Mark Thiessen in the Volkskrant of 13 June, and the accompanying petition. This petition has already been signed by numerous prominent figures, including intellectual heavyweights such as Maxim Februari and Bas Heijne, and (ex-)politicians such as Klaas Dijkhoff, Bram van Ojik and Gert Jan Segers.

My AI

The PvdD asked questions not only about ChatGPT but also about 'My AI', Snapchat's chatbot. "Do you foresee that this could go wrong?"

'My AI' is a chatbot from Snapchat that pretends to be an equal conversation partner, especially for children and young people (the big consumers of 'Snap'). While they have no idé that this is not a flesh-and-blood person, but a virtual contact to whom they can entrust all their personal woes, with all the privacy issues that entails. See also: My AI: what is it and why is there fuss about it? (Kidsweek.co.uk 22 April 2023), warning children and young people of the dangers.

The minister's response was again: that it is up to the AP to oversee this.

 Privacy First comments: as far as we are concerned, this is eminently a case where good education and 'transparency' can make all the difference. For a start: by obliging Snapchat to state with every 'My AI' posting that it is a post from a robot. Hopefully, this will be enforced by the European 'AI Act' (about which more later).

Bard

Everyone is - rightly - very concerned about ChatGPT, but that leaves its competitor Bard (from Google) somewhat underexposed. While that one causes just as much concern, as it does pretty much the same thing as ChatGPT. And even more so, because it also provides up-to-date information with it (ChatGPT's training dates run until September 2021). Only we don't see Bard here yet, because Google has blocked access to this AI product from Europe (and Canada) for the time being.

Why Bard is not yet available with us is rather obscure. Google's official statement reads "that this technology is still in its infancy, and will be rolled out gradually and responsibly". In addition, Google wants to remain a "helpful and committed partner" of regulators "to work together to keep these new technologies on track". (See: Almost the whole world can use Google's chatbot Bard except Europe, the reason remains vague (yet), Volkskrant 19 May 2023.)

But there is probably something else going on. Some experts think Google wants to send a signal to strict European regulators with this. Others surmise that Google is unsure whether its chatbot complies with European privacy rules (GDPR), which could lead to hefty fines if it does not.

Privacy First comments: whichever of these statements is the correct one, in all cases the current privacy legislation thus seems to be doing its job well. That's a win!

Europe

Europe is currently following two routes for containing AI. The first route is the so-called AI Act (in Dutch: de 'AI Wet', or 'AI Verordening'), which defines different levels of risk - for unwanted consequences. Ranging from 'low risk' (like intelligent spam filters, or Spotify's recommendation algorithm), to 'unacceptably high risk', like China's social credit system. With a set of requirements and obligations for each level. The higher the risk, the stricter the requirements. Up to 'completely banned' for products in the highest category.

Systems like ChatGPT (and Microsoft's Bing, and Google's Bard, and Snapchat's My AI) would then be required to state that the output was created automatically.

There has been intense lobbying by major tech companies, including Open AI (the maker of ChatGPT), to get their products into the lowest possible category, with the mildest possible requirements. For ChatGPT and similar products, this seems to have succeeded: the progressive EU faction wanted these AI products to be in the 'high risk' category, but they ended up in a lower category.

On 14 June, the European Parliament voted overwhelmingly in favour of the AI Act. But now a long road ahead, with transitional arrangements and all, means that this law cannot come into force until 2026 at the earliest. Meanwhile, while technological developments continue at breakneck speed. A commentator on BNR News Radio compared it to a TGV we are trying to catch up with a Sprinter. (As an aside, to tackle this problem of slow legislation, in addition to the AI Act, there is also the AI Pact, in which tech companies can voluntarily declare earlier compliance with the requirements of the AI Act. Fortunately, they have an ear for that).

The second route of 'Europe' is the so-called AI Treaty. This is an agreement between the members of the Council of Europe, or all European countries (i.e. including Switzerland, Ukraine, etc.) but excluding Russia, Belarus, Kazakhstan and Vatican City. This treaty is mainly about the relationship between AI on the one hand and human rights, democracy and rule of law on the other. On 6 March, State Secretary Van Huffelen sent a 'zero version' of the treaty to the House of Representatives. The aim is to conclude negotiations by the end of 2023.

The big difference between the AI Act (of the EU) on the one hand and the AI Treaty (of the Council of Europe), on the other hand, is that the Act is mainly about products (and their risk levels), while the Treaty is mainly about people (and human rights, including privacy rights). High fines can follow if the Act is violated, while the Treaty provides incentives for citizens to sue. (For more on this: definitely listen to the interview with Catelijne Muller On BNR's Big Five).

Privacy First comments:

  • AI Act: the European mills may be turning slowly, but once we have this law, it will be a huge asset that could become as influential as the GDPR and the AVG, respectively. Only: its enforcement will still be a challenge. Who will implement it all and how? Nothing has been agreed on that yet. The current issues surrounding our own AP make one fear the worst;
  • AI Treaty: how that will eventually look, we still have to wait and see. But at least the basic idea is fine. Because of the fundamental vision of AI, and what we actually want (or don't want) with it, in terms of human rights, democracy and rule of law. Exactly what the philosopher of law Maxim Februari in Act normal yourself advocated so fervently. While at the same time, that convention creates a legal framework that ordinary citizens - or their lawyers - can really use.

Plugins

Finally, the most exciting and disturbing part of the story. To understand that, first the current state of affairs. ChatGPT as we know it today is a stand alone system. With which you can communicate online, but which cannot fetch up-to-date information from the internet, and cannot interact with other websites. ChatGPT is deaf and dumb in terms of interacting with the internet.

Recently, however, there have been plugins, a kind of third-party utility that can 'connect' ChatGPT to the internet. From companies like Expedia, Open Table and Instacart. Which could have huge implications.

In tech magazine Wired (from June 2023) already outlined some simple situations where ChatGPT could be tricked into unwanted behaviour via plugins. Such as: sending fraudulent emails, bypassing security measures, and misusing data entrusted to the plugins.

It takes little imagination to imagine what other trouble could happen when you unleash a generative AI system like ChatGPT on the internet. Especially with the latest version GPT-4, which masters Linux (the language of web servers), and knows all about bombs, biological weapons, and buying ransom ware on the Dark Web. Think for yourself what that could mean for privacy issues, such as identity fraud, when ChatGPT - via plugins - can do its own thing on the internet.

Privacy First comments: plugins for ChatGPT can be very useful, but also extremely dangerous. With the 'Plugin policies' in its terms of use Open AI - the maker of ChatGPT - is trying to prevent the worst missteps, but 'self-regulation' does not always go well, as evidenced by the Boeing 737 Max aircraft that fell out of the sky recently. It is to be hoped that the new AI Act will monitor this too.