Updated Wednesday, May 24, 2023

Data Protection aspects when using the ChatGPT-API

Current developments and possible solutions for handling the ChatGPT-API (Update)

Boris Arendt

Salary Partner (Attorney-at-law)

Current developments
Use of ChatGPT via the OpenAI API platform
Who is the controller?
What should be considered when using the API?
Update from 16.05.2023
Conclusion


After OpenAI launched the chatbot ChatGPT in November 2022, developments have come thick and fast. We already pointed out some of the data protection challenges in our "Insights" in March. But what about the ChatGPT API?


Current developments

Not a day goes by without exciting news about OpenAI or other AI players being published, but also without new problems and challenges becoming known. For example, OpenAI had to report its first data privacy incident after chat histories, email addresses and contact information of users were disclosed to other users.

At the end of March, the Italian data protection authority temporarily banned the operation of ChatGPT in Italy for, among other things, lacking a legal basis for processing personal data to train the AI algorithm and failing to implement an effective age verification system to protect children under 13.

In its press release, the regulator highlights that OpenAI could face a fine of up to 20 million euros or 4% of its annual global revenue if it does not comply with the requirements of the order. OpenAI was given 20 days to respond to the authority's order, and an initial dialogue between the regulator and OpenAI has already taken place. In the meantime, the Canadian regulator has also announced that it will open an investigation into OpenAI. It should therefore only be a matter of time before other supervisory authorities follow suit.


Use of ChatGPT via the OpenAI API platform

The current reactions and administrative measures are mainly directed at OpenAI's ChatGPT. The chatbot is offered in a free version as well as via a paid subscription. The addressee of the supervisory measures is therefore the company providing the tool (OpenAI).

However, ChatGPT is also offered via the OpenAI API platform, through which companies can integrate ChatGPT into their own services. A very common scenario is the integration of ChatGPT into customer support, but apps like Snapchat, Quizlet, and Shopify have also integrated ChatGPT via the API. Just recently, Helvetia Insurance from Switzerland integrated ChatGPT into its customer chatbot. In the instructions for use, the chatbot is presented as an experiment for answering questions about insurance and retirement planning, and customers are asked not to enter personal topics into the chatbot console.
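
For illustration, the following is a minimal sketch of such a customer-support integration, based on OpenAI's Python library as available at the time of writing (the pre-1.0 "openai" package); the model name, prompts, and support scenario are illustrative assumptions, not taken from any specific provider's implementation:

```python
# Minimal sketch of a customer-support integration via the ChatGPT API.
# Assumes the pre-1.0 "openai" Python package; model and prompts are illustrative.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # never hard-code API credentials

def answer_support_question(question: str) -> str:
    """Send a single customer question to the ChatGPT API and return the reply."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You answer questions about our insurance products."},
            {"role": "user", "content": question},
        ],
        temperature=0.2,  # keep answers conservative in a support context
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(answer_support_question("How do I report a claim?"))
```

Every prompt a user types into such a chatbot is transmitted to OpenAI's servers, which is precisely where the data protection questions discussed below arise.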

We are currently receiving more and more inquiries about how OpenAI products can be integrated into our customers' products and services via the API in a legally compliant manner.


Who is the controller?

If companies integrate ChatGPT into their products and services via the API, they generally also become controllers within the meaning of data protection law. In these cases, OpenAI (more precisely OpenAI, L.L.C., 3180 18th St, San Francisco, CA 94110) acts as a service provider and processor on the instructions of the company using the API.

As a controller, each company using the API must be able to demonstrate for each (data) processing step that the processing in question is data protection compliant. While the Italian supervisory authority addressed its order to OpenAI directly, in the future any company using the API could also become the addressee of supervisory measures. It is therefore advisable to properly implement the applicable data protection requirements when integrating ChatGPT into one's own services and products.


What should be considered when using the API?

The controller (the company using the API) must be able to demonstrate the legal basis for the data processing when using the chatbot. This should be fairly unproblematic with regard to the pure provision of services (e.g., customer support), since the controller can usually rely on the necessity for the protection of a legitimate interest (Art. 6 (1) f) GDPR) or for the performance of a contract (Art. 6 (1) b) GDPR).

It becomes more problematic when the inputs (prompts) are used to train the AI algorithm, as this processing for OpenAI's own purposes is no longer covered by the data processing agreement between the controller and OpenAI. It is conceivable for the controller to obtain consent from the user that also covers this further processing by OpenAI. However, it is more advisable to exclude the further processing of the input by OpenAI, so that potentially sensitive content is not further processed for the AI algorithm.

API Content vs. Non-API Content

OpenAI differentiates between "API Content" and "Non-API Content". According to the terms of use, user input received via the ChatGPT API (API Content) is not used for OpenAI's own purposes (e.g., further development and service improvement) unless the customer explicitly requests this. Only Non-API Content is used by default for training the AI algorithm, although users are offered an opt-out via a web form linked to their user account.

From a contractual point of view, it therefore appears that the controller using the API is on the safe side for the time being. OpenAI's privacy policy, however, lists additional purposes, such as analyzing and improving services, conducting research, and developing new programs and services. It thus remains unclear whether all processing for OpenAI's own purposes is excluded or only the processing for training the algorithm. A clarification that no customer data or user input is processed for OpenAI's own purposes would be desirable here.

Transparency for users

The controller must make the data processing transparent to its users when using OpenAI services. This is regularly done through data protection notices or privacy policies as an expression of the transparency obligations under Art. 13 and Art. 14 GDPR. However, it is not sufficient for the controller to simply refer to OpenAI's privacy policy. Rather, it must provide its own information about how and for what purposes user data is processed by the controller and its processors, how long it is stored, and when it is deleted. It must also be transparent to users how they can exercise their data protection rights, and the controller and OpenAI must define how compliance with these rights can be ensured when service providers such as OpenAI are involved.

Contractual requirements

When using OpenAI services via the OpenAI API platform, a commissioned processing relationship is established between the controller as the client and OpenAI as the processor. For this purpose, OpenAI provides a Data Processing Agreement (DPA, as of 05.04.2023), which can be concluded via an electronic signing process. However, it must be examined in detail whether the respective use actually constitutes processing on behalf of the controller.

This is because scenarios are also conceivable in which the parties are to be regarded as joint controllers under Art. 26 GDPR (e.g., if the customer's training data is processed together with OpenAI for the parties' joint purposes). OpenAI does not (yet) provide a template for an agreement pursuant to Art. 26 GDPR.

If the data is processed via the API in third countries (e.g., the USA), standard contractual clauses (SCC) must also be concluded. OpenAI has integrated the SCCs into its DPA, since, according to OpenAI, the data processing takes place in the USA. As long as there is no adequacy decision by the EU Commission on the EU-U.S. Data Privacy Framework (more on this in these Insights), the controller must also conduct a so-called Transfer Impact Assessment (TIA) and, if necessary, conclude an agreement on supplementary measures for the protection of the data.

Data Protection Impact Assessment (DPIA) and data security

If the data processing involved in using the ChatGPT API entails high risks for the data subjects, a data protection impact assessment (DPIA) must be carried out pursuant to Art. 35 GDPR.

The joint body of the German data protection supervisory authorities (DSK) has published a so-called positive list of processing activities for which a DPIA is mandatory. Particularly relevant here are item 11 of the list (customer support using artificial intelligence) and item 13 (telephone call evaluation using algorithms). In this context, it is of decisive importance for which purposes the service is to be used. For example, if the chatbot is to be integrated into the customer support of a health insurer (as in the Helvetia Insurance example above), health data may quickly be processed, and the DPIA would have to analyze how the risk of disclosure of sensitive information and health data is effectively handled or excluded.
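
One conceivable technical measure, shown here only as a simplified sketch under the assumption that obvious identifiers can be matched with patterns, is to filter or pseudonymize personal identifiers in user input before a prompt leaves the controller's systems; the patterns below are illustrative and no substitute for the risk analysis of a DPIA:

```python
# Simplified sketch of a pre-submission filter that replaces obvious personal
# identifiers in user input with placeholders before the prompt is sent to the
# ChatGPT API. The regex patterns are illustrative assumptions and will not
# reliably catch all personal or health-related data.
import re

# Order matters: match IBANs first so their digits are not misread as phone numbers.
REDACTION_PATTERNS = {
    "[IBAN]": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "[EMAIL]": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "[PHONE]": re.compile(r"\+?\d[\d /-]{7,}\d"),
}

def redact(prompt: str) -> str:
    """Replace matches of the known identifier patterns with placeholders."""
    for placeholder, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Contact me at jane.doe@example.com, IBAN DE89370400440532013000"))
# -> "Contact me at [EMAIL], IBAN [IBAN]"
```

Such filtering is only one building block; residual risks, such as free-text descriptions of a medical condition, must still be assessed in the DPIA.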

The DPIA aims to identify and assess the risks for the data subjects in a structured manner and to determine how these risks can be addressed with technical and organizational measures and reduced to an acceptable level. At this point at the latest, an in-depth analysis of OpenAI's security concept is likely to become necessary. According to OpenAI's DPA, however, this is only made available on request. The freely available information on the website is probably not sufficient on its own for a reliable assessment of whether the measures can effectively counter the identified risks.


Update from 16.05.2023

On 28 April 2023, the Italian supervisory authority lifted the ban on ChatGPT in Italy after OpenAI made the service more transparent overall, introduced age verification for local users, and made it easier to exercise data subjects' rights.

In particular, OpenAI simplified the process for opting out of the use of Non-API Content for its own training purposes. Instead of the previous opt-out via web form, the storage of one's own chat history can now be turned off with a click in the settings. A conversation conducted under this setting is then retained by OpenAI for 30 days and reviewed only to monitor possible misuse.

In addition, OpenAI has announced the release of "ChatGPT Business" in the coming months. ChatGPT Business will follow the "API data usage policies", under which end users' data is not used to train OpenAI's models by default.

Lastly, OpenAI added a new "export option" to the settings. This makes it possible to export one's ChatGPT data and thus understand what information OpenAI stores. The user then receives a file with their conversations and all other relevant data by email.


Conclusion

The latest developments around AI providers are likely to be just the beginning of a longer phase of consolidation between AI providers and regulators. Even if the focus is currently on OpenAI, the same areas of tension apply to other providers such as Google's Bard and others.

The current dynamic therefore requires close monitoring of developments so that new requirements from supervisory authorities or legislators can be responded to in good time.

Even though the news is currently dominated by reports of bans, regulatory measures, and risks, the integration of AI services into one's own offerings and services can – at least from a data protection perspective – be designed in a GDPR-compliant way.

If your company is considering implementing ChatGPT in its service offerings, it is advisable to address privacy compliance from the earliest planning phase (privacy by design) and to cover the minimum legal requirements, such as the lawfulness and transparency of the data processing, the conclusion of the necessary data protection agreements, the performance of a data protection impact assessment, and the review and specification of technical and organizational measures.

