Addressing Privacy and the GDPR in ChatGPT and Large Language Models

Large language models (LLMs) are a type of machine learning model that is trained on vast amounts of text data to generate human-like text. They are able to generate text that can be indistinguishable from text written by a human, and can be used for a wide range of natural language processing tasks such as language translation, text summarization, and question answering. (Source: ChatGPT)

These language models, as defined by Springer, assign “a probability to a piece of unseen text, based on some training data.” Usually, very large amounts of text are used to train a language model which, roughly speaking, learns the probability distribution of sequences of words. 

While these language models are becoming increasingly popular and more widely accessible, they also come with privacy implications that must be addressed. One major concern when it comes to privacy and data protection regulation compliance is that LLMs can, in a sense, “memorize” personally identifiable information (PII). This PII is then at risk of being discovered by another user or an attacker. We have previously covered the privacy risks of language models quite extensively.

In this post, we discuss how LLMs are being used (and misused), what risks companies leveraging these technologies should consider with respect to exposing PII, and what solutions exist to protect their customers’ personal data.

Language Model Applications

A handful of companies have created large language models for commercial use, including OpenAI (ChatGPT and GPT-3) and Google (LaMDA).

Jasper AI, an AI content platform which leverages GPT-3, has focused on generating marketing pieces such as blogs, ads, social media posts, and more. They raised $125M on a $1.5B valuation in October and were named “one of America’s fastest-growing private companies.”

Another major use case for LLMs is chatbots, which can be used for customer service, virtual assistants, and even physical and mental healthcare applications.

Malicious Use

While the misuse of ChatGPT for generating malicious code and for creating mass-customized phishing scams has rightfully received a lot of attention recently, there has been relatively little reflection on what it means for companies using these LLMs to send their own users’ personal data to a third party.

Much like the rise of political manipulation by foreign powers on social media, it is hard for regulators to immediately catch up to new technology and for tech companies to foresee and prevent all of the ways in which their tech might be misused. In the case of OpenAI, there is some clear evidence that they are putting thought into limiting the harms of their system and part of their methodology does require the ability to actually see the inputs to their systems. 

Asking ChatGPT About Personally Identifiable Information

Personally Identifiable Information (PII) is any data that can be used to identify a given person. Direct identifiers include things like name, age, address, credit card number, etc. Indirect identifiers are pieces of information that can be combined to identify a given person, such as IP address, religion, birthplace, etc.

How does ChatGPT deal with that? Let’s ask it!

Question: How does ChatGPT deal with personally identifiable information?

Response: As a language model, GPT-3 does not have the ability to collect or store any personal information. It can only process the information that is given to it as input and respond with generated text based on that input. When dealing with personal information, it is important to keep in mind that GPT-3 may generate text that contains sensitive information, so it’s the responsibility of the user to ensure that proper precautions are taken to protect the privacy of any individuals whose information is being used.

In terms of usage, OpenAI, the company behind GPT-3, has developed safety mechanisms such as “davinci-coding” and “curie” to prevent the model from generating sensitive or harmful content. Additionally, when working with personal information, it is important to follow data protection laws and regulations such as General Data Protection Regulation (GDPR) or California Consumer Privacy Act (CCPA) to ensure that the personal information is being handled appropriately.

What Does This Mean?

It is up to every business using the system to determine what is or is not personally identifiable information from their users and remove or replace it before sending it through to ChatGPT. If not, they may be in breach of some very serious data protection laws. The European General Data Protection Regulation (GDPR) and numerous other data protection regulations worldwide relevantly require:

  1. Positive consent for the collection and use of personal data; that is, a user has to explicitly give you the right to use their data for specific purposes.
  2. Right to be forgotten, where a user can request that all instances of their personally identifiable information be deleted.
  3. Access to information, which means a user has the right to request access to all of the PII that has been collected from them.
  4. Data minimization, which requires companies to remove all PII that is unnecessary from the data they collect.
  5. Pseudonymization or anonymization where possible, which means replacing or removing the PII (a minimal sketch of this step follows the list).
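
To make requirement 5 concrete, here is a minimal Python sketch of pseudonymization. The `detect_pii` helper is hypothetical and shown as a toy regex; real detection is much harder, as the next section discusses. Each PII span becomes a numbered placeholder, and the mapping is stored separately so it can be deleted outright to honour a right-to-be-forgotten request.

```python
import re

# Hypothetical detector: a real pipeline would use a trained PII-detection
# model; this toy version only finds email addresses, for illustration.
def detect_pii(text: str) -> list[tuple[str, str]]:
    return [("EMAIL", m.group(0))
            for m in re.finditer(r"[\w.+-]+@[\w-]+\.[\w.]+", text)]

def pseudonymize(text: str) -> tuple[str, dict[str, str]]:
    """Replace each PII span with a numbered placeholder and keep the
    mapping separately, so it can be deleted to honour a
    right-to-be-forgotten request."""
    mapping: dict[str, str] = {}
    for i, (label, value) in enumerate(detect_pii(text), start=1):
        token = f"[{label}_{i}]"
        mapping[token] = value
        text = text.replace(value, token)
    return text, mapping

redacted, key = pseudonymize("Contact me at jane.doe@example.com, please.")
print(redacted)  # Contact me at [EMAIL_1], please.
print(key)       # {'[EMAIL_1]': 'jane.doe@example.com'}
```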

Solution: Redaction or Synthetic PII

Redacting PII from chatbot data sets is no small feat. Unstructured data such as chat transcripts are notoriously difficult to redact properly, let alone at scale. Human redaction is slow, expensive, and error-prone, whereas automated solutions such as regexes are fast and cheap but struggle with the idiosyncratic language found in chat transcripts (see the sketch below). While concealing or masking customers’ personal data in a highly accurate and efficient way might be nearly impossible using standard tools, AI can make the impossible possible.
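
To see why regex-only redaction is brittle on conversational data, consider this small sketch: a simple pattern catches a conventionally formatted phone number but misses the same number written out in words, and the customer’s name is never touched at all.

```python
import re

# A typical quick fix: one pattern for North American phone numbers.
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

messages = [
    "Call me at 416-555-0192, this is Francois.",
    "Call me at four one six, five five five, oh one nine two.",
]

for msg in messages:
    print(PHONE.sub("[PHONE_NUMBER]", msg))

# Call me at [PHONE_NUMBER], this is Francois.  <- name still exposed
# Call me at four one six, five five five, oh one nine two.  <- missed entirely
```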

Data Redaction

Data redaction is the process of obfuscating sensitive data into unintelligible forms. Such data may include credit card numbers, addresses, names, ages, etc. 

The above image is an example of a redacted chatbot conversation. Notice that the unredacted message to the left contains quite a bit of the user’s personal data that is better kept safe. Using Private AI’s technology, we have replaced Francois’ personally identifiable information (name, account number, and payment details) with obscured values, maintaining the privacy of the message sender before sharing with ML teams for customer service agent training, business insight extraction, or a myriad of other tasks.

Synthetic PII Replacement

As the name implies, in the synthetic data replacement process, sensitive data is replaced with contextually-correct fake data. This process offers excellent data security because the data used isn’t actually linked to a real person anymore, yet still contains the critical surrounding context of the conversation for accurate training. This type of data is perfect for training machine learning models such as AI chatbots, as the fake PII within the transcript is indistinguishable from real-life customer PII, leading to more accurate training without risking data privacy due to machine learning model memorization.

In the above image, Private AI’s technology has replaced the name of Francois, the account number, and the payment details with fake values, maintaining the privacy of the customer and the naturalness of the data. 
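
As a rough sketch of the idea (not Private AI’s actual pipeline), placeholders produced during redaction can be filled with contextually plausible fake values, for example using the open-source Faker library. The placeholder labels below mirror those in the demo transcript later in this post.

```python
from faker import Faker  # pip install faker

fake = Faker()

# Map each placeholder type to a generator of contextually plausible fakes.
GENERATORS = {
    "NAME": fake.name,
    "PHONE_NUMBER": fake.phone_number,
    "LOCATION_COUNTRY": fake.country,
}

def fill_synthetic(text: str, placeholders: dict[str, str]) -> str:
    """Swap redaction placeholders (e.g. '[NAME_1]') for fake values."""
    for token, label in placeholders.items():
        text = text.replace(token, GENERATORS[label]())
    return text

redacted = "Hi, I'm [NAME_1] from [LOCATION_COUNTRY_1], call [PHONE_NUMBER_1]."
print(fill_synthetic(redacted, {
    "[NAME_1]": "NAME",
    "[PHONE_NUMBER_1]": "PHONE_NUMBER",
    "[LOCATION_COUNTRY_1]": "LOCATION_COUNTRY",
}))
```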

After performing either data redaction or synthetic replacement, the data is ready to be sent to ChatGPT.
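
As a minimal sketch of what that pipeline might look like in code, the following assumes the OpenAI Python client; the `redact` stub stands in for a real de-identification step (such as Private AI’s API or the sketches above), and the model name is purely illustrative.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def redact(text: str) -> str:
    # Stub standing in for a real de-identification step (e.g. Private AI's
    # API or the pseudonymize() sketch above).
    return text.replace("Francois", "[NAME_1]")

user_message = "Hi, this is Francois. My new vacuum won't start, can you help?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": redact(user_message)}],
)
print(response.choices[0].message.content)
```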

Let’s Try It Live!

What is personally identifiable information within this message?

Hi, I just bought a Miele hardwood floor vacuum but I can’t figure out how to get it to work. I’ve got these hardwood floors, which are famous where I lived in [LOCATION_COUNTRY_1], and I need to take care of them before my mother’s birthday party on [DOB_1]! I paid [MONEY_1] for this! My number’s [PHONE_NUMBER_1] if you’d like to call me so we can get this thing running!

What does it look like with Synthetic PII?

Hi, I just bought a Miele hardwood floor vacuum but I can’t figure out how to get it to work. I’ve got these hardwood floors, which are famous where I lived in Singapore, and I need to take care of them before my mother’s birthday party on july 4! I paid 5000 dollar for this! My number’s 44 6622444445 if you’d like to call me so we can get this thing running!

Does the response change as a result of replacing the PII?

Conclusion

While some of the ethical and legal risks of large language models have yet to be solved, such as their use to create malicious code, there is one problem that can be thoroughly mitigated by every business using these LLMs: protecting personally identifiable information. Replacing PII with synthetic PII often leads to little or no change in output utility and helps ensure compliance with data protection regulations like the GDPR. You can try it out with ChatGPT for yourself in combination with Private AI’s web demo.

Try our award-winning de-identification and synthetic PII solution yourself on your own data.

