Over the years, large pre-trained language models like BERT and RoBERTa have led to significant improvements in natural language understanding (NLU) tasks. However, these pre-trained models pose a risk of data leakage if the training data includes any personally identifiable information (PII). To mitigate this risk, two techniques are commonly used to preserve privacy: differential privacy (DP) and data sanitization. This post will focus on the former and outline the basics of differential privacy, its notation, its privacy benefits, and its utility trade-offs.
What is Differential Privacy?
Differential privacy allows for statistical information to be inferred from a collection of data without compromising the privacy of individuals, while quantifying the degree to which private data is protected. It helps protect the privacy of individuals by limiting the amount of specific information that could be learned about a single person within a dataset.
For example, suppose company X builds a predictive model to identify what consumers are most likely to buy based on their purchase history. A differentially private algorithm adds noise to the gradients during the model training phase so that the private information of any one user isn't heavily memorized by the model, while limiting the negative effect on model accuracy.
Use case: Reducing Privacy Risks in Large Language Models
Large language models are vulnerable to various inference attacks. However, applying differential privacy while training language models can significantly reduce this risk, and a growing body of research documents its benefits for privacy-sensitive NLU models. It is important for businesses and data teams to be aware of privacy risks, because data breaches can result in losses of millions of dollars.
To better understand differential privacy and how it quantifies the protection of data, consider two databases, x and y, that differ in only a single record, and a randomized algorithm M that takes a database and outputs a value. The algorithm M is differentially private if the outputs M(x) and M(y) are almost indistinguishable from each other, regardless of the choice of x and y.
Formal differential privacy definition.
Source: The Algorithmic Foundations of Differential Privacy
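For reference, the standard definition from that source can be written as follows: a randomized algorithm M is (ε, δ)-differentially private if, for all pairs of databases x and y differing in a single record and for every set of outputs S,

$$\Pr[M(x) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(y) \in S] + \delta.$$

Smaller ε (and δ) means the two output distributions are closer together, and hence a single record has less influence on what an observer can learn.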
In practice, M(x) is usually composed of f(x), a deterministic transformation of x, plus a noise value drawn from a random distribution. In the Laplace mechanism, M(x) = f(x) + η, where η is sampled from a Laplace distribution whose scale is calibrated to the sensitivity of f divided by ε.
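As a minimal sketch of the Laplace mechanism (the function name and parameters here are illustrative, not from any particular library):

```python
import numpy as np

def laplace_mechanism(f_x, sensitivity, epsilon, rng=None):
    """Release f(x) with Laplace noise of scale sensitivity / epsilon."""
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return f_x + rng.laplace(loc=0.0, scale=scale)

# Example: privately release a count query. Its sensitivity is 1, since
# adding or removing one record changes the count by at most 1.
true_count = 42
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
```

Lower ε means a larger noise scale, so each released value is noisier but reveals less about any single record.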
Applying Differential Privacy to Deep Learning & Synthesizing Production Workflows
You can apply differential privacy to train machine learning algorithms by integrating differentially private stochastic gradient descent (DP-SGD).
Stochastic gradient descent (SGD) is an optimization algorithm used in machine learning to find model parameters that produce the best fit between actual outputs and predicted outputs. Gradient descent and its variants such as batch, stochastic, and mini-batch are commonly used to optimize neural networks. DP-SGD is a variant of SGD that preserves (ε, δ)-differential privacy while optimizing the neural network.
Source: Deep Learning with Differential Privacy
The major differences from SGD are highlighted with green and blue boxes in the image above: the gradients in each batch are clipped so that each gradient norm is at most C, and Gaussian noise is then added to the sum of the clipped gradients.
Intuitively, clipping individual gradients ensures that each example has limited influence on the parameter update, whereas adding noise to the gradients prevents examples from being traced.
These techniques used to integrate differential privacy into deep learning models have been adapted to NLU use cases, some of which we cover in this article on The Privacy Risk of Language Models.
Differential privacy can be a useful tool for preserving privacy in machine learning models. Although it has drawbacks (which we will cover in another post), such as degraded performance and residual vulnerability to inference attacks, it can still enable a model to learn from the personal information of large groups of individuals without memorizing any individual's rarely occurring information.
Unsure whether your NLU models are privacy-preserving? Book a call with one of our privacy implementation experts to learn more.