Top 7 Differential Privacy Papers for Language Modeling

Differential privacy is a hot topic given the many conflicting opinions on its effectiveness. For some background, we previously wrote a comprehensive post on the Basics of Differential Privacy where we discussed the risks and how it can also enhance natural language understanding (NLU) models. The differential privacy papers in this post are just a … Read more

The Basics of Differential Privacy & Its Applicability to NLU Models

Over the years, large pre-trained language models like BERT and RoBERTa have led to significant improvements in natural language understanding (NLU) tasks. However, these pre-trained models pose a risk of potential data leakage if the training data includes any personally identifiable information (PII). To mitigate this risk, there are two common techniques to preserve privacy: … Read more

ML Model Evaluations & Multimodal Learning

In the previous episode of Private AI’s ML Speaker Series, Patricia Thaine (CEO of Private AI) sat down with Dr. Aida Nematzadeh (Staff Research Scientist at DeepMind) to discuss machine learning models and multimodal learning. Before joining DeepMind, Dr. Nematzadeh was a postdoctoral researcher at UC Berkeley advised by Tom Griffiths and affiliated with the … Read more

Language Modelling via Learning to Rank

In the previous episode of Private AI’s ML Speaker Series, Patricia Thaine (CEO of Private AI) sat down with Arvid Frydenlund (PhD candidate at the University of Toronto in the Computer Science Department and Vector Institute) to discuss his latest paper Language Modelling via Learning to Rank, presented at AAAI-2022 and published at the 6th Workshop on Structured Prediction for … Read more

What Companies Should Know About PII & Protecting It

Personally Identifiable Information (PII) is any data that can be used to identify an individual. This can be done using direct identifiers (name, social security number, etc.) which are unique to an individual, or using quasi-identifiers (date of birth, race, postal code, etc.) which in isolation cannot pinpoint an individual, but in conjunction with multiple … Read more
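To make the quasi-identifier risk concrete, here is a minimal sketch (using entirely hypothetical records, not any real dataset) of how attributes that look harmless on their own can single out an individual when combined:

```python
from collections import Counter

# Hypothetical records: no direct identifiers, only quasi-identifiers.
records = [
    {"dob": "1985-03-12", "race": "Asian", "postal": "M5V"},
    {"dob": "1985-03-12", "race": "Asian", "postal": "M5V"},
    {"dob": "1990-07-01", "race": "White", "postal": "K1A"},
]

# Count how many records share each quasi-identifier combination.
combos = Counter((r["dob"], r["race"], r["postal"]) for r in records)

# A combination that occurs only once pinpoints a single individual.
unique = [combo for combo, count in combos.items() if count == 1]
print(unique)  # [('1990-07-01', 'White', 'K1A')]
```

In this toy example the first two records are indistinguishable, but the third person is uniquely identified by date of birth, race, and postal code together, even though each field alone is shared by many people.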

When the Curious Abandon Honesty: Federated Learning Is Not Private

Previously on Private AI’s speaker series, CEO Patricia Thaine sat down with Franziska Boenisch to discuss her latest paper, ‘When the Curious Abandon Honesty: Federated Learning Is Not Private’. Franziska completed a Master’s degree in Computer Science at Freie Universität Berlin and Technical University Eindhoven. For the past 2.5 years, she has been working at Fraunhofer AISEC as a Research … Read more


Tested on a dataset composed of messy conversational data containing sensitive health information. Download our whitepaper for more details, including our accuracy and F1-score results, or contact us for a copy of the evaluation code.

99.5%+ Accuracy

The figure is based on the number of PII words missed as a fraction of the total number of words (accuracy being one minus this miss rate). It was computed on a 268,000-word internal test dataset comprising data from over 50 different sources, including web scrapes, emails, and ASR transcripts.
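As a rough illustration of the metric described above (a sketch only; the function name and example counts are assumptions, not Private AI's actual evaluation code):

```python
def pii_word_accuracy(missed_pii_words: int, total_words: int) -> float:
    """Word-level accuracy: one minus the fraction of PII words missed
    out of the total number of words in the test set."""
    return 1.0 - missed_pii_words / total_words

# Hypothetical example: 1,000 missed PII words in a 268,000-word test set.
acc = pii_word_accuracy(1_000, 268_000)
print(f"{acc:.2%}")  # 99.63%
```

Any miss count below roughly 1,340 words on a 268,000-word test set would yield an accuracy above the 99.5% threshold quoted.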

Please contact us for a copy of the code used to compute these metrics, try it yourself here, or download our whitepaper.