Part 1: Homomorphic Encryption for Beginners

Dec 26, 2018

This article goes over the basics of homomorphic encryption, gives a brief overview of the open source homomorphic encryption libraries that are currently available, and finishes with a tutorial on how to use one of those libraries (namely, PALISADE). For this guide, I am assuming a background in basic cryptography, abstract algebra, and C++.

A Brief Introduction to Homomorphic Encryption

The concept of homomorphic encryption was introduced in 1978 by Ronald Rivest and Leonard Adleman, the R and the A in RSA encryption.

The most popular example for the use of homomorphic encryption is where a data owner wants to send data up to the cloud for processing, but does not trust a service provider with their data. Using a homomorphic encryption scheme, the data owner encrypts their data and sends it to the server. The server performs the relevant computations on the data without ever decrypting it and sends the encrypted results to the data owner. The data owner is the only one able to decrypt the results, since they alone have the secret key.

So far, the most efficient homomorphic encryption scheme for performing the same operations on multiple ciphertexts at once is the Brakerski-Gentry-Vaikuntanathan (BGV) scheme.

The Brakerski/Fan-Vercauteren (BFV) and the Cheon-Kim-Kim-Song (CKKS) schemes share second place for efficiency; BGV is, however, more difficult to use. These are all lattice-based cryptographic schemes whose security rests on the hardness of the Ring Learning With Errors (RLWE) problem, which (at least for now) means that they are quantum-safe. For an excellent introduction to lattices and the hard problem that RLWE is based on (namely, the Shortest Vector Problem), see these two lectures by MIT Professor Vinod Vaikuntanathan:

BGV, BFV, and CKKS are additively and multiplicatively homomorphic (i.e., you can perform both addition and multiplication within the encrypted domain). No division. No exponentiating a number by an encrypted one. No non-polynomial operations. In BGV and BFV, computations can only be performed on integers. In CKKS, computations can be performed on complex numbers with limited precision.

A catch is that you cannot perform unlimited computations within the encrypted domain without running into two issues:

Issue 1

In BGV and BFV, you have to keep track of what is called the plaintext modulus p. Simply put, imagine you have p = 9 and that you compute enc(4) + enc(8) = enc(3) (mod 9). When you decrypt enc(3), if you weren’t careful, you would have no way of knowing whether the actual result was really meant to be 3, 12, 21, …
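As a toy illustration of the wraparound (plain C++, no encryption involved), any two candidate results that differ by a multiple of p are indistinguishable after reduction:

```cpp
#include <iostream>

int main() {
  const int p = 9;                    // plaintext modulus
  int result = (4 + 8) % p;           // the value you would see after decryption
  std::cout << result << std::endl;   // prints 3, not 12

  // 3, 12, and 21 all reduce to the same residue modulo 9
  std::cout << (3 % p) << " " << (12 % p) << " " << (21 % p) << std::endl;
  return 0;
}
```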

Issue 2

In both schemes, you have to make sure you don’t go over the noise threshold, above which your values will no longer make any sense. The hardness of RLWE is based on adding a small amount of error to a point in a lattice, such that it is difficult to determine which point the error was originally added to. For more on noise growth in CKKS, see pages 12 and 22 of the CKKS paper. Information about noise growth in BFV is scattered throughout Fan and Vercauteren. The reason BGV is more difficult to use than BFV and CKKS is that the noise levels need to be kept track of at every single step of the algorithm.

Noise can be dealt with by using bootstrapping

Famously introduced by Craig Gentry, this was the very first method enabling unlimited computations to be made in the encrypted domain. Homomorphic encryption without an upper bound on the number of computations that can be performed is called fully homomorphic encryption (FHE), as opposed to somewhat homomorphic encryption (SHE). If anyone requests it, I can explain bootstrapping in another post.

Since bootstrapping is an expensive operation, it is best to avoid using it unless you really need to. What BFV and CKKS use to allow for multiple ciphertext multiplications to be performed is the scale-invariant error-reduction technique explained in Brakerski.

While additions of ciphertexts can be performed almost indiscriminately (within reason), multiplying two ciphertexts together increases noise the most. For Brakerski’s scale-invariant method to work, you need to already know how many multiplications you will perform when you are generating the cryptographic parameters. Whenever possible, you should choose to multiply ciphertexts with plaintexts or add ciphertexts with plaintexts, as operations with plaintexts hardly affect the noise growth.


Dealing with multiple ciphertexts

If you are planning on performing the same operations on multiple ciphertexts, you should consider using the Single-Instruction-Multiple-Data (SIMD) method, based on the Chinese Remainder Theorem, described in Fully Homomorphic Encryption with Polylog Overhead, as well as Smart and Vercauteren’s Fully Homomorphic SIMD Operations. This optimization should be implemented in any homomorphic encryption library worth its salt.

Phew… those are most of the constraints within which you will have to work in order to write code using HE. For a really clear description of the BFV scheme, please see Albrecht et al.

Homomorphic Encryption Libraries

There are four open source homomorphic encryption libraries that I’ve heard good things about:

  1. PALISADE
  2. HElib
  3. HEAAN
  4. TFHE

Of these four, I’ve personally used PALISADE and HElib. PALISADE implements the BFV scheme, while HElib implements BGV and CKKS.

PALISADE is a DARPA and IARPA-funded project (previously NSA-funded too). It was developed and is supported by a superbly talented team at the New Jersey Institute of Technology (Gerard Ryan, Professor Yuriy Polyakov, and Professor Kurt Rohloff).

HElib was developed by Shai Halevi and Victor Shoup, both esteemed figures in the cryptographic community. The library does, however, have less support than PALISADE does.

TFHE implements a scheme that is faster than BGV for individual operations, but which cannot make use of the SIMD method mentioned above.

PALISADE and HElib can be freely used within commercial applications, with PALISADE distributed under an NJIT license and HElib under the Apache v2.0 license.


A Tutorial on PALISADE

First, clone the repo, and then set up the environment in Linux (Windows is no longer supported). The best way to learn how to use any of the homomorphic encryption libraries is to look at the examples (see PALISADE/src/pke/demo). For PALISADE, the scheme you want to use is called BFVrns. BFV and BGV are not as efficient, and the LTV scheme is not secure.

We will be generating a public key cryptosystem that allows for addition and multiplication within the encrypted domain.

To generate the encryption parameters and save them, you can use the following code, which you can essentially find within the PALISADE demos:

[gist https://gist.github.com/pathaine/5c64769455c0d9d5dd3bf49850293ae0#file-p1-cpp /]
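In outline, the parameter generation looks roughly like the following sketch, modeled on PALISADE’s simple-integers demo. The exact factory signature and the parameter values (plaintext modulus, sigma, multiplicative depth) vary across PALISADE versions, so treat them as illustrative assumptions rather than the values used in the gist.

```cpp
#include "palisade.h"

using namespace lbcrypto;

int main() {
  // Illustrative parameters (assumptions, not the gist's exact values)
  int plaintextModulus = 65537;                     // p: large enough for your expected results
  double sigma = 3.2;                               // standard deviation of the error distribution
  SecurityLevel securityLevel = HEStd_128_classic;  // target 128-bit classical security
  uint32_t multDepth = 2;                           // number of ciphertext multiplications planned

  // Generate a BFVrns crypto context from the parameters above
  CryptoContext<DCRTPoly> cryptoContext =
      CryptoContextFactory<DCRTPoly>::genCryptoContextBFVrns(
          plaintextModulus, securityLevel, sigma, 0, multDepth, 0, OPTIMIZED);

  // Enable the features we need: encryption plus homomorphic (SHE) operations
  cryptoContext->Enable(ENCRYPTION);
  cryptoContext->Enable(SHE);

  return 0;
}
```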

Note that the larger the plaintext modulus, the longer the calculations will take within the encrypted domain.

You can print out the parameters that were generated:

[gist https://gist.github.com/pathaine/cee5612b22a9d5e4c1d906ec5eafbe7d /]
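In rough outline, and assuming the cryptoContext from the previous sketch along with standard PALISADE accessors, the printout might look like this:

```cpp
// Continues inside main() from the previous sketch (requires <iostream>)
std::cout << "Plaintext modulus p:  "
          << cryptoContext->GetCryptoParameters()->GetPlaintextModulus() << std::endl;
std::cout << "Ring dimension n:     "
          << cryptoContext->GetRingDimension() << std::endl;
std::cout << "Ciphertext modulus q: "
          << cryptoContext->GetCryptoParameters()->GetElementParams()->GetModulus() << std::endl;
```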

You can determine how secure your parameters are by comparing them to the ones suggested in the Homomorphic Encryption Standard. The parameters generated by the code above give us 128-bit security, meaning that it would take roughly 2¹²⁸ computations in order to crack the secret key.

Next, you will need to generate keys for evaluating addition and multiplication operations:

[gist https://gist.github.com/pathaine/0c73c8b4904f59281c6977a3296711a0 /]
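In outline, this step is a key-pair generation followed by generating the evaluation keys; the EvalSum keys below are an assumption, included because we will later sum the slots of a packed ciphertext for the inner product:

```cpp
// Generate the public/secret key pair
LPKeyPair<DCRTPoly> keyPair = cryptoContext->KeyGen();

// Relinearization (evaluation) key needed for ciphertext-by-ciphertext multiplication
cryptoContext->EvalMultKeyGen(keyPair.secretKey);

// Rotation/summation keys needed to add up the slots of a packed ciphertext
cryptoContext->EvalSumKeyGen(keyPair.secretKey);
```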

Whenever possible, you will want to use the SIMD method mentioned earlier. In PALISADE, you can do so by creating a vector with the numbers you want to operate on, packing those numbers into a Plaintext, and encrypting that Plaintext:

[gist https://gist.github.com/pathaine/392c0542738601f0f36a173efa7c537a /]
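Roughly, the packing and encryption look like the sketch below. The vectors are chosen to match the polynomials discussed next, and the names PT1, PT2, CT1, and CT2 are illustrative assumptions:

```cpp
// Vectors of integers to pack into the SIMD slots
std::vector<int64_t> v1 = {5, 2, 6, 4, 0};
std::vector<int64_t> v2 = {2, 5, 3, 6, 1};

// Pack each vector into a Plaintext (one value per slot)
Plaintext PT1 = cryptoContext->MakePackedPlaintext(v1);
Plaintext PT2 = cryptoContext->MakePackedPlaintext(v2);

// Encrypt the packed plaintexts under the public key
auto CT1 = cryptoContext->Encrypt(keyPair.publicKey, PT1);
auto CT2 = cryptoContext->Encrypt(keyPair.publicKey, PT2);
```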

Recall that we are working with a polynomial ring.

CT1 should be interpreted as the encrypted coefficients of the polynomial 0x⁴ + 4x³ + 6x² + 2x + 5 and CT2 as the encrypted coefficients of 1x⁴ + 6x³ + 3x² + 5x + 2. The sum of these two ciphertexts can be seen as the sum of two polynomials within the encrypted domain, resulting in 1x⁴ + 10x³ + 9x² + 7x + 7. Their component-wise multiplication, on the other hand, results in 0x⁴ + 24x³ + 18x² + 10x + 10.

Had we used MakeCoeffPackedPlaintext(v1) and MakeCoeffPackedPlaintext(v2) instead of MakePackedPlaintext(v1) and MakePackedPlaintext(v2), multiplying CT1 by CT2 would have resulted in a ciphertext representing the coefficients of 4x⁷ + 30x⁶ + 50x⁵ + 55x⁴ + 74x³ + 37x² + 29x + 10.
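For reference, a brief sketch of that coefficient-packed variant, under the same assumptions as the snippets above (only the encoding call changes):

```cpp
// Coefficient packing: the vector entries become polynomial coefficients,
// so EvalMult performs a polynomial multiplication rather than a slot-wise one
Plaintext PT1c = cryptoContext->MakeCoeffPackedPlaintext(v1);
Plaintext PT2c = cryptoContext->MakeCoeffPackedPlaintext(v2);

auto CT1c = cryptoContext->Encrypt(keyPair.publicKey, PT1c);
auto CT2c = cryptoContext->Encrypt(keyPair.publicKey, PT2c);

// Encrypted coefficients of 4x⁷ + 30x⁶ + 50x⁵ + 55x⁴ + 74x³ + 37x² + 29x + 10
auto ctPolyProduct = cryptoContext->EvalMult(CT1c, CT2c);
```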

To decrypt and see the result of the inner product:

[gist https://gist.github.com/pathaine/b0b50e5e82b4222e0abe661daba80712 /]
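In outline, the evaluation and decryption might look like the following sketch. Computing the inner product as a component-wise EvalMult followed by EvalSum over the packed slots is an assumption about how the gist does it, and batchSize must cover the five slots we packed:

```cpp
// Component-wise (SIMD) addition and multiplication in the encrypted domain
auto ctAdd  = cryptoContext->EvalAdd(CT1, CT2);
auto ctMult = cryptoContext->EvalMult(CT1, CT2);

// Inner product: sum the slots of the component-wise product
// (batchSize is an assumption; a power of two covering our 5 slots)
uint32_t batchSize = 8;
auto ctInner = cryptoContext->EvalSum(ctMult, batchSize);

// Decrypt the results with the secret key
Plaintext ptAdd, ptMult, ptInner;
cryptoContext->Decrypt(keyPair.secretKey, ctAdd, &ptAdd);
cryptoContext->Decrypt(keyPair.secretKey, ctMult, &ptMult);
cryptoContext->Decrypt(keyPair.secretKey, ctInner, &ptInner);

// Trim the unused slots before printing
ptAdd->SetLength(v1.size());
ptMult->SetLength(v1.size());

std::cout << "Sum:           " << ptAdd << std::endl;   // {7, 7, 9, 10, 1}
std::cout << "Product:       " << ptMult << std::endl;  // {10, 10, 18, 24, 0}
std::cout << "Inner product: " << ptInner << std::endl; // 62 appears in the summed slots
```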

For more information, read Part 2 of ‘Homomorphic Encryption for Beginners’, where we deep-dive into practical applications of privacy-preserving signal processing.

Acknowledgements

I am tremendously grateful to the PALISADE team for their attentiveness to questions about their library and to Dr. Shai Halevi.

Appendix

If you plan on reusing your keys, you can serialize them and store them in text files:

[gist https://gist.github.com/pathaine/2c24101799686286a91ab8a6fcfcc71b /]
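As a rough sketch, more recent PALISADE releases expose a Serial API for this, along the lines of the library’s serialization examples; the file names and the choice of JSON output are assumptions:

```cpp
// Continues from the earlier sketches. Also requires <fstream> and the
// serialization headers used in the PALISADE examples, e.g.
// "cryptocontext-ser.h", "pubkeylp-ser.h", and "scheme/bfvrns/bfvrns-ser.h".

// Write the crypto context and keys out as JSON text files
Serial::SerializeToFile("cryptocontext.txt", cryptoContext, SerType::JSON);
Serial::SerializeToFile("key-public.txt", keyPair.publicKey, SerType::JSON);
Serial::SerializeToFile("key-secret.txt", keyPair.secretKey, SerType::JSON);

// The relinearization (EvalMult) keys are serialized through the context class
std::ofstream emKeyOut("key-eval-mult.txt", std::ios::out);
CryptoContextImpl<DCRTPoly>::SerializeEvalMultKey(emKeyOut, SerType::JSON);
emKeyOut.close();
```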

You can retrieve all of your keys, parameters, and cryptocontext as follows:

[gist https://gist.github.com/pathaine/6996fab77ccbf1eaddc43ed66fda7a2c /]
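And, under the same assumptions about file names and the Serial API, loading everything back:

```cpp
// Deserialize the crypto context and the key pair
CryptoContext<DCRTPoly> cc;
Serial::DeserializeFromFile("cryptocontext.txt", cc, SerType::JSON);

LPPublicKey<DCRTPoly> pk;
Serial::DeserializeFromFile("key-public.txt", pk, SerType::JSON);

LPPrivateKey<DCRTPoly> sk;
Serial::DeserializeFromFile("key-secret.txt", sk, SerType::JSON);

// Reload the EvalMult keys so ciphertext multiplications work again
std::ifstream emKeyIn("key-eval-mult.txt", std::ios::in);
CryptoContextImpl<DCRTPoly>::DeserializeEvalMultKey(emKeyIn, SerType::JSON);
emKeyIn.close();
```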
