The American Privacy Rights Act – The Next Generation of Privacy Laws

Kathrin Gardhouse
Sep 19, 2024

For the longest time, the US was a notable outlier in the global trend of developing federal-level comprehensive privacy laws. The nation, (in)famous for its patchwork approach to privacy with its many sector-specific laws and 15 (soon to be 17) state laws that cover privacy protection comprehensively, is now (once again) close to joining the other 137 nations worldwide that have comprehensive privacy laws in place. That's big news in itself, and what is more, the discussion draft text of the American Privacy Rights Act (APRA) contains several nuggets worth talking about, so talking about it we shall!

Who and what is covered

APRA introduces a broad scope of applicability, significantly expanding the range of entities and data types subject to privacy regulations compared to most existing state laws and the European General Data Protection Regulation (GDPR). Unlike many state regulations, APRA includes both businesses and non-profits within its purview, while exempting small businesses unless they engage in selling data or handle information on more than 200,000 individuals for any purpose other than collecting payment for requested services.

The legislation also delineates roles within the data processing ecosystem, distinguishing between "covered entities" and "service providers" (terms reminiscent of the European concepts of controllers and processors), which clarifies responsibilities for data protection. Special attention is given to data brokers and large data holders, who face heightened obligations, such as registering with the Commission and honoring do-not-collect requests (data brokers), providing concise privacy notices limited to 500 words (large data holders), and responding to access, rectification, and deletion requests within 15 days, half the time afforded to other covered entities (both).

Furthermore, APRA sets specific provisions for high-impact social media companies, recognizing the significant influence these platforms have on personal privacy.

In terms of the types of data covered, APRA adopts a comprehensive approach similar to European models, regulating any data that can be linked to specific individuals, including sensitive data. Sensitive data is very broadly defined: it includes data points like biometric details, calendar information, call logs, and online activities. This definition encompasses more data types than typically covered under U.S. state laws, and certainly more than the GDPR includes under "special categories of data," addressing modern privacy concerns such as targeted advertising and the extensive tracking of online behavior. On the other hand, APRA excludes employee data, in contrast to California. It further excludes de-identified data, provided a fairly robust standard is met (a checklist-style sketch in code follows the definition):

DE-IDENTIFIED DATA—The term "de-identified data" means:

  1. Information that cannot reasonably be used to infer or derive the identity of an individual, does not identify and is not linked or reasonably linkable to an individual or a device that identifies or is linked or reasonably linkable to such individual, regardless of whether the information is aggregated, provided that the covered entity or service provider:

    (i) Takes reasonable physical, administrative, or technical measures to ensure that the information cannot, at any point, be used to re-identify any individual or device that identifies or is linked or reasonably linkable to an individual;

    (ii) Publicly commits in a clear and conspicuous manner to:

       (I) Process, retain, or transfer the information solely in a de-identified form without any reasonable means for re-identification; and

       (II) Not attempt to re-identify the information with any individual or device that identifies or is linked or reasonably linkable to an individual; and

    (iii) Contractually obligates any entity that receives the information from the covered entity or service provider to:

        (I) Comply with all of the provisions of this paragraph with respect to the information; and

       (II) Require that such contractual obligations be included in all subsequent instances for which the data may be received.
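Read as a compliance checklist, the definition's conjunctive prongs can be modeled in code. The following is a minimal sketch in which every class, field, and function name is ours and purely illustrative, not anything the statute prescribes:

```python
from dataclasses import dataclass

@dataclass
class DeidentificationControls:
    """Illustrative checklist mirroring the conjunctive prongs of APRA's
    de-identification standard (names are ours, not the statute's)."""
    not_reasonably_linkable: bool         # chapeau: cannot reasonably identify or be linked to an individual
    technical_safeguards_in_place: bool   # prong (i): measures preventing re-identification
    public_commitment_made: bool          # prong (ii): clear and conspicuous no-re-identification pledge
    downstream_contracts_flow_down: bool  # prong (iii): recipients contractually bound to the same obligations

def qualifies_as_deidentified(ctrl: DeidentificationControls) -> bool:
    # Every condition in the draft is joined by "and," so all must hold.
    return (
        ctrl.not_reasonably_linkable
        and ctrl.technical_safeguards_in_place
        and ctrl.public_commitment_made
        and ctrl.downstream_contracts_flow_down
    )
```

The point the sketch makes is that failing any single prong, for example omitting the flow-down contract, disqualifies the data from the de-identified exclusion entirely.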

With regard to its relationship with existing state law, the act generally preempts state privacy laws to create a uniform national standard, but it retains a complex list of exceptions. The California Privacy Protection Agency, for example, did not receive these preemption provisions well. It points out that APRA's data protection standards are in several respects considerably lower than those applicable in California, which, in the agency's view, is not a step in the right direction.

What’s Familiar

Comparing APRA with the GDPR, we note that individual rights to access, correction, deletion, and portability have been incorporated into APRA as well, with the exception of data held exclusively on device. As mentioned, the important distinction between "covered entities" and "service providers" is also European in origin. With this distinction, the primary responsibility always rests with the covered entity, allowing service providers to store the data or process it for the covered entity in a limited way without shouldering the entire responsibility that comes with control over the data. Other concepts such as data minimization and privacy impact assessments are also present in APRA, but with notable differences compared to the GDPR.

What’s New

The biggest difference to most existing laws, and certainly to the GDPR, is that APRA gives effect to the insight that giving individuals control over their data is in many cases more of a burden than a blessing. Under APRA, future regulations are supposed to establish a centralized opt-out mechanism that spares individuals from having to determine, for every website they visit, whether they want their data to be processed for the purposes set out in a privacy policy that no one ever reads anyway. The centralized opt-out mechanism allows individuals to opt out of 1) data transfers to relevant service providers, and 2) targeted advertising.
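The draft leaves the mechanism's design to future regulations, but today's Global Privacy Control (GPC) gives a sense of how a centralized signal could be honored on the receiving end. Below is a minimal sketch assuming a GPC-style Sec-GPC request header; the header exists in the GPC specification, but the web app around it is our own illustration, not anything APRA prescribes:

```python
from flask import Flask, request

app = Flask(__name__)

def user_has_opted_out() -> bool:
    # Global Privacy Control sends "Sec-GPC: 1" when the user has enabled
    # a universal opt-out in their browser; a centralized APRA mechanism
    # could be honored through a similar machine-readable signal.
    return request.headers.get("Sec-GPC") == "1"

@app.route("/content")
def serve_content():
    if user_has_opted_out():
        # Honor the signal: no targeted ads, no transfers to ad partners.
        return {"ads": "contextual", "share_with_ad_partners": False}
    return {"ads": "targeted", "share_with_ad_partners": True}
```

The appeal of this design is that the user states their preference once, in the browser, instead of clicking through a consent banner on every site.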

In furtherance of the goal of making privacy protection less burdensome, section 102 of APRA implements a strict data minimization standard, shifting the burden of data protection in significant respects to the businesses controlling the data. The thought here seems to be this: if data minimization is required by law, individuals do not have to worry about businesses using their data for purposes beyond the narrowly defined ones set out in the act.

Under the APRA draft, businesses are permitted to collect, process, retain, or transfer data to the extent necessary to provide a specific product or service requested, or to communicate with the individual if such communication is reasonably expected in the context of the relationship. Aside from that, there is a list of 17 additional permitted uses. This use restriction cannot be circumvented by obtaining consent for further processing; there is no "these are the permitted uses unless the individual consents to further processing." Otherwise, covered entities would again seek to obtain that consent, possibly by using so-called "dark patterns" such as burying the relevant information in the bowels of their privacy policy and making consenting a lot more convenient than withholding it.
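Operationally, this "closed list" design means a compliance layer can be implemented as a hard allow-list of purposes rather than a per-user consent lookup. A minimal sketch under that reading, with purpose names paraphrased by us rather than taken from the statutory text:

```python
# Closed allow-list: under APRA's data minimization rule, consent cannot
# add purposes beyond those the statute itself permits.
PERMITTED_PURPOSES = {
    "provide_requested_product_or_service",
    "reasonably_expected_communication",
    "security_and_fraud_prevention",   # stand-in for one of the 17 enumerated uses
    # ... remaining enumerated permitted uses ...
}

def may_process(purpose: str, user_consented: bool = False) -> bool:
    # user_consented is deliberately ignored: obtaining consent does not
    # open up processing purposes outside the statutory list.
    return purpose in PERMITTED_PURPOSES

assert may_process("provide_requested_product_or_service")
assert not may_process("resale_to_data_broker", user_consented=True)
```

Contrast this with a GDPR-style consent check, where a recorded consent could legitimize an otherwise unlisted purpose.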

In comparison to the GDPR's legitimate interest provision, which serves as an alternative to consent as a legal basis for processing, the APRA provides more clarity for covered entities: no balancing exercise is required to check whether any interests of individuals outweigh the legitimate interest put forward by the organization. However, a public interest requirement forms part of the last of the permitted uses, namely conducting a public or peer-reviewed scientific, historical, or statistical research project. This permitted use also carries an affirmative consent requirement on top of the public interest determination when sensitive covered data is concerned.

In the first version of the discussion draft, there were considerable issues with targeted advertising as a permitted use. For example, the draft excluded the processing of sensitive data for this purpose, but the definition of sensitive data encompassed much of the information needed to undertake targeted advertising, e.g., "information revealing an individual's online activities over time and across websites or online services." The revised draft now carves out these data elements (except where they pertain to a minor) and permits their use for targeted advertising, clarifying also that an individual's opt-out from targeted advertising trumps the permission granted in the data minimization section.

Another issue that surfaced with regard to the previous APRA discussion draft was that the data minimization provisions seemed to apply to service providers as well. It will often be difficult for service providers to determine whether the data minimization standard is met, as this is in most cases within the control of the covered entity. Consider, for example, a cloud service provider that stores user data for multiple businesses. Under data minimization principles, it should only hold data that is necessary for its function. However, the cloud provider may not always have the information needed to determine what data is essential to the services its clients provide to end users. This ambiguity can lead to retaining more data than necessary or to difficulty in complying with the minimization requirements. In reaction to this concern, the current version clarifies that the data minimization provisions do not apply to service providers.

With many of the concerns about the first discussion draft ironed out, the main criticism of the revised version revolves around the incorporation of the Children and Teens' Online Privacy Protection Act (COPPA 2.0) into APRA. For an overview of the issues brought up by representatives, see Proposed American Privacy Rights Act clears US House subcommittee.

APRA and AI

The discussion draft addresses AI through provisions concerning "covered algorithms." These are defined as computational processes that utilize machine learning, statistical methods, or other data processing or AI techniques to make or assist in making decisions based on personal data. Specifically, the draft outlines the use of these algorithms in functions such as delivering or recommending products or services to individuals, where the data involved can identify or link to an individual.

APRA introduces the option for individuals to opt out of decisions made by such algorithms if the decisions are deemed consequential. Furthermore, entities that handle large volumes of data are required to perform, or have an independent auditor conduct, an impact assessment on the use of these algorithms if the algorithm is used to make a consequential decision. These assessments must include comprehensive details about:

- the algorithm's design and purpose;
- the data used for training, including a description of retraining data;
- the outputs produced, their necessity and proportionality, and their benefits and limitations;
- evaluation metrics and transparency measures;
- post-deployment monitoring and oversight processes; and
- potential harms on the basis of protected characteristics, being a minor, or an individual's political party registration, as well as the mitigation measures taken.
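For teams preparing for such an assessment, the enumerated elements read like a documentation schema. A minimal sketch that captures them as a record follows; the field names are ours, not the statute's:

```python
from dataclasses import dataclass, field

@dataclass
class AlgorithmImpactAssessment:
    """Illustrative record of the elements APRA's draft requires an
    algorithmic impact assessment to document (field names are ours)."""
    design_description: str
    stated_purpose: str
    training_data_description: str
    outputs_produced: str
    necessity_and_proportionality: str
    benefits_and_limitations: str
    retraining_data_description: str
    evaluation_metrics: str
    transparency_measures: str
    post_deployment_monitoring: str
    potential_harms: list[str] = field(default_factory=list)        # e.g., by protected characteristic
    mitigation_measures: list[str] = field(default_factory=list)    # one entry per identified harm
```

Treating the assessment as a structured record rather than free-form prose makes the five-year retention and Congressional-access obligations discussed below easier to operationalize.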

If an independent auditor is engaged to conduct the algorithmic impact assessment, the entity must signal to the National Telecommunications and Information Administration (NTIA) that an impact assessment has been completed. If the entity decides not to engage an independent auditor, the impact assessment itself must be submitted to the NTIA. The NTIA is tasked with reporting on best practices and strategies to mitigate any identified harms, starting three years after the enactment of the law. The original draft had assigned the responsibility to oversee these impact assessments and evaluations to the Federal Trade Commission (FTC), in collaboration with the Secretary of Commerce.

The impact assessment must be retained for five years and made available to Congress upon request; the entity may publish a summary publicly, but this is not mandatory.

Additionally, the draft mandates that developers evaluate algorithm designs before deployment, aiming to safeguard against potential harms like discrimination or adverse effects on access to essential services such as housing or education. The scope is again limited to algorithms making consequential decisions, defined as follows:

The term "consequential decision" means a decision or an offer that determines the eligibility of an individual for, or results in the provision or denial to an individual of, housing, employment, credit opportunities, education enrollment or opportunities, access to places of public accommodation, healthcare, or insurance.

The legislation also encourages entities to adhere to data minimization standards, as discussed above, although the implications of such practices for AI development, particularly in relation to training algorithms with sensitive data, are as yet unclear.

The draft also emphasizes civil rights protections, prohibiting the discriminatory use of data in accessing goods or services. It allows for certain exceptions, such as activities aimed at preventing discrimination or promoting diversity.

Lastly, the FTC is empowered to enforce these provisions, with the ability to initiate rulemaking to clarify the requirements for impact assessments and determine which algorithms pose low enough risks to be exempt from stringent evaluations.

Enforcement

The enforcement of APRA is a cooperative effort between federal and state authorities. Specifically, the FTC, state attorneys general, the chief consumer protection officer of each state, or an authorized officer or office designated by the state are all empowered to enforce the provisions of the Act. Note, however, that under the discussion draft the FTC's general commercial rulemaking authority would be terminated, while the agency retains some rulemaking authority to concretize what is "reasonably necessary" under the data minimization requirements.

Additionally, APRA grants individuals a significant tool in the form of a private right of action, enabling them to initiate lawsuits against entities that violate certain privacy rights, notably excluding the data minimization obligations. This provision allows individuals not only to seek damages but also to obtain injunctive and declaratory relief. The Act also allows for the recovery of reasonable legal and litigation costs, which helps ensure that individuals are not deterred from seeking justice due to financial constraints. Mandatory arbitration cannot be imposed on consumers claiming substantial privacy violations, meaning financial harm of more than $10,000 or certain mental or physical harm, or on individuals under the age of 18.

Timelines

In light of the many new obligations that APRA would bring, its enforcement timeline of 180 days from adoption, with specific deadlines in different areas, is quite short. Compared to the EU AI Act, which gives businesses two years from its coming into force, six months seems particularly ambitious. It remains to be seen how the discussion draft progresses and what changes will be made along the way.

A last word on data minimization

We often hear, and heard again in the context of APRA's attempt to address data protection in the AI context, that data minimization and AI development stand in an unresolvable tension. We at Private AI respectfully disagree. In fact, our solutions aim to resolve exactly this tension: they remove personal identifiers from unstructured data at scale, and they are very good at it. Geared towards developers, we provide easy-to-integrate, AI-powered software to identify and redact over 50 entity types in 53 languages, supporting various file formats. Still skeptical? Try it here on your own data or request an API key.
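To make the de-identification workflow concrete, here is a hedged sketch of calling a text-redaction service over REST. The endpoint URL, headers, and response schema below are placeholders of our own invention, not Private AI's documented API; consult the official docs for the real interface:

```python
import requests

# Placeholder endpoint and schema -- illustrative only, not a documented API.
REDACTION_ENDPOINT = "https://api.example.com/redact-text"
API_KEY = "your-api-key"

def redact(text: str) -> str:
    """Send raw text to a hypothetical redaction service and return the
    text with personal identifiers replaced by entity markers."""
    response = requests.post(
        REDACTION_ENDPOINT,
        headers={"x-api-key": API_KEY},
        json={"text": text},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["redacted_text"]

if __name__ == "__main__":
    # Hypothetical output: "Contact [NAME_1] at [EMAIL_1]"
    print(redact("Contact Jane Doe at jane@example.com"))
```

Running redaction like this before data enters a training pipeline is one way to reconcile data minimization with AI development: the model never sees the identifiers in the first place.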
