In the US, there is, as yet, no comprehensive federal privacy legislation, but one federal agency, the Federal Trade Commission (FTC), is at the forefront of notable enforcement actions. In the absence of comprehensive federal privacy law, the FTC bases its privacy enforcement actions either on narrowly applicable privacy laws, such as the Children’s Online Privacy Protection Act Rule (COPPA Rule) or the Health Insurance Portability and Accountability Act (HIPAA), or on laws with a broader scope that include privacy protections tangentially, such as the FTC Act, which empowers the FTC to take action against deceptive or unfair trade practices. This includes addressing deceptive practices related to privacy policies and data protection. The sharpest tool in the FTC’s enforcement toolkit is probably its ability to require the deletion of models and algorithms trained on data collected or used in violation of legal obligations.
The FTC also focuses on educating and raising the awareness of AI companies and private individuals about their obligations and rights, respectively. Most recently, it published an article titled “AI Companies: Uphold Your Privacy and Confidentiality Commitments,” in which it reminds companies that it has the mandate and power to hold AI companies accountable for the claims they make with regard to their products, be that in their advertisements, privacy statements, or Terms of Service.
Privacy Violation in the AI Context
The most important privacy considerations in the context of AI companies are:
- Using personally identifiable information (PII) for purposes other than those for which the PII was collected, i.e., to train an AI model, in contradiction to the commitment made to the individual when they agreed to provide their data;
- Failing to disclose such additional purposes of use altogether; and
- Hiding the relevant disclosure behind hyperlinks or in non-comprehensive fine print.
The FTC can pursue AI companies for such practices by couching these privacy-related misrepresentations and omissions in terms of unfair competition. Consumer trust matters, and privacy plays an increasingly important role in earning it. If companies develop their AI solutions under the pretense of responsibly handling or not using PII, they gain an unfair competitive advantage over companies that disclose their use of PII, or that put in the additional effort to ensure that they don’t use it.
Privacy Protection in the AI Context
An obvious way to avoid violating competition, consumer protection, and antitrust laws is to make the proper disclosures and obtain consent for the use of PII for model training purposes. Where the use case allows for it, or where proper consent is onerous to obtain, another possibility is to filter the PII out of the data set before it is used to train the model.
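To illustrate the idea, here is a minimal, hypothetical sketch of that filtering step in Python. It uses a handful of regex patterns for a few common PII types; the pattern names and functions are invented for this example, and real-world PII detection (including ML-based services) covers far more entity types and edge cases than simple regexes can.

```python
import re

# Hypothetical, illustrative patterns only: real PII detection needs
# ML-based models covering many more entity types; regexes like these
# will miss many real-world cases.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace each matched PII span with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def filter_training_data(records: list[str]) -> list[str]:
    """Redact PII from every record before it reaches model training."""
    return [redact_pii(record) for record in records]
```

For example, `redact_pii("Contact jane@example.com")` replaces the address with the `[EMAIL]` placeholder, so the downstream training set never contains it. Redacting rather than dropping whole records preserves the surrounding context the model can still learn from.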
Here, Private AI can help. With its ability to identify and redact more than 50 entity types of PII, Private AI is well equipped to help with the difficult task of reliably removing PII from data sets at scale. To see the tech in action, try our web demo, or get a free API key to try it yourself on your own data.