The Data Conundrum: Navigating Healthcare Privacy Legislation in Canada


The Canadian healthcare and health tech space is robust and growing rapidly. Globally, health tech, especially in the AI field, is evolving much faster than privacy legislation can keep pace. With some of the most sensitive data at hand, this has become a complex space for many data-driven companies to navigate within Canada's privacy legislative landscape. On important privacy-centric practices such as de-identification, data minimization, and data sharing, the legislation is not all that transparent.

The privacy protection of personal information (PI) and personal health information (PHI) is prioritized unequivocally across Canada at the federal, provincial, and territorial levels. Across the 10 provinces, 3 territories, and the federal government, a plethora of privacy and health privacy legislation exists. Some Canadian provinces (Ontario, Nova Scotia, New Brunswick, and Newfoundland & Labrador) have health privacy legislation deemed substantially similar to the federal privacy law, the Personal Information Protection and Electronic Documents Act (PIPEDA). Healthcare providers and organizations working in these provinces are exempt from PIPEDA and must comply with their respective provincial health privacy legislation. Health privacy legislation in the remaining provinces and territories has not been declared substantially similar to PIPEDA, hence PIPEDA may apply.

Since PIPEDA came fully into effect in 2004, it has not been amended to reflect rapid technological advancements and the exchange of big data. The federal government plans to repeal PIPEDA by enacting the proposed federal Consumer Privacy Protection Act (CPPA), one of the three proposed acts under Bill C-27 (the Digital Charter Implementation Act, 2022). The other two are the Personal Information and Data Protection Tribunal Act (PIDPTA) and the Artificial Intelligence and Data Act (AIDA).

There are many common themes in health and private sector privacy legislation nationwide that have notable impacts on the collection, use, processing, retention, and disclosure of PI and PHI. For the purposes of this blog, the three areas of broad discussion are de-identified data (PI and PHI), data minimization, and data sharing with third-party service providers.

De-Identified Data

The definition of de-identified PI varies from region to region. The definition of “de-identify” (in scope for this blog) will be discussed within the context of PIPEDA, the CPPA, Ontario’s health privacy legislation, the Personal Health Information Protection Act (PHIPA), and Quebec’s Bill 64.


PIPEDA applies to the collection, use, and disclosure of all PI. It applies to any federal body, along with provinces and territories that do not have substantially similar legislation. PIPEDA does not expressly address the concept of de-identification of PI; rather, it is implied via references to anonymizing data, which gives rise to issues around whether express or implied consent is required to generate de-identified data from PI.


The CPPA, by contrast, defines the term expressly:

“de-identify means to modify personal information so that an individual cannot be directly identified from it, though a risk of the individual being identified remains.”

Among the exceptions to consent, the CPPA (section 20) states that knowledge and consent are not required to de-identify personal information:

“An organization may use an individual’s personal information without their knowledge or consent to de-identify the information.”

Regarding the measures that an organization must take, the CPPA (section 74) provides that:

“An organization that de-identifies personal information must ensure that any technical and administrative measures applied to the information are proportionate to the purpose for which the information is de-identified and the sensitivity of the personal information.” 


Ontario’s PHIPA defines the term as follows:

“de-identify, in relation to the personal health information of an individual, means to remove any information that identifies the individual or for which it is reasonably foreseeable in the circumstances that it could be utilized, either alone or with other information, to identify the individual, and “de-identification” has a corresponding meaning;”

Under PIPEDA, the CPPA, and PHIPA, there is consensus that if PI (information about an identifiable individual) is not identifiable from a de-identified dataset, then that dataset is not considered PI or PHI and is therefore not regulated by PIPEDA, PHIPA, or, eventually, the proposed CPPA. Furthermore, it is generally understood that once a dataset is de-identified, an organization may collect, use, and disclose that de-identified dataset without any notice or consent, subject to a prohibition on re-identification.

Quebec’s Bill 64 provides that:

“… personal information is de-identified if it no longer allows the person concerned to be directly identified.”

With regard to anonymization and the measures that an organization must follow, Bill 64 provides that:

“…information concerning a natural person is anonymized if it is, at all times, reasonably foreseeable in the circumstances that it irreversibly no longer allows the person to be identified directly or indirectly. Information anonymized under this Act must be anonymized according to generally accepted best practices and according to the criteria and terms determined by regulation.”

Under Bill 64 (similar to the CPPA and PHIPA), PI is considered de-identified once direct identifiers have been removed. Like the CPPA, Bill 64 also addresses anonymization in relation to the irreversible de-identification of PI via the removal of direct and indirect identifiers.
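The distinction above can be illustrated with a minimal, hypothetical sketch. The field names and thresholds here are illustrative assumptions, not drawn from any statute; real de-identification requires a formal re-identification risk assessment, not just field removal.

```python
# Hypothetical direct identifiers (remove entirely) vs. indirect
# identifiers (coarsen to reduce re-identification risk).
DIRECT_IDENTIFIERS = {"name", "email", "health_card_number"}

def de_identify(record: dict) -> dict:
    """Remove direct identifiers only (Bill 64's 'de-identified' threshold);
    re-identification risk from indirect identifiers may remain."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

def generalize(record: dict) -> dict:
    """Additionally coarsen indirect identifiers, a step toward (but no
    guarantee of) irreversible anonymization."""
    out = de_identify(record)
    if "postal_code" in out:
        out["postal_code"] = out["postal_code"][:3]  # forward sortation area only
    if "date_of_birth" in out:
        out["date_of_birth"] = out["date_of_birth"][:4]  # year only
    return out

record = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "health_card_number": "1234-567-890",
    "postal_code": "M5V 2T6",
    "date_of_birth": "1980-06-15",
    "diagnosis": "hypertension",
}
print(de_identify(record))  # direct identifiers gone; indirect ones intact
print(generalize(record))   # indirect identifiers coarsened as well
```

Note that even the second function only lowers risk: whether the result is truly "irreversibly" anonymized under Bill 64 depends on context and on the generally accepted best practices the law references.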

Data Minimization

In Canada, as is the case globally, organizations must have legitimate business objectives prior to collecting PI. If collection is pursued, it is incumbent upon the organization to securely retain, use, and dispose of the information in compliance with legislative requirements. As part of their compliance efforts with legislation and best practices, organizations should develop and implement policies and procedures to minimize the personal information they collect, use, and retain. PIPEDA (unlike other similar legislation) addresses the concept of data minimization directly by way of its Fair Information Principles:

Principle 4: Limiting Collection

“The collection of personal information must be limited to that which is needed for the purposes identified by the organization. Information must be collected by fair and lawful means.”

Principle 5: Limiting Use, Disclosure, and Retention

“Unless the individual consents otherwise or it is required by law, personal information can only be used or disclosed for the purposes for which it was collected. Personal information must only be kept as long as required to serve those purposes.”

Third-party service providers

Unlike the GDPR, which provides specific requirements for the exchange of PI between a Data Controller and a Data Processor, the CPPA addresses this issue only generally. As the CPPA and its sister acts undergo second reading, it is thus far unclear whether the act would permit third-party service providers to de-identify PI and subsequently use the de-identified PI for their own purposes. Current privacy legislation touches on this issue only at a high level, if it is not silent altogether.

One overarching understanding is that consent could be obtained from the organization providing the de-identified PHI or PI (via service contracts) or from the individuals to whom the de-identified personal information pertains. It would be prudent for the source organization (the one providing the personal information) to require that service contracts clearly delineate each party's obligations concerning the protection measures that must be undertaken to safeguard the personal information within their purview.

Final Thoughts

In its current state, Canadian health privacy legislation lacks transparency and clear direction on handling PI and PHI via de-identification, data minimization, and data-sharing practices. However, the pressure is on legislatures to produce more robust and succinct laws reflecting the complex and evolving technological ecosystem.

About the Author: A passionate Privacy and Security Evangelist, Saima Fancy has a professional background that spans disciplines ranging from Data Privacy and Security to Engineering, Health Law, and Health Policy. She leverages her diverse experiences and perspectives and funnels them into the interdisciplinary field of Privacy Engineering. Most recently, she was a Privacy Engineer at Twitter on the FlightWatch Team. In her free time, Saima volunteers on the Faculty Council at the University of Toronto Faculty of Engineering and with the NIST Privacy Workforce Public Working Group (Risk Assessment), and has presented at IAPP and (ISC)2 events, among others.
