The evolving landscape of data privacy legislation in healthcare in Germany

The healthcare sector has witnessed a remarkable evolution in data privacy legislation from the 1970s to the present, mirroring the technological innovations of the time. The journey we trace here, using Germany as an example, illustrates how laws have struggled to adapt so as to both protect sensitive health information and keep it usable against the backdrop of digital transformation and Artificial Intelligence (AI) integration.

We find that, to this day, clarity regarding the privacy-preserving techniques of pseudonymization and anonymization is lacking, and that consent requirements for the collection and use of health data are in tension with the broad use of electronic health records, particularly for care innovation research and personalized offers by insurers.

The 1970s and 80s: The Dawn of Data Privacy in Healthcare

The 1970s and 80s did not see specific healthcare data privacy laws in the US or Europe directly addressing digital records; instead, this era laid the groundwork for future legislation through general data protection laws concerning personal data processing. 

For example, Hessen was the first German Bundesland to enact a data protection law in 1970, and the Bundesdatenschutzgesetz (BDSG), Germany’s Federal Data Protection Act, was initially enacted in 1977. Health information finds no special mention in these laws. It is, however, the professional obligation of health practitioners to protect health-related information; violations have been penalized under the German criminal code since 1871.

Other, mostly European, countries followed with their own data protection laws. As a result, diverging data protection laws emerged, making cross-border data transfers difficult. The Council of Europe took coordinated action and, in 1981, opened the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (Convention 108) for signature to European member states as well as countries outside of Europe. The intended international coordination was not successful, though, due to diverging approaches to implementing the Convention into national law.

The 1990s – From Europe’s Directives to the GDPR

The EU’s first comprehensive data protection legislation was Directive 95/46/EC, adopted in 1995 and effective from October 24, 1998. However, the Directive, too, had to be implemented into national law, which effectively resulted in similar divergences among states as those following Convention 108. The European Commission’s power to initiate infringement proceedings for incorrect implementation could not fully address the issue, and heightened costs and bureaucracy ensued for businesses engaging in cross-border data transfers.

Effective from May 25, 2018, the General Data Protection Regulation (GDPR) replaced Directive 95/46/EC, enhancing privacy protections and harmonizing data privacy laws across Europe. As a regulation, the GDPR is directly applicable in the member states, avoiding many concerns regarding inconsistencies. On the other hand, the GDPR still allows for flexibility in areas where national particularities are considered important as they give effect to different cultural and social values and approaches to data protection. The most prevalent example is the protection of healthcare data. 

Art. 9 of the GDPR lists health information, including biometric and genetic data, as a ‘special category’ of personal data. Processing of special categories of data is prohibited unless limited exceptions apply. Examples of exceptions to the processing prohibition are the consent of the individual, the necessity of processing for vital interests of the individual, and public health interests.

Art. 9(4) GDPR allows EU member states to maintain or introduce further conditions, including limitations, with regard to the processing of genetic data, biometric data or data concerning health. Hence, it is possible for member states to disallow the processing of health information even if the individual consents. In other words, with regard to health information, the GDPR sets out merely the minimum protection health information receives in the EU, allowing member states to enact more stringent protection.

Within this broad framework, EU member states face the practical challenge of making sensitive health data usable for research and innovation, for example in AI applications, while meeting the data protection requirements the GDPR mandates.

Germany

Germany’s Federal Data Protection Act underwent significant amendments to align with the GDPR by May 25, 2018. With regard to healthcare and other special categories of data, Germany took advantage of the flexibility provided for in the GDPR. For details on how Germany regulates special categories of data, see our blog post Processing of Special Categories of Data in Germany.

Even prior to the GDPR, there has been, and continues to be, a strong push, accompanied by a heated debate, around the digitization of health data and the creation of a register: on the one hand to improve efficiency in healthcare, and on the other to enable research and innovation using such records.

Health data digitization to improve care

In terms of improving the accuracy and efficiency of healthcare services, a scandal around the drug Lipobay some 20 years ago marked the kick-off of the health data digitization initiative. Lipobay led to severe interactions, including deaths, when combined with other medications. The resulting vision was that an electronic patient record could prevent such fatal prescriptions because doctors could access a patient’s medical data and reliably surface the other medications the patient is taking.

Despite these fatal consequences, the electronic patient record initiative, underway since 2003, is taking embarrassingly long to bring to fruition. There are many reasons for this, first and foremost that Germany initially chose a bottom-up approach in which associations of doctors, health insurance companies, pharmacists, and clinics were tasked with furthering the project, instead of clear mandates imposed by the government. Other reasons are privacy and data security concerns and a myriad of different agendas pursued by the many different stakeholders, some of whom reportedly do not welcome increased transparency into whether their methods are in fact effective treatments.

In addition, use of the electronic patient record, available since 2021, is not mandatory; patients have to request it, and hardly anyone has. In the first two years after the electronic health record was offered, only 1 percent of Germans took their insurance providers up on it. (Other sources indicate a usage rate of 22 percent as early as 2021.)

With the enactment of the Digital Law in December 2023, the electronic health record will be implemented for all state-insured Germans. If patients don’t want it, they must actively object to it. A key component is the electronic prescription, which will be established as a mandatory standard in the medication supply chain. The Digital Law also provides that Digital Health Applications (DiGA) will be more deeply integrated into care processes and their use made transparent. With the expansion of DiGA to include higher-risk digital medical devices, they will also be usable for more complex treatment processes, for example telemonitoring.

Health data digitization for research and innovation

Roughly in parallel with the electronic patient record, which aims at digitizing health information to improve and streamline health services, an initiative commenced to use digital health data for research and innovation purposes.

Today, these two closely related aspects of health data digitization are regulated in different laws. The draft of the Health Data Utilization Law (GDNG) includes a summary of the status quo regarding the usability of health data for medical science (freely translated):

In Germany, health data are currently not available to a sufficient extent for further use outside the immediate context of care, or they are not being sufficiently utilized. Further use often fails due to different regulations regarding access to data and data protection in European law, federal and state law, as well as due to a non-uniform interpretation of the law by data protection officers and supervisory authorities. The lack of guidelines and procedures for linking data from different sources represents another obstacle to data utilization.

Under the GDNG, state insurance funds will be allowed to make greater use of these data than before if doing so serves better care, such as medication therapy safety or the detection of cancer or rare diseases. Additionally, the previous requirement of consent to the use of one’s data for research and for individual offers by one’s insurance provider is replaced by an objection procedure for releasing data from the electronic patient record, to better utilize the data for research purposes. These data could be analyzed for studies and artificial intelligence applications as well as for improved treatment of patients.

The GDNG now enables the targeted use of data from the electronic patient records, consolidated in a data-protected space, including health insurance billing data. To this end, a decentralized health data infrastructure will be established with a central data access and coordination point at the Federal Institute for Drugs and Medical Devices. For the first time, automatically pseudonymized health data from various data sources will be able to be linked here. Pseudonymized data must be anonymized “as soon as it is possible within the context of further processing for the respective purpose.”
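The GDNG does not itself prescribe a concrete pseudonymization procedure (more on this below). Purely to illustrate the general idea of linkable pseudonyms, the following minimal sketch derives a stable pseudonym from a direct identifier with a keyed hash, so that records from different sources can be matched without exposing the identifier itself. The key handling, field names, and sample records are invented for illustration and do not represent the procedure that will actually be adopted.

```python
import hmac
import hashlib

# Hypothetical secret held by a trusted third party (e.g., a trust centre);
# only whoever holds this key can reproduce or link the pseudonyms.
TRUST_CENTER_KEY = b"replace-with-a-securely-generated-secret"

def pseudonymize(patient_id: str) -> str:
    """Derive a stable pseudonym from a direct identifier using a keyed hash (HMAC-SHA256)."""
    return hmac.new(TRUST_CENTER_KEY, patient_id.encode("utf-8"), hashlib.sha256).hexdigest()

# Two records from different data sources referring to the same insured person (invented data)
billing_record = {"patient_id": "A123456789", "icd10": "E11.9"}          # insurance billing data
ehr_record     = {"patient_id": "A123456789", "medication": "Metformin"}  # electronic patient record

# Replace the direct identifier with the pseudonym before the data leave the source system
for record in (billing_record, ehr_record):
    record["pseudonym"] = pseudonymize(record.pop("patient_id"))

# The pseudonyms match, so the two records can be linked for research without the plain identifier
assert billing_record["pseudonym"] == ehr_record["pseudonym"]
```

In such a scheme, the secrecy of the key is what separates pseudonymized data from readily identifiable data, which is presumably why the GDNG assigns the choice of procedure to the Trust Center in agreement with the data protection and information security authorities.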

Pseudonymization and anonymization

The law is silent with regard to how data have to be anonymized. With regard to the pseudonymization technique, the GDNG says the following (freely translated):

The pseudonymization process must exclude, according to the current state of technology, any unlawful identification of the affected patients. The procedure for pseudonymization will be determined by the Trust Center in agreement with the Federal Commissioner for Data Protection and Freedom of Information and the Federal Office for Information Security.

The only resource that appears to be publicly available is a 2012 research report funded by the German Health Ministry, which sets out options for the secure implementation of pseudonymization but naturally does not reflect the current state of technology.
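The situation is similar for anonymization: the law does not say when pseudonymized data count as anonymized. One notion frequently used in practice, though not mandated by the GDNG, is k-anonymity, under which quasi-identifiers such as age and postal code are generalized until every record is indistinguishable from at least k-1 others. The records, generalization rules, and threshold below are invented purely for illustration.

```python
from collections import Counter

# Hypothetical pseudonymized records: age and postal code are quasi-identifiers,
# the diagnosis is the sensitive attribute we want to keep usable for research.
records = [
    {"age": 34, "plz": "10115", "diagnosis": "E11.9"},
    {"age": 37, "plz": "10117", "diagnosis": "I10"},
    {"age": 35, "plz": "10119", "diagnosis": "J45"},
    {"age": 62, "plz": "80331", "diagnosis": "C50"},
    {"age": 66, "plz": "80333", "diagnosis": "E11.9"},
    {"age": 64, "plz": "80335", "diagnosis": "I10"},
]

def generalize(record):
    """Coarsen quasi-identifiers: 10-year age bands and 2-digit postal code prefixes."""
    band = (record["age"] // 10) * 10
    return {
        "age_band": f"{band}-{band + 9}",
        "plz_prefix": record["plz"][:2],
        "diagnosis": record["diagnosis"],
    }

generalized = [generalize(r) for r in records]

# k-anonymity check: every combination of quasi-identifier values must occur at least k times.
k = 3
groups = Counter((r["age_band"], r["plz_prefix"]) for r in generalized)
print("k-anonymous for k=3:", all(count >= k for count in groups.values()))
```

Even so, k-anonymity is only one of several competing criteria, which underlines how much is currently left to interpretation by data protection officers and supervisory authorities.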

Cognizant of the fact that pseudonymization alone, in many cases, does not provide sufficient protection against re-identification of the individuals whose data are contained in a particular data set, federated learning techniques are being considered as additional privacy-preserving measures when the data are used to train an AI model. However, nothing beyond pseudonymization is required under the law.
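As a rough sketch of what such a federated setup could look like, the example below implements the basic federated-averaging pattern: each participating site trains a simple model on its own data, and only the model parameters, never the patient records, are shared with a coordinator. The sites, model, and data are synthetic and purely illustrative; real deployments would typically add further safeguards such as secure aggregation or differential privacy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-hospital datasets: features x and binary labels y never leave the site.
sites = [
    {"x": rng.normal(size=(200, 5)), "y": rng.integers(0, 2, size=200)},
    {"x": rng.normal(size=(150, 5)), "y": rng.integers(0, 2, size=150)},
]

def local_update(weights, x, y, lr=0.1, epochs=5):
    """One site trains a logistic-regression model locally and returns only its weights."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1 / (1 + np.exp(-x @ w))   # sigmoid predictions
        grad = x.T @ (preds - y) / len(y)  # gradient of the logistic loss
        w -= lr * grad
    return w

# Federated averaging: the coordinator only ever sees model parameters, never raw records.
global_weights = np.zeros(5)
for _ in range(10):
    local_weights = [local_update(global_weights, s["x"], s["y"]) for s in sites]
    sizes = [len(s["y"]) for s in sites]
    global_weights = np.average(local_weights, axis=0, weights=sizes)

print("aggregated model weights:", np.round(global_weights, 3))
```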

Conclusion

Germany has been having a difficult time with the digitization of health data and the regulation of data protection in healthcare. Guided by the GDPR, Germany has now enacted two laws that amend existing data protection legislation governing health data privacy as well as research and innovation. Data privacy was, and still is, a main concern – with some voices saying that no other country interprets the GDPR as strictly as Germany. 

That being said, the opt-out mechanism, which is supposed to counter the very poor uptake of the electronic patient record among the German population, might prove to be problematic. In 2021, the German constitutional court rejected a constitutional challenge alleging privacy violations by previous versions of the digital health law provisions. The claimant argued that the elimination, under the previous law, of the consent requirement for receiving offers for (legally unspecified) care innovations was unconstitutional. The court rejected this argument, reasoning that the use of the electronic patient record is not mandatory under the law, and hence the claimant could avoid receiving such offers simply by not requesting the electronic health record from their insurer. But this means, in effect, that using an electronic health record automatically results in health data being used to prepare and offer targeted care innovations unless the patient objects. It would be interesting to see whether this reasoning still holds with an opt-out mechanism replacing the requirement of having to request the digitization of one’s data.

A second concern is the lack of clarity around data pseudonymization and anonymization. As we have seen, no clear standards exist as of yet, which contributes neither to patients’ trust nor to clarity for healthcare service providers.

In conclusion, it appears that despite the two-decade-long process, Germany does not have a full handle on how it wants to protect health data or how the electronic patient record is supposed to function in practice. In comparison with other European countries, in particular Denmark and Estonia, Germany is far behind.

Further Resources

We previously published a number of resources that explain how Private AI can facilitate the protection of health data and thus compliance with health data protection laws. In short, Private AI can reliably detect and redact protected health information in large data sets, whether composed of structured or unstructured data. Reliable health data detection prevents the inadvertent possession of sensitive data, and their redaction helps safeguard them and achieve compliance with data privacy laws. To see the tech in action, try our web demo, or request an API key to try it yourself on your own data. For more information, see further resources below:

 
