A Comparison of the Approaches to Generative AI in Japan and China

Kathrin Gardhouse
Jun 14, 2024

In the rapidly evolving landscape of generative AI, distinct regulatory and ethical approaches have emerged, reflecting the values, ambitions, and constraints of various global players. We previously delved into the contrasting strategies of the United States and the European Union, two titans in the realm of artificial intelligence. Today, we broaden our lens to encompass the unique approaches of Japan and China. These Asian powerhouses have carved out their own pathways in the generative AI space, informed by their cultural, economic, and geopolitical contexts.

Japan's Approach to Generative AI

With a vision centered on human-centric AI, Japan places emphasis on respecting human dignity and values, promoting social welfare and diversity, and fostering trust and collaboration. Japan’s approach to generative AI is grounded in the Social Principles of Human-Centric AI, published by the government in 2019. These Social Principles enumerate key aspects including transparency, accountability, fairness, security, privacy, education, research, governance, and international cooperation. The overall idea is to realize these goals not by reining in AI use, but through AI.

To maximize the positive societal impact of generative AI while minimizing its negative ramifications, Japan employs a strategy of ongoing development and revision of AI-related regulations. This strategy follows a risk-based, agile, and multistakeholder process. Several initiatives have also been launched to encourage data sharing and utilization among public and private sectors as well as internationally, notably the Data Free Flow with Trust (DFFT) framework.

Similar to the EU, rather than adopting a one-size-fits-all regulatory approach, Japan’s policy evaluates generative AI applications based on their specific benefits and risks. This enables a more flexible and adaptive regulatory environment that can respond to changing technologies and circumstances. Various laws and guidelines have been developed or revised to guide this sector, such as the Act on the Protection of Personal Information (APPI), the draft AI Guidelines, the Copyright Act, and the soon-to-be proposed Generative AI rules.

With regard to AI regulation, then, Japan is leaving its mark globally. But when it comes to actually developing its own LLMs, Japan is perceived to be behind the US, EU, and China due to a lack of computing and human resources. It has plans to catch up, however. Through various programs and projects, like the highly ambitious Moonshot Research and Development Program (MRDP) and the Artificial Intelligence Strategy Council (AITSC), significant investments have been made in research and development. The country has also capitalized on its supercomputing capabilities, notably the publicly controlled Fugaku, to develop large language models trained on Japanese data. Private sector efforts are underway to add much-needed infrastructure, and Japan is home to several notable startups in the generative AI space, including Kotoba Technology.

China's Approach to Generative AI

China's stance on AI is fundamentally geared toward becoming a global leader in artificial intelligence by 2030, as laid out in its Next Generation Artificial Intelligence Development Plan. This has led to fierce competition, particularly with the US, further fueled by concerns around AI’s potential to tip the scale in the geopolitical and military context. A distinguishing feature of China's approach is the symbiotic relationship between the government and technology companies like Tencent, Alibaba, and Baidu. This unique public-private partnership has made China a powerhouse in AI research and applications, from natural language processing to autonomous vehicles.

Ethically, China's AI development is guided by a concept known as "Ethical AI," introduced in the Beijing AI Principles. These principles, which somewhat parallel Japan's Social Principles of Human-Centric AI, cover aspects like shared benefit, sustainability, and security. However, unlike the democratic governance models of Japan, the US, and the EU, China's approach to AI ethics and policy is more centralized and tightly controlled by the state. The government has the final say on what counts as ethical or what needs to be regulated, often tying these decisions closely to broader state objectives, such as social stability or national security.

Homing in on generative AI in particular, the recently enacted Interim Measures for the Management of Generative Artificial Intelligence Services serve as a case in point for China's centralized, state-driven approach to AI governance, one that does not lose sight of the lawful rights of individuals. These Measures provide a comprehensive legal framework addressing key ethical concerns like discrimination, intellectual property rights, and public morality, while aligning closely with broader state objectives such as national sovereignty and social stability; this makes China a leading, proactive player in AI regulation. By setting forth requirements that explicitly mandate adherence to "Core Socialist Values," as well as explicit technical and ethical guidelines for AI development, these Measures exemplify China's commitment to harmonizing rapid technological advancement with its unique socio-political landscape. This regulatory approach reflects China's desire to maintain a delicate balance: fostering innovation and global leadership in AI, while ensuring that such advancements remain in lockstep with its national ethos and global ambitions. Some voices say that, at least for now, innovation takes priority, while others emphasize the Chinese government's urgent interest in controlling information, since this is key to its capacity to shape public opinion and secure its legitimacy.

China's innovation in generative AI is robust, led by both state-funded projects and commercial enterprises. The massive datasets available in China, coupled with data privacy laws less stringent than the EU's, give Chinese companies an edge in developing AI technologies, at least in computer vision, where more prevalent surveillance cameras supply abundant training data. The picture is different for written text: the Chinese corpus cannot compete with what is available in English, leaving China behind the US in the development and deployment of LLMs.

Similar to leading US companies, some Chinese companies have also taken to open-sourcing their AI models. Yet this confers less of an advantage in China, where computing power is constrained by the difficulty of procuring NVIDIA’s advanced GPU chips.

Conclusion

In conclusion, the world of generative AI is as diverse in its regulatory and ethical considerations as it is in its technological applications. Whether it's the United States' market-driven model, the European Union's human-centric, risk-based framework, Japan's agile and multistakeholder approach, or China's state-centered strategy, each offers valuable lessons. As we forge ahead into an era where generative AI will increasingly become a staple of everyday life, understanding these different models helps us appreciate the complex tapestry of considerations that guide AI development and deployment. It can hopefully also help debunk the argument that regulating AI would put a country at a disadvantage in international competition. When we note how strictly AI is regulated in China to ensure stability, and that China grapples with resource restrictions both in computing power and in talent drawn to the US, we can focus more on the significant risks that AI poses for all of society, on a global level, and tackle the very hard but very necessary challenge of getting the regulatory piece going. A comparative view of what other nations are doing may also help prevent inaction out of fear of not getting it right on the first try. As Tom Wheeler, former chairman of the Federal Communications Commission, put it in a recent podcast appearance on AI regulation: “There are no paths. Paths are made by walking.”
