China’s proposals to regulate generative AI

6 July 2023

On April 11, 2023, the Cyberspace Administration of China (CAC) issued draft Administrative Measures for Generative Artificial Intelligence Services (“Draft AI Measures”).

Executive Summary

The rapid advancement of artificial intelligence (AI) technologies has transformed industries worldwide, and the proliferation of chat-based generative AI such as ChatGPT, Baidu’s Ernie Bot, and Alibaba’s Tongyi Qianwen means that AI is increasingly integrated into daily life.

China’s AI market has expanded rapidly over the past few years, with spending in China’s AI industry forecast to reach USD 14.75 billion in 2023, about 10% of the world total.(1) Projections indicate that the market will grow to USD 26.44 billion by 2026, and the government hopes to achieve RMB 1 trillion in revenues by 2030.(2)(3)

As a global leader in AI development, China has begun taking proactive steps to shape its regulatory framework, looking to address the ethical implications, regulatory concerns, and potential risks associated with AI. The latest Draft AI Measures demonstrate China’s commitment to regulating various generative AI technologies, emphasising responsible development alongside content regulation. China’s priorities of digitalisation, self-sufficiency in science and technology, and fostering innovation will likely see industries such as manufacturing and information technology increase their development and use of generative AI tools. However, this will have to be balanced against the accountability that the Draft AI Measures place on companies providing generative AI services, which face strict requirements on training data and system outputs. The implications of these measures for the market, and the feasibility of delivering compelling services while complying, remain under discussion among experts.

How these regulations would apply to foreign providers of generative AI services is also unclear, and further clarity will be needed. However, the Draft AI Measures could significantly impact the use and commercialisation of generative AI technologies in China, with potential ramifications for foreign market entrants and for collaboration opportunities with local partners. British businesses in China need to understand these implications in order to keep abreast of the ever-evolving AI environment, to ensure compliance, and to be best placed to harness the opportunities that exist in AI development and technology.

The Policy Insights article below is authored by Hogan Lovells. The original article can be found here on Hogan Lovells’ Engage.


Introduction

The Draft AI Measures came just four months after the CAC gave effect to its first measures concerning AI, the Deep Synthesis Measures. The reason for the CAC’s sudden return to the legislative drawing board appears to have been the recent surge in international popularity of chat-based generative AI, with the Chinese market seeing new entrants such as Baidu’s Ernie Bot and Alibaba’s Tongyi Qianwen.(4)(5) While the Deep Synthesis Measures focused on deep fakery in audio and video content, the Draft AI Measures cast a wider regulatory net for generative AI of all types. It is also notable that while the Deep Synthesis Measures focused on AI outputs, in particular deep fake audio and video, the Draft AI Measures would apply equal focus to the regulation of training data and other inputs to generative AI models in addition to the regulation of model outputs.

The Draft AI Measures come at a time of growing international scrutiny of AI. The Draft AI Measures add an important Chinese perspective to the debate, sketching a regulatory framework that appears to be closely aligned with China’s general approach to the regulation of data, cybersecurity, and online content, one which brings a pronounced focus on maintaining political and social order.

To be clear, the CAC’s proposals do track a number of the substantive considerations seen in draft AI laws globally and in ethical frameworks for trustworthy or responsible AI – for example, the principles that AI be lawful and respectful of rights and interests and not propagate discrimination. However, the Draft AI Measures would also require that generative AI meet criteria seen in other aspects of China’s content regulation, such as the requirement that generative AI outputs be reflective of China’s socialist core values.

Critically, the Draft AI Measures would require businesses to obtain regulatory approval prior to using generative AI to provide services to the public. Given the complex nature of generative AI technologies, which are trained on vast quantities of data with a limited degree of human oversight, it is an open question as to what types of generative AI technologies can be brought within the constraints of the draft criteria for approval and, more broadly, what balance of technological innovation and state control would be achieved in practice in China if the Draft AI Measures were implemented as proposed. In practical terms, it may be the case that the Chinese government sees a far narrower and more tightly controlled set of acceptable use cases for generative AI than the more open-ended applications we are now seeing in the West.

Who would be regulated?

The Draft AI Measures apply to the research, development, and use of generative AI and to the provision of services to the public within China. Obligations under the Draft AI Measures fall mainly on “providers of generative AI services,” defined as individuals and organisations that use generative AI to provide services such as chat or the generation of text, images, or audio, including service providers that allow others to generate content through APIs or other means (“generative AI providers”).

Cutting across the complex debate seen in the European Union in relation to the AI Act, the Draft AI Measures simply state that generative AI providers shall bear responsibility for the content generated by their products. Generative AI providers would include both developers of generative AI that provide services directly to the Chinese public and those downstream providers of services that use others’ generative AI to provide services, including by integrating the generative AI into their own applications, products, or services through APIs.

The possibility that developers of generative AI will be responsible for the acts and omissions of collaboration partners and downstream providers will clearly raise important risk allocation issues that may be a critical constraint on the commercialisation of generative AI in China.

It is not clear how the Draft AI Measures would deal with foreign market entrants to the Chinese market, in particular whether an element of targeting of the Chinese public is required in order for an offshore technology provider to be caught by the regime. 

It also remains to be seen whether the use of “internal-facing” AI services used within organisations would be considered to be “the provision of service to the public.”

What are the regulatory approval and filing requirements?

  • Security assessment of generative AI products
    Before offering a generative AI service to the public, generative AI providers must complete a security assessment (whether themselves or by a third-party security assessment institution) in accordance with the Provisions on the Security Assessment of Internet Information Services with Public Opinion Properties or Social Mobilization Capacity.
  • Record-filing for algorithms
    Algorithm recommendation service providers are also required to complete a record-filing with the CAC pursuant to the Provisions on the Management of Algorithm Recommendation of Internet Information Services. The filing includes the name of the service provider, the algorithm type and an algorithm self-assessment report. As of April 2023, the CAC announced the results of record filings in four batches, which included algorithms from Tencent, Baidu, Alibaba, and ByteDance.

Depending on the specific business and services, other permits or licences may also be required, such as the Internet Content Provider licence or filing required of website operators, as well as industry regulations applicable to the specific use case or business activity in relation to which the generative AI is used.

How will generative AI content be regulated?

As an overarching principle, generative AI providers would be obliged to take responsibility for content generated by their products and adhere to the following principles enumerated under the Draft AI Measures: 

  • Generative AI content would be required to reflect “socialist core values,” not harm national unity, not endanger national security, and not promote the subversion of state power or the overturning of China’s socialist system (Article 4(1));
  • Generative AI outputs must be accurate and truthful, with measures being adopted to prevent the generation of false information (Article 4(4));
  • Generative AI outputs must respect lawful rights and interests, prevent harm to physical and mental health, not infringe individual rights in their likeness, reputation or privacy, and not infringe intellectual property rights (Article 4(5));
  • Generative AI providers are prohibited from generating discriminatory content based on users’ race, national origin, or gender (Article 12); and
  • In line with Article 17 of the Deep Synthesis Measures, images, video, and other content which might cause confusion or mislead the public must be conspicuously labelled in a way that alerts the public to the fact that natural persons, scenes, or information are being simulated (Article 16).

How will generative AI algorithms and training data be regulated?

In addition to seeking to regulate the outputs of generative AI, the Draft AI Measures also apply significant focus on the inputs to AI models.

Article 4(2) of the Draft AI Measures provides that the design of algorithms, the selection of training data, model creation and optimization and service provision should all be conducted with measures in place to prevent discrimination on the basis of race, ethnicity, religious beliefs, nationality, gender, age, or profession. Generative AI inputs are regulated in a number of other ways:

  • Intellectual property rights and commercial ethics should be respected, and advantages in algorithms, data, platforms and so forth should not be used to engage in unfair competition (Articles 4(3) and 7(2));
  • Training data should conform to the requirements of the Cyber Security Law (CSL) and other laws and regulations (Article 7 (1));
  • Where training data contains personal information, the requirements of data protection laws should be complied with, including obtaining consent of data subjects where required (Article 7(3));
  • The authenticity, accuracy, objectivity, and diversity of training data must be ensured (Article 7(4));
  • Rules and training for data annotation must be provided (Article 8); and
  • Generative AI providers must comply with transparency requirements and disclose information that could impact users’ choices, including a description of the source, scale, type, quality, and other details of pre-training and optimised-training data (Article 17).

If generative AI providers discover that generative AI outputs do not conform to the requirements of the Draft AI Measures, they are required to adopt content filtering and other necessary measures to prevent the generation of such content within three months from the time of discovery (Article 15).

What data protection obligations would apply to generative AI?

The Draft AI Measures require that generative AI providers protect personal data as a personal information handler under the Personal Information Protection Law (PIPL) (i.e. a status equivalent to a “data controller” under the European Union’s GDPR).

It is also a specific requirement of the Draft AI Measures that generative AI providers not (1) illegally store users’ input from which the identity of a user can be deduced; (2) conduct user profiling based on user input information and log information; or (3) disclose user information to third parties (Article 11).

How would user interactions be regulated?

The Draft AI Measures would impose a number of obligations on generative AI providers in respect of their users, including obligations to:

  • Conduct real-name identification and authentication (Article 9);
  • Take measures to prevent users from making excessive reliance on generative AI content or developing addictions to generative AI content (Article 10);
  • Establish a mechanism to receive and handle users’ complaints and respond to user requests (Article 13);
  • Ensure the stability of the lifecycle of their generative AI services (Article 14);
  • Provide guidance to allow users to understand the generative AI and make rational use of generative AI content (Article 18); and
  • Suspend or terminate service in the case of any improper use of the generative AI (Article 19).

What penalties would apply?

Generative AI providers that violate the requirements of the Draft AI Measures would be penalised in accordance with the CSL, the Data Security Law, the PIPL, and other applicable laws. In the absence of a specific penalty, the CAC and other competent authorities have discretionary powers to impose sanctions, including issuing warnings and orders to take corrective action, ordering the suspension or termination of generative AI services, or imposing fines of up to RMB 100,000 (approximately USD 15,000).

Authored by Mark Parsons, Sherry Gong, and Tong Zhu. Ying Tang, a paralegal in our Beijing office, also contributed to this post.

The full article can be found here on Hogan Lovells’ website.


Hogan Lovells understands and works with clients to solve the toughest legal issues in major industries and commercial centres around the world. Our 2,600+ lawyers on six continents provide practical legal solutions wherever clients are working in the world. Whether they are expanding into new markets, considering capital from new sources, or dealing with increasingly complex regulation or disputes, Hogan Lovells can help. Whether change brings opportunity, risk, or disruption, be ready by working with Hogan Lovells.

As one of the first international law firms on the ground in Greater China, Hogan Lovells has one of the largest and leading legal practices in the country. Our multilingual and multinational team of approximately 130 lawyers and fee-earners in Greater China acts as true partners to our clients, guiding them in setting up for successful business operations within the country’s unique business environment. With an understanding of complicated, regulated industries, we help our clients identify and anticipate regulatory changes, market dynamics, and trends that can impact business operations.
