News Brief

New Rules to Label AI-generated Content to Increase Transparency

A new directive aimed at the AI sector has been released recently to increase transparency and user safety by enforcing mandatory labeling of AI-generated content.

By NewsChina, Updated May 1

China’s National Internet Information Office (NIIO), the Ministry of Industry and Information Technology and public security and broadcast agencies issued the finalized rules on March 14, and they will come into force on September 1. 

The notice, Measures for the Labeling of AI-Generated Content, comes in response to the spread of false information, AI-related fraud and the misuse of technologies, which are becoming rife amid the rapid development of AI, the NIIO said. 

The measures require the use of explicit labels, such as watermarks, and implicit labels embedded in metadata. These labels will help users identify fraudulent or false information and clarify the responsibilities of the parties involved in publishing AI-generated content. 

Explicit labels must be added to AI-generated content, including text, audio, video, images and virtual scenes, while implicit labels must record the type of generated content, the name of the service provider and other information related to content creation, such as identification codes. Content providers must clearly explain in their service agreements how they label AI-generated content and ask users to read and understand the clauses on managing AI-generated content. 

The notice also requires platforms to confirm whether an internet service provider engages in AI content generation and to update the content’s labeling in accordance with the rules. 

The NIIO said the measures were based on previous regulations governing generative AI services and algorithm-based recommendation in internet information services. Serving as a standard, the measures detail the management of the whole content-generation process, covering who produces the content and where it is produced. The authorities also released specific implementation guidelines for labeling, including measures to keep firms’ compliance costs to a minimum. 

The security of AI information was a major concern at the Two Sessions, China’s annual meetings of the top legislature and top political advisory body, held in Beijing in early March. Many celebrities complained they had been victims of image theft after AI-generated advertising spread online. Experts said the new rules will address the problem by helping users tell whether images of famous people are real or fake, and will greatly constrain the spread of deepfakes. 

Xiao Youdan, a researcher at the Institutes of Science and Development, Chinese Academy of Sciences, told news portal The Paper that one advantage of the new rules is that they reduce the difficulties platforms face in managing uploaded content, since the labels can be traced back to the original producer. Those who forward or reuse AI-generated information are not allowed to delete or alter the original implicit labels. 

“The labeling measures are expected to solve new problems arising from the fast development of AI at low cost. Their purpose is to realize large-scale, highly reliable and low-cost management,” Xiao said, adding that at a time when most firms prefer to invest more in development than in security, the new rules send a positive signal for the healthy development of the AI industry.