China has taken significant steps to address personal data breaches and regulate the use of facial recognition data, with the closure of a record number of cases and the introduction of draft laws for public feedback. According to the Ministry of Public Security, Chinese police have closed 36,000 cases related to personal data infringements in the last three years, resulting in the detention of 64,000 suspects. These efforts are part of the government’s wider initiative to regulate the internet, which has also seen the seizure of more than 30 million SIM cards and 300 million “illegal” internet accounts. The arrests and seizures were reported by state-owned media outlet Global Times, citing a ministry media briefing on Thursday.
The police have been investigating a growing number of criminal cases involving personal data violations, particularly in industries such as healthcare, education, logistics, and e-commerce. The ministry also highlighted the rise in reported criminal cases involving artificial intelligence (AI), citing an incident in April 2023 in which a company in Fujian province lost 4.3 million yuan ($596,510) to scammers who used AI face-swapping technology. Law enforcement agencies have solved 79 cases involving “AI face changing” to date.
With the widespread use of facial recognition technology and advancements in AI, the Chinese government has noted the emergence of cases that exploit such data. Cybercriminals have been using photos, particularly those found on identity cards, along with personal names and ID numbers to fraudulently pass facial recognition verification. To address these risks, China’s public security departments are working with state facilities to conduct safety assessments of facial recognition technology and identify potential vulnerabilities in facial recognition verification systems. The government has warned of the significant risks posed by the “underground big data” market created by cybercriminals, which encompasses theft, reselling of data, and money laundering.
The Cyberspace Administration of China (CAC) recently published draft laws specifically addressing facial recognition technology, marking the first time nationwide regulations have been proposed for the technology. The proposed rules require organizations to obtain “explicit or written” user consent before collecting and using personal facial information. Businesses must also state the purpose and extent of the data they are collecting and only use the data for the stated purpose. Facial recognition technology cannot be used to analyze sensitive personal data, such as ethnicity, religious beliefs, race, and health status, without user consent. However, there are exceptions for maintaining national security, public safety, and safeguarding individuals’ health and property in emergencies. Organizations using the technology must have data protection measures in place to prevent unauthorized access or data leaks. Additionally, any person or organization retaining more than 10,000 facial recognition datasets must notify the relevant cyber government authorities within 30 working days.
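The obligations above lend themselves to simple rule checks. The sketch below is a hypothetical illustration of three of them in Python: collecting facial data only with explicit consent, restricting use to the stated purpose, and flagging when the 10,000-record notification threshold is crossed. The class and method names are invented for illustration and do not come from the draft rules.

```python
from dataclasses import dataclass, field

# Per the draft rules, holding more than 10,000 facial records triggers a
# filing with the relevant cyber authorities within 30 working days.
NOTIFICATION_THRESHOLD = 10_000


@dataclass
class FacialDataStore:
    """Hypothetical record-keeper illustrating the draft rules' obligations."""
    stated_purpose: str
    records: dict = field(default_factory=dict)  # subject_id -> consent flag

    def collect(self, subject_id: str, consent: bool) -> bool:
        # The draft requires "explicit or written" consent before collection;
        # without it, the data is simply not stored.
        if not consent:
            return False
        self.records[subject_id] = True
        return True

    def use_for(self, purpose: str) -> bool:
        # Data may only be used for the purpose stated at collection time.
        return purpose == self.stated_purpose

    def must_notify_authorities(self) -> bool:
        # True once the store exceeds the 10,000-record threshold.
        return len(self.records) > NOTIFICATION_THRESHOLD
```

In practice such checks would sit inside a larger compliance process; the point here is only that the draft's core requirements are concrete enough to be expressed as predicates.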
The draft laws also outline conditions for the use of facial recognition systems, including how personal facial data should be processed and for what purposes. They mandate that companies prioritize alternative non-biometric recognition tools whenever these deliver results equivalent to those of biometric-based technology. The public has one month to submit feedback on the draft laws.
In January, China implemented regulations to prevent the abuse of “deep synthesis” technology, including deepfakes and virtual reality. These services must label synthetically generated content as such, and users must refrain from activities that breach local regulations. Interim laws will also come into effect next week to manage generative AI services in the country. These regulations aim to facilitate the development of the technology while protecting national and public interests, as well as the legal rights of citizens and businesses. Generative AI developers will need to ensure compliance with the law in their pre-training and model optimization processes, including the use of data from legitimate sources that adhere to intellectual property rights. Personal data usage will require individual consent or compliance with existing regulations. Measures must also be taken to improve the quality of training data, including accuracy, objectivity, and diversity. Under the interim laws, generative AI service providers will assume legal responsibility for the information generated and its security, and will be required to sign service-level agreements with users to clarify rights and obligations.
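The labeling requirement for synthetic media can be pictured as a thin provenance wrapper around generated output. The sketch below is purely illustrative: the function name and field names are assumptions, not terminology from the regulations, and a real system would embed the disclosure in the media itself rather than in a sidecar record.

```python
import json
from datetime import datetime, timezone


def label_synthetic_content(payload: bytes, generator_id: str) -> dict:
    """Attach a provenance record to AI-generated media.

    Hypothetical sketch of the deep-synthesis rules' requirement that
    synthetic content be marked as such; field names are illustrative.
    """
    return {
        "content": payload.hex(),              # the generated media, hex-encoded
        "label": "AI-generated content",       # required disclosure to viewers
        "generator": generator_id,             # which service produced it
        "created_at": datetime.now(timezone.utc).isoformat(),
    }


def serialize_label(record: dict) -> str:
    # A sidecar JSON record that could accompany the published media.
    return json.dumps(record, sort_keys=True)
```

A platform might reject uploads of known-synthetic media that arrive without such a record, which is one way the labeling duty could be enforced in practice.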