Chinese regulators have likely learned lessons from the EU’s Artificial Intelligence Act, says Jeffrey Ding, an assistant professor of political science at George Washington University. “Chinese policymakers and scholars have said they drew on EU legislation as inspiration in the past.”
At the same time, however, some of the measures Chinese regulators are taking have no real counterpart in other countries. For example, the Chinese government is asking social media platforms to screen user-uploaded content for AI-generated material. “This seems to be something very new and may be unique in the context of China,” Ding says. “This would never happen in a US context, because the US is famous for saying that the platform is not responsible for the content.”
But what about freedom of speech on the Internet?
The draft regulation on AI content labeling is open for public comment until October 14, and its revision and adoption may take several more months. Still, Chinese companies have little reason to delay preparing for it to take effect.
Sima Huapeng, founder and CEO of Silicon Intelligence, a Chinese AIGC company that uses deepfake technology to generate AI agents and influencers and to replicate living and deceased people, says his product currently lets users decide voluntarily whether to label generated output as AI. If the law passes, he may have to make that labeling mandatory.
“If a feature is optional, companies will most likely not add it to their products. But if it becomes mandatory by law, everyone will have to implement it,” says Sima. Adding watermarks or metadata labels is not technically hard, but it will raise operating costs for compliant companies.
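To make the mechanism concrete, here is a minimal sketch of what metadata labeling could look like in practice: a provider attaches a signed “AI-generated” label to a piece of content, and a platform later verifies it against the uploaded bytes. The field names, key handling, and signing scheme are illustrative assumptions, not the scheme specified in the draft regulation or any industry standard.

```python
import hashlib
import hmac
import json

# Hypothetical signing key; a real deployment would manage keys per provider.
SECRET_KEY = b"provider-signing-key"

def label_content(content: bytes, provider: str) -> dict:
    """Produce a metadata label declaring the content AI-generated."""
    label = {
        "ai_generated": True,
        "provider": provider,
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    # Sign the label so a platform can detect tampering with the claim itself.
    payload = json.dumps(label, sort_keys=True).encode()
    label["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return label

def verify_label(content: bytes, label: dict) -> bool:
    """Platform-side check: signature is valid and hash matches the upload."""
    claimed = dict(label)
    sig = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(sig, expected)
        and claimed.get("sha256") == hashlib.sha256(content).hexdigest()
    )

content = b"...generated image bytes..."
label = label_content(content, "example-provider")
print(verify_label(content, label))      # True: label matches the content
print(verify_label(b"tampered", label))  # False: content no longer matches
```

As the sketch suggests, attaching and checking such labels is cheap per item; the operating costs Sima mentions come from building and running the pipeline at platform scale.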
He says such rules could deter the use of AI for fraud or invasions of privacy, but could also spur a black market in AI services, as companies seek to avoid compliance and cut costs.
There is also a fine line between holding AI content producers accountable and policing individual speech through more sophisticated tracking.
“The biggest human rights challenge is ensuring that these approaches do not further threaten privacy or freedom of speech,” says Gregory. While hidden labels and watermarks can be used to trace the sources of misinformation and inappropriate content, the same tools can enable platforms and governments to exert greater control over what users post online. In fact, concern about how AI tools could be misused has been one of the main drivers of China’s proactive efforts at AI legislation.
At the same time, China’s AI industry is pressing the government for more room to experiment and develop, since it already lags behind its Western counterparts. China’s earlier generative-AI regulation was significantly relaxed between the first public draft and the final version, with identity-verification requirements removed and penalties on companies reduced.
“What we’ve seen is that the Chinese government is really trying to walk a fine line between ‘making sure we maintain control over content’ and ‘giving AI labs in a strategic space the freedom to innovate,’” Ding says. “This is another attempt to do that.”