Sources close to Chinese officials say the draft rules for AI developers have been updated to require companies to obtain a license before releasing generative AI systems.
The Chinese government is considering additional regulations on artificial intelligence (AI) development that emphasize content control and licensing.
According to a July 11 report from the Financial Times, the Cyberspace Administration of China (CAC) wants to impose a system requiring local companies to obtain a license before releasing generative AI systems.
This move signals a tightening of the initial draft regulations released in April, which gave companies 10 working days after a product's launch to register it with authorities.
The new licensing scheme is expected to be part of forthcoming regulations that could be released as early as the end of this month, sources told the FT.
Also included in the April draft of the regulations were mandatory security reviews of AI-generated content.
The government said in its draft that all content should “embody core socialist values,” and should not “subvert state power, advocate the overthrow of the socialist system, incite splitting the country or undermine national unity.”
Cointelegraph has reached out to the CAC for comment but did not receive a response by publication.
Chinese tech and e-commerce companies Baidu and Alibaba both released AI tools this year, the latter rivaling the popular AI chatbot ChatGPT.
According to the sources in the FT report, both companies have been in contact with regulators in the last few months to keep their products in line with the new rules.
Beyond the licensing requirement, the draft also states that tech companies making AI models will be held fully responsible for any content created using their products.
Regulators around the world have been calling for rules on AI-generated content. In the United States, Senator Michael Bennet recently sent a letter to tech companies developing the technology, urging them to label AI-generated content.
The European Commission’s vice president for values and transparency, Vera Jourova, also recently told the media that she believes generative AI tools with the “potential to generate disinformation” should label the content they create to curb the spread of disinformation.