China Bans AI-Generated Images That Don’t Have Watermarks
Authorities in China have made a series of policy announcements regarding artificial intelligence (AI), including banning AI-generated images that are not labeled as such.
China’s Cyberspace Administration has said that AI images generated by synthesis programs such as DALL-E will require watermarks or other labels to mark them clearly.
The policy update was part of a raft of regulations on emerging AI technologies.
The Cyberspace Administration oversees the regulation, oversight, and censorship of the internet. Chinese authorities will now be looking closely at what they call “deep synthesis” technology.
“The introduction of the ‘Regulations’ is a need to prevent and resolve security risks, and it is also a need to promote the healthy development of in-depth synthetic services and improve the level of supervision capabilities,” writes China’s Office of the Central Cyberspace Affairs Commission.
“Providers of deep synthesis services shall add signs that do not affect the use of information content generated or edited using their services. Services that provide functions such as intelligent dialog, synthesized human voice, human face generation, and immersive realistic scenes that generate or significantly change information content, shall be marked prominently to avoid public confusion or misidentification.
“It is required that no organization or individual shall use technical means to delete, tamper with, or conceal relevant marks.”
China’s government will require companies making deep synthesis technology to keep legally compliant records. Furthermore, people who use those services must register for accounts with their real names so that their AI activity can be traced.
Much like in the West, AI-generated images are proving incredibly popular with Chinese internet users. However, as China lays down strict rules, the U.S. is taking the opposite approach, with President Biden suggesting a nonbinding AI Bill of Rights.
Also in the new regulations are rules surrounding consent and deepfakes. Companies using the technology must first contact individuals and receive their permission before editing their voice or image.
According to The South China Morning Post, the move comes in response to governmental concerns that advances in AI technology could be used by bad actors to run scams or defame people by impersonating them.
Image credits: Header image generated by DALL-E.