YouTube’s updated community guidelines include new disclosure requirements for AI-generated content, new standards for “sensitive topics” and a process for requesting the removal of deepfakes.
YouTube, the video-streaming platform, has released new community guidelines covering the disclosure of artificial intelligence (AI) use in content.
The platform published a blog post on Nov. 14 saying the updates will require creators to inform viewers when the content they are watching is “synthetic.”
“We’ll require creators to disclose when they’ve created altered or synthetic content that is realistic, including using AI tools.”
An example given in the update was an AI-generated video that “realistically depicts” an event that never happened, or content showing a person saying or doing something they did not do.
This information will be displayed to viewers in two ways, according to YouTube: a new label added to the description panel and, for content about “sensitive topics,” a more prominent label on the video player itself.
Sensitive topics, according to YouTube, include political elections, “ongoing conflicts,” public health crises and public officials.
YouTube says it will work with creators to help its community better understand the new guidelines. However, creators who do not abide by the rules could see their content removed and face “suspension from the YouTube Partner Program, or other penalties.”
Related: Google sues scammers over creation of fake Bard AI chatbot
The platform also touched on the topic of AI-generated deepfakes, which have become both increasingly common and realistic. It said it is introducing a new feature that will allow users to request the removal of a synthetic video that “simulates an identifiable individual, including their face or voice, using our privacy request process.”
Recently, celebrities and public figures including Tom Hanks, Mr. Beast, Gayle King and Jennifer Aniston have battled deepfake videos of themselves endorsing products.
AI-generated content has also been a thorn in the side of the music industry over the past year, as deepfakes of artists built on unauthorized vocal or track samples have plagued the internet.
In its updated community guidelines, YouTube says it will also remove AI-generated music or content that mimics an artist’s unique singing or rapping voice when requested by its “music partners.”
Over the summer, YouTube began developing its principles for working with the music industry on AI technology. Alongside the community guidelines, YouTube recently released experimental AI chatbots that interact with viewers while they watch a video.
Magazine: ‘AI has killed the industry’: EasyTranslate boss on adapting to change