Video platform YouTube, which is owned by Google/Alphabet, will soon require creators to disclose when they upload manipulated content made with generative AI tools.
The rule will take effect as early as January 2024. It will chiefly require videos to visibly indicate when their content contains depictions that appear real but never actually happened, or speech that was never said.
“This is especially important in cases where the content discusses sensitive topics, such as elections, ongoing conflicts and public health crises, or public officials,” YouTube’s vice presidents of product management said in a statement. They went on:
Creators who consistently choose not to disclose this information may be subject to content removal, suspension from the YouTube Partner Program, or other penalties. We’ll work with creators before this rolls out to make sure they understand these new requirements.
We’ll inform viewers that content may be altered or synthetic in two ways. A new label will be added to the description panel indicating that some of the content was altered or synthetic. And for certain types of content about sensitive topics, we’ll apply a more prominent label to the video player.
Finally, YouTube said this is only the beginning of its AI regulations. “We’re in the early stages of our work, and will continue to evolve our approach as we learn more,” the statement says. “Generative AI has the potential to unlock creativity on YouTube and transform the experience for viewers and creators on our platform. But just as important, these opportunities must be balanced with our responsibility to protect the YouTube community.”