YouTube, Google’s video platform, has announced a policy update under which it will begin removing AI-generated video content that imitates identifiable individuals, starting next year. The move is aimed in part at protecting music artists whose songs are being replicated using AI. Removal will not be automatic; affected individuals or artists will need to request it.
The decision stems from feedback received from YouTube’s community, including creators, viewers, and artists, regarding the impact of emerging technologies on their experiences. The policy update reflects concerns about the unauthorized digital generation of someone’s face or voice, potentially misrepresenting their views.
While YouTube did not specify the exact implementation date, the policy is expected to take effect in the “coming months,” suggesting sometime in the next year.
The company outlined the removal process, stating, “In the coming months, we’ll make it possible to request the removal of AI-generated or other synthetic or altered content that simulates an identifiable individual, including their face or voice, using our privacy request process.” Decisions to remove content will weigh several factors, including whether the content is parody or satire, whether the person making the request can be uniquely identified, and whether the content involves a public official or other well-known figure.
YouTube also announced that it will enable its music partners to request the removal of AI-generated music that mimics an artist’s distinctive singing or rapping voice. When evaluating these requests, YouTube will consider factors such as whether the content includes news reporting, analysis, or critique of the synthetic vocals. The option will initially be available to labels and distributors representing artists participating in YouTube’s early AI music experiments.
Additionally, YouTube plans to introduce disclosure requirements obliging video makers to disclose when they upload realistic-looking manipulated or synthetic content. This includes videos created with generative AI tools that realistically depict events that never happened or portray people saying or doing things they never actually did. The disclosure requirement, which takes effect in the new year, is especially important for sensitive topics such as elections, ongoing conflicts, public health crises, and content involving public officials.