YouTube is significantly expanding its AI deepfake monitoring by extending its likeness detection tool to Hollywood, a move that could lead to the removal of numerous AI-generated celebrity videos. The tool actively scans the platform for AI lookalikes and flags them for potential review and removal.
The likeness detection feature searches YouTube for AI deepfake content and flags it for public figures enrolled in the program. Participants can monitor AI-generated content featuring themselves and formally request its removal, though all takedown requests are evaluated against YouTube's privacy policy, so not every submission will be approved. The company first piloted the feature with content creators last fall, then broadened it to politicians and journalists in March. Notably, YouTube confirms the tool will cover celebrities whether or not they maintain an active YouTube account.
Enrollment requires participants to provide official identification and a selfie video for verification. Likeness detection focuses on facial recognition rather than other distinguishing characteristics such as voice. While the feature aims to address unauthorized deepfakes, removal is not guaranteed, particularly for protected uses such as parody or satire. YouTube previously said that when content creators used the feature, they sought the removal of only a "very small" number of videos depicting themselves.
YouTube compares its likeness detection system to Content ID, its established mechanism for identifying and managing copyrighted material across the platform. A notable difference is that Content ID lets rights holders monetize other users' videos that use their material and share in the resulting revenue. That monetization option is not yet available with likeness detection, although the industry appears to be trending toward such capabilities.
Separately, YouTube recently unveiled a feature that lets creators digitally clone their own likeness with AI and integrate it into their videos. This aligns with broader industry trends: talent agency CAA, a reported supporter of YouTube's likeness detection expansion, maintains a database of its clients' biometric data, allowing entertainers to either safeguard their digital identity or license it for commercial ventures. One example is TikTok star Khaby Lame, who reportedly sold the rights to his likeness for online product endorsements, although Business Insider reports that the deal has encountered several hurdles and its finalization remains uncertain.
In an interview with The Hollywood Reporter, some talent managers framed the proliferation of AI deepfakes as a new way for the entertainment industry to engage with its fanbase. While some celebrities may opt to remove eligible AI content featuring them, others might permit widespread fan-made AI content. Entertainers could even come to embrace AI deepfakes of themselves, particularly if they are compensated for their digital likeness.