Netflix has set clearer rules for how its production partners may use generative artificial intelligence, following backlash over the apparent use of AI-generated images in Jenny Popplewell’s 2024 true-crime documentary What Jennifer Did. In a post on its Partner Help Center, the company says generative tools can be helpful creative aids for video, audio, text, and images, but that the fast-moving technology needs firm guardrails.
Partners are asked to tell their Netflix contact in advance about any planned AI use. Most low-risk experiments that follow the guidance will not need legal review, but written approval is required if AI outputs appear in final deliverables or involve a performer’s likeness, personal data, or third-party intellectual property.
Netflix’s best practices rest on five principles. Productions should not use AI to copy or closely recreate identifiable traits from material they do not own, and must not infringe copyrighted works. Tools used on a show should not store, reuse, or train on a production’s inputs or outputs, and where possible teams should work in enterprise-secured environments to protect source media.
Any AI-generated material should be treated as temporary and excluded from the finished program unless approved. And AI must not replace or generate new talent performances or other union-covered work without consent. Netflix says the framework is meant to keep creative teams aligned with evolving best practices while still allowing room to experiment.
The policy emphasizes transparency, data security, and respect for rights, aiming to reduce legal and ethical risks and to avoid misleading audiences. By requiring disclosure up front and a stricter review path for sensitive cases, the company hopes to prevent a repeat of controversies like the one over What Jennifer Did while preserving the faster ideation and iteration that generative tools can bring on set and in post-production. Netflix frames the update as global guidance that may expand as new tools and risks emerge.