Earlier this week, Netflix found itself embroiled in an AI scandal after Futurism discovered AI-generated images being used in a Netflix documentary, "What Jennifer Did." The film's credits make no mention of the use of AI, and critics called out the filmmakers for potentially fictionalizing parts of a film that was supposed to be based on real events, Ars Technica reports. The executive producer of the Netflix hit admitted that some photos were edited to protect the identities of sources, but remained vague about whether AI was used in the process. From the report: "What Jennifer Did" hit the top spot on Netflix's Global Top 10 when it was released in early April and explores why Jennifer Pan paid hitmen $10,000 to kill her parents. It attracted a crowd of true crime fans who wanted to know more about the story. However, the documentary quickly became a source of controversy as fans began to notice obvious flaws in the images used in the film, from Pan's strangely mismatched earrings to a nose that appeared to be missing a nostril, the Daily Mail reported in a post that showed a number of example images from the movie. [...]
Jeremy Grimaldi, a crime reporter who wrote a book about the case and provided investigative and police footage for the documentary, told the Toronto Star that the images were not generated by AI. Grimaldi confirmed that all images of Pan used in the film were real photographs. However, he said some images were edited to protect the identity of the source rather than to blur the line between truth and fiction. "Any filmmaker will use different tools, like Photoshop, in their films," Grimaldi told the Star. "The photos of Jennifer are real photos of her. The foreground is exactly her. The background has been anonymized to protect the source." Grimaldi's comment indicates that the photo is an edited version of an actual photo of Pan. While that does provide some assurance that the image is genuine, it is also vague enough to leave it ambiguous whether the "different tools" used to edit the photo included AI.