Bourdain Deepfake for ‘Roadrunner’ Film Elicits Angst from Fans
In a New Yorker story published online on June 17, the ethics of recreating the voice of the late chef and TV personality Anthony Bourdain elicits strong responses, pro and con, along with a debate over disclosure when such computer tools are used.
The piece, entitled “The Ethics of a Deepfake Anthony Bourdain Voice” and written by Helen Rosner, discusses the documentary “Roadrunner” and its use of an AI-generated voice for Bourdain to read aloud a “despairing e-mail that he sent to a friend, the artist David Choe.”
Bourdain, who died by suicide in 2018, was a complex person, and the filmmakers tried to tell his story through his own words and those of people close to him. As a storytelling device, director Morgan Neville used Bourdain’s AI-generated voice to read his own email. It certainly wasn’t the only use of his voice in the film, but the synthetic nature of it drew the ire of fans.
“News of the synthetic audio, which Neville discussed this past week in interviews with me and with Brett Martin, at GQ, provoked a striking degree of anger and unease among Bourdain’s fans. ‘Well, this is ghoulish’; ‘This is awful’; ‘WTF?!’ people said on Twitter, where the fake Bourdain voice became a trending topic. The critic Sean Burns, who had reviewed the documentary negatively, tweeted, ‘I feel like this tells you all you need to know about the ethics of the people behind this project.’”
The three quotes Neville commissioned from the AI company were used primarily as transitions, since the film otherwise integrates Bourdain’s narration from TV, radio, podcasts, and audiobooks.
“So he got in touch with a software company, gave it about a dozen hours of recordings, and, Neville said, ‘I created an A.I. model of his voice.’ In a world of computer simulations and deepfakes, a dead man’s voice speaking his own words of despair is hardly the most dystopian application of the technology. But the seamlessness of the effect is eerie. ‘If you watch the film, other than that line you mentioned, you probably don’t know what the other lines are that were spoken by the A.I., and you’re not going to know,’ Neville said. ‘We can have a documentary-ethics panel about it later.’”
One of the voice-overs prepares the viewer for the device early on, in an homage to the films “Laura” and “Sunset Boulevard,” both narrated by dead men. All of the lines spoken in the film, however, had been written by Bourdain himself.
“You’re probably going to find out about this anyway, so here’s a little preëmptive truth-telling,” Bourdain says, in disembodied voice-over, in the movie’s first few minutes. “There’s no happy ending.”
Sam Gregory is a former filmmaker and the program director of Witness, a human-rights nonprofit that focuses on ethical applications of video and technology. “In some senses, this is quite a minor use of a synthetic-media technology,” he told me. “It’s a few lines in a genre where you do sometimes construct things, where there aren’t fixed norms about what’s acceptable.” But, he explained, Neville’s re-creation, and the way he used it, raise fundamental questions about how we define the ethical use of synthetic media.
Read more at newyorker.com