A month ago, consulting firm Accenture presented potential clients with an unusual, attention-grabbing pitch for a new project. Instead of the usual slide deck, the clients saw deepfakes of several real employees standing on a virtual stage, offering a polished description of the project they hoped to work on.
“We wanted them to meet our team,” says Renato Scaff, a senior managing director at Accenture who came up with the idea. “It’s also a way for us to differentiate ourselves from our competitors.”
The deepfakes were generated, with the employees’ consent, by Touchcast, an Accenture investment that provides a platform for interactive presentations featuring avatars of real or synthetic people. Touchcast avatars can respond to typed or spoken questions, using AI models that analyze relevant information and generate answers on the fly.
“There’s an element of creepiness,” Scaff says of his company’s deepfake employees. “But there’s a much bigger element of cool.”
Deepfakes are powerful and dangerous weapons of disinformation and reputational damage. But the same technology is being adopted by companies that see it as a smart and catchy new way to reach and interact with customers.
These experiments are not limited to the corporate sector. Monica Ares, executive director of the Innovation, Digital Education and Analytics Lab at Imperial College Business School in London, has created a deepfake of a real professor to answer students’ questions outside of class, and she hopes it will prove a more engaging and effective way to teach. Ares says the technology has the potential to increase personalization, provide new ways to coach and assess students, and boost engagement. “It still feels like a human talking to you, so it feels very natural,” she says.
As is so often the case with AI these days, the technology is putting a once-exclusive capability into everyone’s hands. Hollywood studios have long been able to use software to copy an actor’s voice, face, and mannerisms, but in recent years AI has made similar technology widely accessible and virtually free. In addition to Touchcast, companies like Synthesia and HeyGen offer businesses a way to generate avatars of real or synthetic people for presentations, marketing, and customer service.
Edo Segal, founder and CEO of Touchcast, believes digital avatars could become a new way to present and interact with content. His company has developed a software platform called Genything that lets anyone create their own digital twin.
At the same time, deepfakes are a growing concern as elections approach in many countries, including the United States. Last month, an AI-generated robocall featuring a fake Joe Biden was used to spread disinformation about the election. Taylor Swift was also recently targeted by deepfake porn generated using widely available AI image tools.
“Deepfake images are certainly alarming,” Ben Buchanan, the White House special assistant for AI, told WIRED in a recent interview. The Swift deepfakes are “an important data point in a broader trend that disproportionately impacts women and girls, who are overwhelmingly targeted for online harassment and abuse,” he said.
The new U.S. AI Safety Institute, established under a White House executive order issued last October, is currently developing standards for watermarking AI-generated media. Meta, Google, Microsoft, and other tech companies are also developing technology to spot AI fakes in the high-stakes AI arms race.
But some political uses of deepfakes highlight the technology's dual potential.
Pakistan’s former prime minister Imran Khan addressed a rally of his party’s supporters last Saturday despite being locked up in prison. The former cricket star, jailed in what his party has characterized as a military coup, used deepfake software to conjure a convincing copy of himself sitting behind a desk and delivering a speech he never actually gave.
As AI-powered video generation improves and becomes easier to use, businesses and consumers alike are likely to see more legitimate uses of the technology. Chinese technology giant Baidu recently developed a way for users of its chatbot app to create deepfakes of themselves to send Chinese New Year greetings.
Even for early adopters, the potential for abuse is never far from mind. “There’s no question that security needs to be a top priority,” says Accenture’s Scaff. “Once you have synthetic twins, you can make them do and say anything.”