At the World Economic Forum in Davos, Switzerland, last month, Nick Clegg, Meta's global president, called early efforts to detect artificially generated content "the most urgent issue" facing the technology industry today.
On Tuesday, Mr Clegg proposed a solution. Meta said it will promote a technology standard that companies across the industry can use to recognize markers in photos, video and audio material that indicate the content has been generated using artificial intelligence.
The standard could allow social media companies to quickly identify AI-generated content posted on their platforms and add labels to that material. If widely adopted, the standard could help identify AI-generated content from companies such as Google, OpenAI, Microsoft, Adobe, and Midjourney, which provide tools to quickly and easily create artificial posts.
“It's not a perfect answer, but we didn't want the perfect to be the enemy of the good,” Clegg said in an interview.
He said he hoped the effort would rally companies across the industry to adopt standards for detecting and signaling that content is artificial, making it easier for all companies to recognize it.
As the United States enters a presidential election year, industry insiders believe AI tools will be widely used to post fake content to misinform voters. Over the past year, people have used AI to create and spread fake videos of President Biden making false and inflammatory statements. The New Hampshire attorney general's office is also investigating a series of robocalls that appear to have used AI-generated audio of Mr Biden urging people not to vote in the recent primary.
Meta, which owns Facebook, Instagram, WhatsApp, and Messenger, is in a unique position: it runs the world's largest social networks capable of distributing AI-generated content, while also developing technology that is driving widespread consumer adoption of AI tools. Mr Clegg said Meta's position gave it special insight into both the generation and distribution sides of the issue.
Meta is relying on a set of technical specifications called the IPTC and C2PA standards, which record in a piece of content's metadata whether digital media is authentic. Metadata is the underlying information embedded in digital content that provides a technical description of that content. Both standards are already widely used by news organizations and photographers to describe photos and videos.
Many technology and media companies, including Adobe, which makes the Photoshop editing software, have spent years lobbying their peers to adopt the C2PA standard and have formed the Content Authenticity Initiative. The initiative, a partnership among dozens of companies including The New York Times, says its purpose is to fight misinformation and "add a tamper-proof layer of provenance to all types of digital content, including photos, videos, and documents."
Companies offering AI generation tools could add the standards' markers to the metadata of the video, photo, or audio files their tools help create. That would signal to social networks such as Facebook, Twitter, and YouTube that such content is artificial when it is uploaded to their platforms. Those companies could then add labels noting that the posts were generated by AI, alerting users who view them on the social networks.
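In its simplest form, the upload-time check described above amounts to looking in a file's metadata for a known marker. The sketch below is illustrative only: it scans a file's raw bytes for the IPTC "digital source type" URI that identifies generative-AI media (the `trainedAlgorithmicMedia` value in IPTC's vocabulary). A real C2PA check would verify a cryptographically signed manifest rather than match a string, and the function name here is hypothetical.

```python
# Illustrative sketch only: detect the IPTC digital-source-type marker
# that generators can embed in a file's XMP metadata to flag AI-made media.
# Real C2PA verification validates signed provenance manifests; this
# string match shows the idea, not a production implementation.

# IPTC NewsCodes value identifying content produced by a generative model.
AI_SOURCE_TYPE = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def looks_ai_generated(file_bytes: bytes) -> bool:
    """Return True if the raw bytes contain the AI digital-source-type URI."""
    return AI_SOURCE_TYPE in file_bytes
```

A platform could run such a check when a file is uploaded and, when it returns True, attach an "AI-generated" label to the resulting post. The obvious weakness, which Mr Clegg acknowledges, is that metadata can simply be stripped before uploading.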
Meta and others will also require users who post AI-generated content to label it as such when uploading it to the companies' apps. Users who fail to do so will face penalties, though the companies have not disclosed what those penalties will be.
Mr Clegg also said that Meta would add more prominent labels to digitally created or altered posts that the company determines pose a "particularly high risk of materially misleading the public about material matters," potentially giving viewers more information and context about the content's origin.
AI technology is advancing rapidly, and researchers have struggled to keep pace in developing tools that can spot fake content online. Although companies like Meta, TikTok, and OpenAI have developed ways to detect such content, engineers have quickly found ways to circumvent those tools. Artificially generated video and audio have proved even harder to identify than AI photos.
(The New York Times Company is suing OpenAI and Microsoft for copyright infringement over the use of Times articles to train artificial intelligence systems.)
“Bad actors will always try to circumvent the standards we create,” Clegg said. He described the technology as “both a sword and a shield” for the industry.
Part of the difficulty stems from the fragmented approach that technology companies take. Last fall, TikTok announced new policies requiring users to add labels to videos and photos they upload that were created using AI. YouTube announced a similar initiative in November.
Meta's new proposal seeks to combine some of these efforts. Other industry efforts, such as the Partnership on AI, have brought together dozens of companies to discuss similar solutions.
Mr Clegg said he expected more companies to sign on to the standard, especially in the run-up to the presidential election.
“We felt especially strongly in this election year that waiting until all the pieces of the jigsaw puzzle are in place before acting is not justified,” he said.