California lawmakers are developing multiple proposals to require watermarking of AI-generated content, aiming to curb abuses of the emerging technology in areas ranging from political campaigns to the stock market. From the report: At least five lawmakers have introduced or are considering proposals that would require AI companies to provide some form of verification that a video, photo, or text was created by their technology. The push comes as rapidly advancing AI can generate images and sounds with unprecedented realism. Supporters worry the technology is ripe for abuse and could fuel the spread of deepfakes, digitally manipulated likenesses used to misrepresent people to the public, which are already appearing in presidential elections. Such measures, however, are likely to face scrutiny from the tech industry.
"In a critical election year, and in an online world rife with misinformation, the ability to discern what is true is essential," said Drew Liebert, director of the California Technology and Democracy Initiative. AI has already caused harm, Liebert said, pointing to the aftermath of an AI-generated photo last May that falsely depicted an attack in the United States. "An infamous photo circulated on the internet claiming that the Pentagon had been attacked, and it momentarily caused real confusion. A [$500 billion] loss in the stock market would not have happened if people could tell in an instant that it was not a real image at all," he said. A question for Slashdot readers: can some type of watermark prevent AI deepfakes?