Despite years of evidence to the contrary, many Republicans still believe President Joe Biden's 2020 victory was illegitimate. On Super Tuesday, a number of election-denying candidates won their primaries, including Brandon Gill, the son-in-law of right-wing commentator Dinesh D'Souza and a promoter of the debunked film 2000 Mules. In the run-up to this year's elections, claims of election fraud remain a staple for candidates on the right, fueled by disinformation and misinformation both online and offline.
And the emergence of generative AI could make the problem even worse. A new report from the Center for Countering Digital Hate (CCDH), a nonprofit that tracks hate speech on social platforms, finds that even though generative AI companies say they have policies in place to keep their image-creation tools from being used to spread election-related disinformation, researchers were able to circumvent those safeguards and create the images anyway.
Some of the images featured political figures such as President Joe Biden and Donald Trump, but others were more generic and, CCDH lead researcher Callum Hood worries, potentially more misleading. Some of the images the researchers prompted show, for example, militias outside a polling place, ballots thrown in the trash, and voting machines being tampered with. In one case, researchers were able to get Stability AI's DreamStudio to generate an image of President Biden looking sick in a hospital bed.
“The real weakness was in images that could be used to support false claims of a stolen election,” Hood says. “Most platforms don't have clear policies on that, and they don't have clear safety measures either.”
CCDH researchers tested 160 prompts on ChatGPT Plus, Midjourney, DreamStudio, and Image Creator, and found that Midjourney was the most likely to produce misleading election-related images, doing so about 65% of the time. Researchers were able to prompt ChatGPT Plus to do so only 28% of the time.
“We see that there can be significant differences in the safety measures these tools put in place,” Hood says. “If one tool can contain these weaknesses so effectively, it shows that the others haven't bothered to do the same.”
In January, OpenAI announced it was taking steps to “ensure our technology is not used in ways that could undermine this process,” including prohibiting images that would discourage people from “participating in a democratic process.” In February, Bloomberg reported that Midjourney was considering an outright ban on the creation of political images. DreamStudio prohibits the creation of misleading content but does not appear to have a specific election policy. Image Creator also prohibits the creation of content that could threaten election integrity, but it still allows users to generate images of public figures.
Kayla Wood, a spokesperson for OpenAI, told WIRED that the company is working to “improve transparency on AI-generated content and design mitigations like declining requests that ask for image generation of real people, including candidates. We're actively developing provenance tools, including implementing C2PA digital credentials, to assist in verifying the origin of images created by DALL-E 3. We will continue to adapt and learn from the use of our tools.”
Microsoft, Stability AI, and Midjourney did not respond to requests for comment.
Hood worries that the problem with generative AI is twofold: generative AI platforms must not only prevent the creation of misleading images, they must also be able to detect and remove them. A recent report in IEEE Spectrum found that Meta's own system for watermarking AI-generated content was easily circumvented.
“Right now, platforms are not particularly well prepared for this, so the elections will be one of the real tests of safety around AI imagery,” Hood says. “We need both the tools and the platforms to make a lot more progress on this, particularly around images that could be used to promote claims of a stolen election or to discourage people from voting.”