“We're now seeing rapid growth in the misuse of these new AI tools by bad actors, including AI-generated video, audio, and image-based deepfakes. This trend is introducing new threats, including election and financial fraud, non-consensual pornography, harassment, and the next generation of cyberbullying. We need to act urgently to address all of these issues,” Microsoft's vice chair and president wrote.
A Microsoft blog post states that “as a company we are committed to a robust and comprehensive approach,” listing six distinct areas of focus:
- Strong safety architecture. This includes “continuous red team analysis, preemptive classifiers, blocking of fraudulent prompts, automated testing, and rapid banning of users who abuse the system, based on powerful and extensive data analysis.” (A minimal sketch of such a prompt-screening layer appears after this list.)
- Durable media provenance and watermarking. (“At last year’s Build 2023 conference, we announced a media provenance feature that uses cryptographic techniques to mark and sign AI-generated content with metadata about its source and history.”) A sketch of this signing idea also follows the list.
- Protecting our services from unauthorized content and activity. (We're “committed to identifying and removing deceptive and fraudulent content” hosted on services like LinkedIn and Microsoft's gaming network.)
- Strong collaboration across industry, government and civil society. This includes “active engagement” with “others in the technology sector,” civil society organizations, and “appropriate cooperation with governments.”
- Modernized legislation to protect people from the abuse of technology. “We look forward to contributing ideas and supporting new initiatives by governments around the world.”
- Public awareness and education. “We need to help people learn how to tell the difference between legitimate and fake content, including through watermarks. This will require new public education tools and programs, including closer collaboration with civil society and leaders across society.”
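To make the first item concrete, here is a purely illustrative Python sketch of the general kind of gate the post describes: prompts are screened before generation, flagged prompts are blocked, and users who repeatedly trip the filter are banned. The patterns, threshold, and function names are all invented for illustration; Microsoft has not published its safety stack at this level of detail, and a production system would use trained classifiers rather than regexes.

```python
import re
from collections import defaultdict

# Illustrative rules only; real systems rely on trained classifiers
# and far richer abuse signals than simple pattern matches.
BLOCKED_PATTERNS = [
    re.compile(r"\bundress\b", re.IGNORECASE),
    re.compile(r"fake (video|audio) of", re.IGNORECASE),
]
ABUSE_THRESHOLD = 3  # strikes before a ban (assumed value)

strikes = defaultdict(int)  # user_id -> count of blocked prompts
banned = set()              # user_ids barred from the service

def screen_prompt(user_id: str, prompt: str) -> bool:
    """Return True if the prompt may proceed to generation."""
    if user_id in banned:
        return False
    if any(p.search(prompt) for p in BLOCKED_PATTERNS):
        strikes[user_id] += 1
        if strikes[user_id] >= ABUSE_THRESHOLD:
            banned.add(user_id)  # "rapid banning of users who abuse the system"
        return False             # "blocking of fraudulent prompts"
    return True
```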
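And for the provenance item, the following is a minimal sketch of how cryptographically signed provenance metadata works in general (in the spirit of C2PA-style "Content Credentials"), assuming an Ed25519 keypair from the `cryptography` package. The manifest fields and helper names are assumptions for illustration, not Microsoft's actual format: a hash of the media and some metadata about its origin are signed at generation time, so any later tampering with either the content or the metadata breaks verification.

```python
import hashlib
import json
from datetime import datetime, timezone

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_provenance(content: bytes, generator: str, key: Ed25519PrivateKey) -> dict:
    """Build and sign a provenance manifest for a piece of generated media."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,  # e.g. the model or service that produced it
        "created": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest, "signature": key.sign(payload).hex()}

def verify_provenance(content: bytes, record: dict, public_key) -> bool:
    """Check that the manifest matches the content and the signature is valid."""
    manifest = record["manifest"]
    if manifest["content_sha256"] != hashlib.sha256(content).hexdigest():
        return False  # the media was altered after signing
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(record["signature"]), payload)
        return True
    except InvalidSignature:
        return False

# Usage: sign at generation time, verify on display.
key = Ed25519PrivateKey.generate()
record = sign_provenance(b"<image bytes>", "hypothetical-image-model", key)
assert verify_provenance(b"<image bytes>", record, key.public_key())
```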
Thanks to longtime Slashdot reader theodp for sharing the article.