Meta Platforms CEO Mark Zuckerberg during the Acquired LIVE event at Chase Center in San Francisco, California, USA on September 10, 2024. (David Paul Morris/Bloomberg via Getty Images)
Artificial intelligence is developing too quickly for lawmakers to track and protect citizens, including children and adolescents, who are being negatively affected by the proliferation of fake content and falsehoods spread on social media.
Social media and the use of AI to create fake online images have become the “tobacco of today,” said Ajit Gopalakrishnan, an AI thought leader and director of Odin Education, a division of global technology company Gendermark Automation.
At a recent AI workshop in Durban, Gopalakrishnan gave the example of Vodacom's advertising for Samsung Galaxy mobile phones. The ad uses a dull image of a lifeless-looking young woman holding a cell phone, which is then replaced by a photo posted online that oozes colour, confidence and joy.
“We're currently living through probably the highest rates of teenage depression in history, and we're turning it into advertising.
“In real life [in the advert], look at her expression. She is dull, she is boring, she's a black-and-white person. And what does she look like online at the same moment?”
The cell phone ad is reminiscent of cigarette advertising from the '80s and '90s, which portrayed successful people smoking, being happy and enjoying wealthy leisure activities such as fancy parties and sailing.
Highlighting eight risks of AI to society, businesses and humanity, Gopalakrishnan told the workshop: “This is not about future risk. This is a current risk.”
These include threats such as polarization, bias, unemployment, loss of intellectual property, decentralization, mass manipulation, deepfakes, and the inability of laws to keep up with AI.
Social media and AI are already being used to polarise society. Gopalakrishnan said that extreme content on the political left and right drives traffic and keeps people engaged for longer, while content expressing moderate views is less popular because it doesn't evoke emotions and is therefore less likely to be pushed by the algorithms.
AI also comes with creator-specific biases.
Workers in many professions, from accountants to screenwriters to Hollywood actors to electricians, could be replaced by AI and robots.
“We know that the plumber's job is probably the safest because machines can't do that yet,” he says.
Gopalakrishnan said resumes are already screened by algorithms, and those without specific keywords are filtered out.
The European Union's AI Act seeks to counter this by imposing broad obligations on various actors across the lifecycle of high-risk AI systems.
“For example, AI should not be allowed to make decisions because humans cannot inspect the decision-making process.
“People have a right to be held accountable, so decisions that have significant legal consequences cannot be delegated to algorithms.”
Gopalakrishnan said this year is being touted as the last year of human elections, as artificial intelligence can be used to survey and assess the state of mind of voters and inform promises made by political candidates during election campaigns. Deepfake videos and audio spoofing, however, pose new threats of scams.
“Imagine if I called my mother with my voice and my face [on the mobile phone] and asked her for her PIN number and bank account details. She would do that.
“This is the reality that we're in today. I think people are just ignorant of what's going to happen,” he said.
There were also concerns about the use of AI in weapons development and warfare.
“Russia and the United States, two countries that cannot agree on anything, agree on the fact that everyone must have lethal autonomous weapons. In future wars, AI will make the killing decisions.
“In fact, one of the things we know is that the Israeli government is actually using AI and databases to determine targets,” he said.
Gopalakrishnan believes that the proliferation of AI-generated fake content will lead to a time when people will no longer trust information published on the internet.
Estee Cockroft, founder of Screen Smarts, which teaches people how to be street smart on screen, said it was the “Wild West” when it came to cyberspace and the law.
“Not only are laws far behind where we currently stand in terms of invention, but technology is accelerating rapidly,” she said.
“It's moving so fast that policy can't keep up.
Cockroft suggests that children and teenagers should not be given access to social media because of the negative effects.
“We are seeing a dramatic increase in depression linked to social media around the world, which experts are calling a ‘global youth mental health crisis’.
“Not only does comparison create unattainable ideals, but children and adolescents desperately need what technology cannot provide: close connections with family and friends,” she says.
But while more and more voices are holding companies like Meta accountable for fake posts, she said accountability and control remain a “long way off” and there is “zero transparency on the part of companies”.
“What we shared in the past can still be manipulated today using technology that wasn't invented when we posted the content. And we can hardly guard against this,” Cockroft said.
She regularly hosts workshops on AI in schools, focusing on topics such as polarisation, deepfake technology, interrogating sources, the dangers of AI-generated nudity, and protecting your privacy online.
“There is an urgent need for open conversations on these topics… Gen Alpha is generally more skeptical than previous generations.
“However, it is essential to incorporate critical thinking skills into all digital citizenship education.”
Cockroft also predicts a time when generative AI will control most media channels and people will no longer believe what they read or see online.
“As humans, we will become skeptical of all content because we can no longer differentiate between real and fake,” she said.
She acknowledges that technology plays an important role in children's upbringing.
“We need kids to use AI and learn robotics. It’s important for them to experiment with coding and learn through discovery.
“But you don't need a smartphone or social media to be tech-savvy or connected to the internet.
“Smartphone use should be delayed until high school, and children should not use social media until they are 16 years old.”