Facebook parent Meta Platforms has launched an anti-hate speech and misinformation campaign in South Africa, which will be carried out on its platforms as well as on local and national radio stations.
The campaign allows users of WhatsApp, Facebook and Instagram to identify and report content that is intentionally misleading or false, or that could threaten the integrity of the upcoming election. It is part of the social media giant's preparations ahead of South Africa's general election in May.
Meta, like other social media platforms, has come under fire in recent years for allowing misinformation to spread on its platform as long as it drives up traffic volumes.
In Meta's case, the most notorious incident involved British consulting firm Cambridge Analytica, which gained unauthorised access to the data of some 50 million Facebook users and used it in then-candidate Donald Trump's favour.
In 2019, Meta was fined US$5-billion by the US Federal Trade Commission over the incident. But Meta's public policy director for sub-Saharan Africa, Balkissa Idé Siddo, says the company has learned a lot since then.
“We're applying the lessons learned from our involvement in more than 200 elections around the world,” Siddo said in an interview with TechCentral this week. “Over the past eight years, we have deployed industry-leading transparency tools for election and political advertising, developed comprehensive policies to prevent election interference and voter fraud, and built one of the largest third-party fact-checking programmes to combat the spread of misinformation.”
IEC partnership
Siddo said Meta is working closely with South Africa's Electoral Commission (IEC) to help develop policies and tools to assist the commission in the fight against election-related misinformation. The initiative includes a training programme for IEC staff on media literacy and how to detect misinformation.
But she added that while the social media giant's work with the commission may intensify in the run-up to the election, the partnership is a continuing process in a growing relationship between the two institutions.
“Misinformation and disinformation are not new, and they will not stop after the election,” Siddo said.
Read: AI deepfakes and SA’s fight to protect the 2024 election
To help educate users, Meta's moderators quickly remove content deemed harmful, while content classified as misinformation is kept but down-ranked and labelled so that users who come across it can learn to recognise it, both on Meta's platforms and elsewhere.
However, advances in technology are adding new challenges to the content moderation landscape. Artificial intelligence and deepfakes are improving the quality of fake content on social media platforms. Siddo believes this highlights the importance of better education about misinformation, so that users can recognise fake content and respond appropriately.
But Siddo said there are also positive aspects to AI that are less talked about than its potential dangers. In discussions with various stakeholders, including content creators, Meta has noticed growing excitement about how AI can improve content creation capabilities, especially for smaller, less-resourced media outlets and individual content producers.
Meta is also using AI as part of its arsenal to combat the spread of misinformation on its platforms. “We have more than 40 000 staff dedicated to safety and security, and we partner with local fact-checkers. But we are also experimenting with AI tools to help identify harmful content, and we have found that large language models are much faster at detection,” said Siddo.
At the international level, Meta is part of a partnership with other social media companies and owners of AI content creation platforms, such as Microsoft, Google, Shutterstock and Midjourney, to help social media platforms identify content created by AI.
Once content is identified as AI-generated, Meta informs users through labelling. “They [the content creation platforms] need to embed a watermark into their content so that we can recognise it when it reaches our platform,” said Ben Waters, policy communications manager for Africa and the Middle East at Meta.
Locally, Meta, Google and TikTok parent ByteDance signed a cooperation agreement with the IEC in July last year, under which an independent three-member committee was established to evaluate cases of misinformation reported on social media platforms.
Depending on its findings, the committee issues recommendations to the IEC, which can require offending platforms to demote or remove malicious content. However, X (formerly Twitter), one of the largest social media platforms, is not a party to the agreement. – © 2024 NewsCentral Media