Malicious use of deepfake technology represents a growing threat to businesses.
The proliferation of AI tools is making it increasingly easy for attackers to use deepfakes in attacks, cybersecurity firm Kaspersky has warned.
Bethwell Ophir, Kaspersky Enterprise Client Lead for Africa, said: “While the time and effort required to create these attacks often outweighs the potential ‘rewards,’ businesses and consumers across Africa should be aware that deepfakes are likely to be increasingly exploited in the future.”
“The potential ramifications can be significant, from blackmail to financial fraud to spreading misinformation via social media. Cybercriminals are always looking to scale their campaigns, and the coming years are expected to see an increase in targeted attacks using deepfakes, especially against influential and wealthy individuals and organizations, where the time and effort required to create a deepfake would be justified.”
Kaspersky Lab's investigation revealed that deepfake creation tools and services are available on darknet marketplaces. These services offer GenAI video creation for a variety of purposes, including fraud, blackmail, and theft of sensitive data. According to estimates by Kaspersky experts, a one-minute deepfake video costs around $300.
Human behavior, a lack of digital literacy, and the inability to differentiate between fake and genuine content all add further pressure to an already difficult situation.
According to the 2023 Kaspersky Business Digitization Survey, which gathered opinions from 2,000 respondents in the Middle East, Turkey and Africa region, 51% of employees believe they can tell the difference between deepfakes and real images. In tests, however, respondents were able to distinguish real images from AI-generated ones only 25% of the time.
Kaspersky said this puts organizations at risk, given that employees are often the primary target of phishing and other social engineering attacks.
When it comes to deepfakes, the potential for malicious use is clear.
For example, cybercriminals can create fake videos of CEOs requesting wire transfers or approving payments, which can be used to steal company funds. Compromising videos and images of individuals can also be created and used to extort money or information.
Ophir added: “Although the technology to create high-quality deepfakes is not yet widely available, one of the most likely use cases to emerge is generating audio in real time to impersonate someone. For example, a treasurer at a multinational company was recently tricked into transferring $25 million to fraudsters who used deepfake technology to impersonate the company's chief financial officer in a video conference. Africa is not immune to this threat. It is important to remember that deepfakes are a threat not only to businesses but also to individual users: they can be used to impersonate people, which is a growing cyber threat.”
The company recommends strengthening the human firewall by educating employees about deepfakes, how they work, and the risks they pose, including ongoing awareness and training efforts that teach employees how to identify them.