Once crude and expensive, deepfakes are now a rapidly rising cybersecurity threat.
A UK-based firm lost $243,000 after a deepfake replicated a CEO's voice so accurately that the person on the other end authorized a fraudulent wire transfer. A similar "deep voice" attack that precisely mimicked a company director's distinct accent cost another company $35 million.
Perhaps even more frightening, the chief communications officer of crypto company Binance reported that a "sophisticated hacking team" used video from his past TV appearances to create a believable AI hologram that tricked people into joining meetings. "Other than the 15 pounds that I gained during COVID being noticeably absent, this deepfake was refined enough to fool several highly intelligent crypto community members," he wrote.
Cheaper, sneakier and more dangerous
Don’t be fooled into taking deepfakes lightly. Accenture’s Cyber Threat Intelligence (ACTI) team notes that while recent deepfakes can be laughably crude, the trend in the technology is toward more sophistication with less cost.
In fact, the ACTI team believes that high-quality deepfakes seeking to mimic specific individuals in organizations are already more common than reported. In one recent example, deepfake technology from a legitimate commercial provider was used to create fraudulent news anchors that spread Chinese disinformation, demonstrating that malicious use has arrived and is already affecting real organizations.
A natural evolution
The ACTI team believes that deepfake attacks are the logical continuation of social engineering. In fact, the two should be considered together, because the primary malicious potential of deepfakes lies in their integration into other social engineering ploys. That combination makes an already cumbersome threat landscape even harder for victims to navigate.
ACTI has tracked significant evolutionary changes in deepfakes over the last two years. For example, between January 1 and December 31, 2021, underground chatter related to sales and purchases of deepfake goods and services focused extensively on common fraud, cryptocurrency fraud (such as pump-and-dump schemes) or gaining access to crypto accounts.
A lively market for deepfake fraud
However, the trend from January 1 to November 25, 2022 shows a different, and arguably more dangerous, focus on using deepfakes to gain access to corporate networks. In fact, underground forum discussions of this mode of attack more than doubled as a share of deepfake-related chatter (from 5% to 11%), while discussion of using deepfakes to bypass security measures quintupled (from 3% to 15%).
This shows that deepfakes are shifting from crude crypto schemes into sophisticated means of gaining access to corporate networks: bypassing security measures and accelerating or augmenting techniques already used by myriad threat actors.
The ACTI team believes that the changing nature and use of deepfakes are partially driven by improvements in technology, such as AI. The hardware, software and data required to create convincing deepfakes are becoming more widespread, easier to use and cheaper, with some professional services now charging less than $40 a month to license their platforms.
Emerging deepfake trends
The rise of deepfakes is amplified by three adjacent trends. First, the cybercriminal underground has become highly professionalized, with specialists offering high-quality tools, methods, services and exploits. The ACTI team believes this likely means that skilled cybercrime threat actors will seek to capitalize by offering a broader range of underground deepfake services.
Second, because of the double-extortion techniques used by many ransomware groups, in which attackers exfiltrate sensitive data before encrypting it and then leak what they steal, underground forums hold an endless supply of stolen, sensitive data. This enables deepfake criminals to make their work much more accurate, believable and difficult to detect. That corporate data is also increasingly indexed, making it easier to find and use.
Third, dark web cybercriminal groups also have larger budgets now. The ACTI team regularly sees cyber threat actors with R&D and outreach budgets ranging from $100,000 to $1 million, and as high as $10 million. This allows them to experiment with and invest in services and tools that augment their social engineering capabilities, including active session cookies, high-fidelity deepfakes and specialized AI services such as vocal deepfakes.
Help is on the way
To mitigate the risk of deepfakes and other online deceptions, follow the SIFT approach detailed in the FBI's March 2021 alert. SIFT stands for Stop, Investigate the source, Find trusted coverage and Trace the original content. This can include studying the issue to avoid hasty emotional reactions, resisting the urge to repost questionable material and watching for the telltale signs of deepfakes.
It can also help to consider the motives and reliability of the people posting the information. If a call or email purportedly from a boss or friend seems strange, do not respond; contact the person directly through a known channel to verify. As always, check "from" email addresses for spoofing and seek multiple, independent and trustworthy information sources. In addition, online tools can help you determine whether images are being reused for sinister purposes or whether several legitimate images are being used to create fakes.
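As an illustration of that last point, here is a minimal sketch of an image-reuse check built on the open-source Pillow and imagehash libraries; the file names and distance threshold are hypothetical, and this is a local approximation of what dedicated reverse image search tools do, not a specific product the article endorses.

```python
# Minimal sketch: flag images that appear to be reused or lightly modified
# copies of known reference photos, using perceptual hashing.
# Requires: pip install pillow imagehash
# File names and the distance threshold are illustrative assumptions.
from PIL import Image
import imagehash

# Perceptual hashes of images you already trust or have seen before,
# e.g. an executive's official headshots (hypothetical file names).
reference_hashes = {
    name: imagehash.phash(Image.open(name))
    for name in ["ceo_headshot.jpg", "press_photo.jpg"]
}

def looks_reused(candidate_path: str, max_distance: int = 8) -> bool:
    """Return True if the candidate image is a near-duplicate of a reference.

    The Hamming distance between perceptual hashes stays small for resized,
    recompressed or lightly edited copies of the same image.
    """
    candidate_hash = imagehash.phash(Image.open(candidate_path))
    return any(
        candidate_hash - ref_hash <= max_distance
        for ref_hash in reference_hashes.values()
    )

print(looks_reused("suspicious_profile_pic.jpg"))
```

A small hash distance suggests the candidate is a cropped, resized or recompressed copy of a known photo, a common starting point for fabricated profiles; a dedicated reverse image search service will be more robust than this local check.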
The ACTI team also suggests incorporating deepfake and phishing training, ideally for all employees; developing standard operating procedures for employees to follow if they suspect an internal or external message is a deepfake; and monitoring the internet for potentially harmful deepfakes via automated searches and alerts, along the lines of the sketch below.
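To make the monitoring suggestion concrete, here is a minimal sketch assuming you have already set up an alert feed (for example, a Google Alerts RSS feed for executives' names or your company brand); the feed URL, keywords and polling interval are hypothetical placeholders, not part of the ACTI guidance.

```python
# Minimal sketch: poll an alert feed and surface new items that mention
# deepfake-related terms. The feed URL, keywords and polling interval
# below are illustrative assumptions.
# Requires: pip install feedparser
import time

import feedparser

FEED_URL = "https://example.com/alerts.rss"  # placeholder: your alert feed URL
KEYWORDS = ("deepfake", "synthetic video", "voice clone")

seen_links: set[str] = set()

def check_feed() -> None:
    """Fetch the feed once and print unseen entries that match a keyword."""
    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries:
        link = entry.get("link")
        text = f"{entry.get('title', '')} {entry.get('summary', '')}".lower()
        if link and link not in seen_links and any(k in text for k in KEYWORDS):
            seen_links.add(link)
            print(f"ALERT: {entry.get('title')} -> {link}")

if __name__ == "__main__":
    while True:
        check_feed()
        time.sleep(3600)  # poll hourly; tune to your monitoring needs
```

In practice you would route hits into a ticketing system or chat channel rather than printing them, and combine brand terms with executives' names to cut down on noise.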
It can also help to plan crisis communications before victimization occurs. This can include pre-drafting press releases and responses for vendors, authorities and clients, as well as providing links to authentic information.
An escalating battle
Presently, we're witnessing a silent battle between automated deepfake detectors and the emerging deepfake technology. The irony is that the technology used to automate deepfake detection will likely be used to improve the next generation of deepfakes. To stay ahead, organizations should resist the temptation to relegate security to afterthought status. Rushed security measures, or a failure to understand how deepfake technology can be abused, can lead to breaches, with resulting financial loss, reputational damage and regulatory action.
The bottom line: organizations should focus heavily on combating this new threat and training employees to be vigilant.
Thomas Willkan is a cyber threat intelligence analyst at Accenture.