Bankers beware: Deepfake technology is ever more real


As a British energy company executive listened to the voice on the other end of the phone line, the familiar German accent convinced him that he was speaking to his parent company’s CEO. So when the executive was instructed to wire $243,000 to the CEO’s bank account, he went ahead and did it.

But it turned out that both the caller and the account were fake. The executive had fallen prey to “deepfake” technology, which uses artificial intelligence to produce convincingly lifelike audio and video.

After the scam came to light in September, a spokeswoman for the energy company told the Washington Post that the deepfake technology had imitated not only the CEO’s voice and accent but also his tone and cadence. She said the stolen money was funneled through accounts in Hungary and Mexico before being scattered elsewhere and disappearing.

As banks increasingly rely on technology, including video, to speed up identity verification, the possibility that what they’re seeing or hearing is not real has financial industry leaders searching for ways to make sure they are not victimized.

“There are already proven instances of market-level disinformation campaigns using deepfakes,” said Sultan Meghji, co-founder and CEO of Neocova. “Market or individual institutional manipulation has been seen as part of larger disinformation or manipulation campaigns by well-known actors. We’ve also seen instances of nefarious actors attempting to steal from banks using simple deepfakes.”

Meghji, whose company provides cloud-based storage for financial institutions, said the concern for banks is that combining deepfake technology with other programs might increase the ability of would-be scammers to make fraudulent transactions and create “maximum impact and disruption.”

Rivka Gewirtz Little, senior vice president of marketing and strategy at Socure, said she views these attacks as next-generation business email compromise (BEC) scams. In 2019, such scams cost U.S. businesses nearly $2 billion, according to the FBI.

In such attacks, “fraudsters hack email and pose as executives, ordering their staff to send high-dollar wire transfers,” said Little, whose company works closely with financial institutions, challenger banks and fintechs on digital identity verification. “Now fraudsters will pair robo-driven deepfake voice calls with email hacking to pull off even more sophisticated attacks.”

One danger is that deepfake technology could be paired with natural language processing (NLP) and natural language generation (NLG), allowing fraudsters to hold complex conversations between a spoofed executive and that executive’s staff. Banks should also be concerned about the use of deepfakes in automated attacks.

“We’ll see wider robo-calls to contact centers, for example,” Little said. “Armed with NLP and NLG analytics, these bots will be trained to engage with contact center agents in attempts to make changes to accounts for account takeover, glean information for later use in account takeover or even, in some rare cases, to originate transactions.”

Little said the financial industry’s push to speed transactions to increase customer satisfaction makes deepfake technology a bigger threat.

“In traditional payments, we have one to three days to settle a transaction, and therefore a window of time to spot an attack and stop the money from leaving the bank,” she said. But in the world of real-time payments, “we see money move out of institutions in seconds. As deepfake technology makes fraud attacks more solid, it’s just scarier in a world where there is no time to stop the transaction.”

Sherrod DeGrippo, senior director for threat research and detection at Proofpoint, said BEC attacks typically rely heavily on a combination of social engineering and mediated communications, such as email, to convince someone in an organization to send money to accounts under the attacker’s control.

To achieve this, she said, attackers pose as a party the recipient trusts and request money in a way that uses social engineering pressure to expedite the transaction and bypass normal security controls. “The crux of BEC scam success comes from the combination of trust the recipient is placing in the requestor, plus the social engineering pressures to expedite the transaction.”

As seen with the British energy company example, adding deepfakes to the mix “can increase the potency and success of those two factors,” DeGrippo said. “The attackers can significantly increase the social engineering pressures of the request by adding human emotion through voice and images… We can expect this to continue as this technique is used more and shown to be effective.”

Deepfakes take social engineering to a new level of sophistication, said Avivah Litan, a vice president and distinguished analyst at Gartner Inc.

The technology makes it “increasingly difficult for banks to stop hacks against their customers, who are traditionally the weakest security link in the chain,” she said. “This will inevitably lead to the bad guys stealing customer funds, while customers unwittingly authorize the theft.”

Another potential threat, according to Litan: deepfakes used in misinformation campaigns against banks, extorting money under the threat of damaging attacks on their political, social or financial reputations.

Socure’s Little said banks should use multidimensional identity verification and fraud tools to protect against fraudsters using deepfakes to scam contact centers. In many cases, she said, voice and behavioral biometrics can be used to detect these attacks.
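To make the voice-biometric idea concrete, here is a minimal sketch of the comparison step behind speaker verification, assuming the embeddings come from some upstream speaker-encoder model (not shown here) and using a purely illustrative threshold:

```python
# Sketch of speaker verification by embedding comparison.
# Assumption: an upstream model has already turned the customer's
# enrolled voiceprint and the inbound call audio into fixed-length
# embedding vectors; this code only handles the comparison step.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two voice embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_same_speaker(enrolled: np.ndarray, inbound: np.ndarray,
                    threshold: float = 0.75) -> bool:
    """Accept the caller only if similarity clears the threshold.

    The 0.75 value is illustrative; production systems tune it against
    measured false-accept and false-reject rates for their own model.
    """
    return cosine_similarity(enrolled, inbound) >= threshold
```

A cloned voice that fools the human ear may still land measurably far from the enrolled voiceprint in embedding space, which is one reason the technique is typically paired with other signals rather than used alone.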

But, as with every facet of cybersecurity, banks find themselves in an endless race to keep up with the latest technological challenge. The answer could be banks investing in tech-based solutions that use their own artificial intelligence to fight the bad guys’ AI.

“In the best-case scenario, banks will leverage multidimensional identity fraud tools to risk-assess every inbound call, producing an AI-based risk score which considers factors of device, carrier data, location, other (personally identifiable information) elements,” Little said. “A deepfake cannot make its way around this kind of multidimensional view of identity.”
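As a rough illustration of that multidimensional view (the signal names, weights and threshold below are assumptions for the sketch, not any vendor’s actual model), a risk scorer might combine independent identity signals so that a convincing voice alone cannot carry the call:

```python
# Hedged sketch of a multidimensional call-risk score: each inbound
# call is assessed across several independent identity signals, so a
# deepfake that beats one check still trips the others. All weights
# and field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class CallSignals:
    device_recognized: bool       # device fingerprint matches history
    carrier_consistent: bool      # carrier/line type matches records
    location_plausible: bool      # geolocation fits the customer's pattern
    pii_match_score: float        # 0.0-1.0 match on personally identifiable information
    voice_biometric_score: float  # 0.0-1.0 speaker-verification confidence

def risk_score(s: CallSignals) -> float:
    """Combine signals into a score from 0.0 (safe) to 1.0 (risky)."""
    score = 0.0
    score += 0.0 if s.device_recognized else 0.25
    score += 0.0 if s.carrier_consistent else 0.15
    score += 0.0 if s.location_plausible else 0.15
    score += 0.25 * (1.0 - s.pii_match_score)
    score += 0.20 * (1.0 - s.voice_biometric_score)
    return score

# Example: a deepfake might ace the voice check yet still look risky
# because the device and location signals don't line up.
call = CallSignals(device_recognized=False, carrier_consistent=True,
                   location_plausible=False, pii_match_score=0.9,
                   voice_biometric_score=0.95)
if risk_score(call) > 0.3:  # threshold would be tuned in production
    print("Route call for step-up verification")
```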

