Cybercrime, artificial intelligence and new frontiers in the battle against fraud

There is no blood being shed, but the digital world, through which banks transmit treasure and identity, is clearly a battlefield. And artificial intelligence systems, which can make their own decisions at speeds humans can’t comprehend, are fast becoming a game-changing weapon.

On offense as well as defense.

“Fraudsters are more agile than the banking groups they’re trying to infiltrate, so they’re often able to develop malicious code and methods that take advantage of process vulnerabilities and security lapses,” says Will Griffith, Financial Services Industry Lead for the data and analytics company Teradata.

Banks, he says, “are on the defensive, reacting to the fraudsters’ moves, using analytics to detect anomalous behaviors.” But the fraudsters are like insurgents.

“It’s a form of asymmetric warfare with the fraudster choosing the time, place and magnitude of their attacks,” says Griffith, who could just as well be discussing an attack by the Taliban in terms of the how, not the what. “The criminal’s intent is to blend in with normal customers and transactions. For the banks, it’s a little like trying to find a needle in a stack of needles.”

And the danger posed by fraudsters armed with AI is only going to rise, experts say.

In the summer of 2017, Ukrainian banks were among that country’s institutions hit by a new ransomware attack called Petya, which locked up computer files.

While ransomware attacks are nothing new, the Petya attack had a twist, one that has financial industry security people concerned.

Petya’s authors used artificial intelligence, or AI, to identify and exploit vulnerabilities in Ukrainian security systems, says Mason Wilder, research specialist with the Association of Certified Fraud Examiners (ACFE).

The nefarious use of AI by Petya’s authors, says Wilder, is a harbinger of things to come: “In the future, artificial intelligence will likely pose much more of a threat than it does now.”

Malware meets machine learning

Fraudsters currently use AI mostly to automate cyberattacks and increase the volume of several common threats: phishing emails, or worms that infect computers and Internet of Things devices and conscript them into botnets to carry out distributed denial-of-service (DDoS) attacks.

However, Wilder says these attacks may be paired with basic machine learning programs that make them more efficient at mining personally identifiable information (PII) from large data sets, including public-facing social media accounts. That information can be incorporated into the phishing emails, or used to gain access to a victim’s bank accounts, open new lines of credit or apply for loans with financial institutions.

To defend against intrusions, financial institutions are increasingly using AI to analyze both customer and attacker behaviors, keeping vetted communications flowing while screening out malicious ones.

“Detecting malware has become a big data problem which requires the help of self-learning machines to handle the complexity beyond human abilities, and improve the accuracy of threat detection,” says James Rodman Barrat, a documentary filmmaker, speaker and author of “Our Final Invention: Artificial Intelligence and the End of the Human Era.”

Machine learning, a subset of AI, offers the financial industry a wide array of defenses, says Barrat. It can be used to detect and classify malicious files such as ransomware, Trojan horses, viruses and rootkits; analyze abnormal user and network behavior; perform advanced event analytics; and identify encrypted malware.
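
To make the first of those defenses concrete, here is a minimal sketch of supervised malware classification, assuming a labeled corpus and a few engineered static features (byte entropy, file size, imported-API count). The features, data and model choice are illustrative assumptions, not any vendor’s actual pipeline.

```python
# Hypothetical sketch: classifying files as malicious or benign from
# engineered static features. All data here is synthetic; a real system
# would extract hundreds of features from the binaries themselves.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in data: 1,000 files, three static features per file
# (byte entropy, file size in KB, count of imported APIs).
X = rng.normal(loc=[5.0, 300.0, 40.0], scale=[1.5, 200.0, 20.0], size=(1000, 3))
y = rng.integers(0, 2, size=1000)  # 0 = benign, 1 = malicious (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# A random forest is a common first model for this kind of classification.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test)))
```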

But given the growing threat from AI-enabled fraudsters, Barrat says the banking industry needs to double down on its tech advantages.

“The history of cybersecurity has been to prepare for yesterday’s threats,” he says. “After an exploit occurs, defenses are built against it. But those defenses might be useless against today’s and tomorrow’s exploits.”

The answer? Analyzing abnormal user and network behavior in real time.

That’s “the most important component of protection using machine learning,” says Barrat.
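
As a hedged illustration of that idea (not Barrat’s, nor any particular bank’s, system), the sketch below trains an isolation forest on historical session features and scores each new event as it arrives. The features (login hour, transfer amount, requests per minute) are hypothetical.

```python
# Illustrative sketch only: flagging abnormal user behavior with an
# isolation forest trained on historical activity. Features are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Historical "normal" behavior: mostly daytime logins, modest transfers.
history = np.column_stack([
    rng.normal(13, 3, 5000),     # login hour
    rng.lognormal(4, 1, 5000),   # transfer amount
    rng.normal(5, 2, 5000),      # requests per minute
])

detector = IsolationForest(contamination=0.01, random_state=42).fit(history)

def score_event(event):
    """Return True if the event looks anomalous and should be reviewed."""
    return detector.predict(np.asarray(event).reshape(1, -1))[0] == -1

# A 3 a.m. login moving a large sum at a high request rate stands out.
print(score_event([3.0, 5000.0, 40.0]))   # likely True (anomalous)
print(score_event([14.0, 60.0, 5.0]))     # likely False (normal)
```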

The family that’s hacked together hacks together

In the “somewhat near future,” fraudsters may use AI to recreate and steal individual online personas, says Wilder, the ACFE research specialist.

“It’s entirely foreseeable that programs will be able to recognize a person’s voice, speech patterns, tone, inflection and syntax based on enough data analysis, then incorporate those patterns into a recreation of that victim’s voice in any number of applications,” he says.

He points to the plethora of information that people put out about themselves and loved ones that can be used against them by nefarious actors.

“If given access to enough videos taken of your family, a program could conceivably impersonate your child on a phone call telling you they are in trouble and need you to transfer money immediately, using a voice indistinguishable from your child,” says Wilder. “This would also have implications for any biometric security measures based on your voice.”

The next level of sophistication in AI is generally considered “deep learning,” in which the AI takes those patterns and makes its own decisions about how to react to or categorize a pattern without human input.

“Those applications are still pretty rare in fraud prevention because they don’t lend themselves to auditing procedures, and it’s hard for them to cut down on false positives without any human input,” says Wilder.

And then there is the cost. “They require significant investment that at this point may not be justifiable in terms of losses prevented,” Wilder says.

Despite this, financial services executives unanimously believe that by the year 2030, AI will have an impact on human tasks, according to a survey of 260 large global organizations (about 15 percent of them financial institutions) conducted by technology industry market research firm Vanson Bourne on behalf of Teradata.

Some other key findings include:

  • While 100 percent of financial services executives believe AI will have an impact on human tasks by 2030, 46 percent say that AI and humans will co-exist, each performing tasks that are optimized to their strengths; 23 percent believe that AI will be integrated with humans resulting in enhanced human capabilities to perform enterprise tasks; and 26 percent say that AI will replace humans for most enterprise tasks.
  • When it comes to AI investment for financial services organizations, 62 percent expect those investments to drive increased revenue and top-line growth, while 38 percent are targeting cost take-out and efficiency gains.
  • For every dollar invested in AI today, financial services companies expect to double their return on investment in five years, and triple the return in 10 years.
  • Lack of IT infrastructure (46 percent) and lack of access to talent (41 percent) are the two biggest barriers financial services organizations expect to face when trying to achieve AI realization across the organization.

Meanwhile, on the battlefield that is the digital domain, banks have successfully deployed AI as a defensive weapon, says Griffith. Teradata has worked with Danske Bank to create and launch an AI-driven fraud detection platform.

The engine uses deep learning to analyze tens of thousands of latent features, scoring millions of online banking transactions in real time to provide actionable insight that distinguishes true fraud from false positives.
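
Danske Bank’s engine itself is proprietary, but the general pattern (a learned model emits a fraud probability per transaction, and only high scores are routed to investigators) can be sketched. Everything below, from the synthetic data to the review threshold, is an assumption for illustration.

```python
# Hypothetical sketch of real-time transaction scoring; not Danske Bank's or
# Teradata's actual engine. A small neural net outputs a fraud probability,
# and only transactions above a threshold go to human investigators.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)

# Synthetic training data: 5,000 past transactions with 10 numeric features
# (amount, merchant category, device-fingerprint match, etc.) and fraud labels.
X = rng.normal(size=(5000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5000) > 2.0).astype(int)

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=7)
model.fit(X, y)

REVIEW_THRESHOLD = 0.9  # tune to trade missed fraud against analyst workload

def route(transaction):
    """Score one incoming transaction and decide whether to flag it."""
    p_fraud = model.predict_proba(np.asarray(transaction).reshape(1, -1))[0, 1]
    return "investigate" if p_fraud >= REVIEW_THRESHOLD else "approve"

print(route(rng.normal(size=10)))
```

In a sketch like this, the review threshold is where the false-positive economics play out: raising it sends fewer alerts to investigators, at the risk of missing more fraud.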

By significantly reducing the cost of investigating false positives, Danske Bank “increases its overall efficiency and is poised for substantial savings,” Griffith says.

And at least one other financial bottom line stands to be bolstered: Teradata expects the platform to deliver 100 percent ROI in its first year of production.

