Ok Alexa, Let’s Commit Fraud

This article on NoJitter.com by Gary Audin does a great job identifying both the frequency of attacks on voice-based channels and the emerging attack methods, and it is well worth a read. Note that the highlights below do not include several graphics that provide additional stats:

“Voice and speaker recognition and text analysis have come a long way. I use voice-to-text transcription for writing some of my blogs. Call centers can use voice recognition technologies to identify customers. But are these technologies reliable for identification? Are there techniques that can use speech technologies to fraudulently impersonate me?
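
To ground those questions, here is a minimal sketch of how voice-based caller identification is commonly implemented: the caller’s audio is reduced to a fixed-length speaker embedding and compared to the embedding captured at enrollment. Everything here, the embedding source, the vector size, and the 0.75 threshold, is an illustrative assumption rather than any vendor’s actual pipeline.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two speaker-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_same_speaker(enrolled: np.ndarray, candidate: np.ndarray,
                    threshold: float = 0.75) -> bool:
    """Accept the caller when the new embedding is close enough to the
    enrolled one. The threshold is made up for illustration; real systems
    tune it against false-accept and false-reject targets."""
    return cosine_similarity(enrolled, candidate) >= threshold
```

The security question the article raises falls directly out of this design: any audio, whether recorded, pitch-shifted, or synthesized, that lands within the acceptance threshold is indistinguishable from the genuine customer.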

The Pindrop Report

I recently read Pindrop’s “2018 Voice Intelligence Report.” The report reveals that the rate of voice fraud increased 350% from 2013 to 2017, from one fraudulent call out of 2,900 in 2013 to one out of 638 in 2017. The graphics in this blog are from the Pindrop report.

Fraud Costs are High

U.S. data breach incidents investigated in 2017 hit a record high of 1,579 breaches, according to the 2017 Data Breach Year-End Review released by the Identity Theft Resource Center and CyberScout. This represents a 44.7% increase over the figures reported for 2016. And according to an academic study, malicious actors in these breaches are “generating, spending, and reinvesting” $1.5 trillion worth of cybercrime profits.

Profits include $860 billion from online markets, $500 billion from intellectual property or trade secret theft, and $160 billion from data trading. Ransomware and cybercrime-as-a-service were less lucrative at $1 billion and $1.6 billion, respectively.”

The article continues by examining the exploits well-funded criminals are likely to deploy in the future, especially by turning machine learning tools against us. It mentions synthetic voice tools currently available online that can mimic a person’s voice; more recently, a tool was demonstrated that can alter video of a person to make them appear to say anything.

“The sources for voice fraud calls include the PSTN, mobile phones, VoIP, and chat. Therefore, fraud detection methods have to cover multiple media. Do not forget that fraud can also be perpetrated through unmonitored IVR and chatbot programs.

Synthetic Voice

Through the use of AI, synthetic voice is potentially the latest dangerous technology. An example is Google Duplex, whose goal is to help automate tasks like booking a reservation using a synthetic voice based on a real person. This technology will evolve to perform more complex actions.

The advent of synthetic voice raises privacy and security issues, and perpetrators will exploit it in their attacks. A hacked Google Assistant could engage in financial transactions with the victim’s bank and credit card accounts.

Businesses already use machine learning for monitoring and matching a customer’s device use, behavior, and voice. Machine learning can also be used to create synthetic voices, spoof ANIs (automatic number identification) or CLIs (calling line identification), and conduct massive robocalls that can work through an IVR to verify stolen account information.
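
On the defensive side, here is a toy sketch of the kind of multi-signal risk scoring described above. Every field name, weight, and cutoff is a made-up assumption for illustration; production systems learn such weights from labeled fraud data rather than hand-setting them.

```python
from dataclasses import dataclass

@dataclass
class CallSignals:
    # All fields are hypothetical illustrations, not a vendor schema.
    ani_carrier_mismatch: bool    # claimed ANI disagrees with carrier metadata
    calls_from_ani_last_hour: int
    rapid_ivr_traversal: bool     # menu navigation faster than a human could manage
    voice_synthetic_score: float  # 0..1 output of an (assumed) liveness model

def risk_score(s: CallSignals) -> float:
    """Combine the signals into a 0..1 risk score with hand-set toy weights."""
    score = 0.0
    if s.ani_carrier_mismatch:
        score += 0.35
    if s.calls_from_ani_last_hour > 5:
        score += 0.25
    if s.rapid_ivr_traversal:
        score += 0.15
    score += 0.25 * s.voice_synthetic_score
    return min(score, 1.0)

suspicious = CallSignals(ani_carrier_mismatch=True,
                         calls_from_ani_last_hour=12,
                         rapid_ivr_traversal=True,
                         voice_synthetic_score=0.8)
print(risk_score(suspicious))  # 0.95 -> route to step-up authentication
```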

Synthetic Voice Attacks

A perpetrator may try to impersonate a valid speaker to avoid positive identification. The perpetrator can record the customer’s voice by calling them or listening to them via social media recordings. The quality of the voice can be quite good. Voice modification software can change the perpetrator’s voice so it matches the customer’s voice through the use of electronic pitch control. Additionally, voice synthesis software can be used to produce a fake voice avatar.”
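
To make the “electronic pitch control” point concrete: pitch shifting is commodity functionality in open-source audio libraries, which is part of why pitch alone is a weak anchor for identity. Below is a hedged sketch using the librosa library; the file names are placeholders.

```python
import librosa
import soundfile as sf

# Load a recording at its native sample rate (file name is a placeholder).
y, sr = librosa.load("recorded_voice.wav", sr=None)

# Shift the pitch up by three semitones; a negative n_steps shifts it down.
shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=3)

# Write the modified audio out for comparison.
sf.write("shifted_voice.wav", shifted, sr)
```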

Overview by Tim Sloane, VP, Payments Innovation at Mercator Advisory Group
