PaymentsJournal

Viewpoint from the Biometrics Institute – Spoof or Proof?

By Isabelle Moeller
September 25, 2017
in Industry Opinions
Biometrics Eye Scan
The security of biometrics technology is in the spotlight and stakeholders must take a balanced view on its strengths and vulnerabilities, says Isabelle Moeller, Chief Executive, Biometrics Institute. As deployments proliferate, the technology’s credibility rests on the industry’s will to collaborate globally.  

The Oxford English Dictionary offers two definitions for the verb ‘spoof’: ‘To make (something) appear foolish by means of parody; to send up’ and ‘To render a system useless by providing it with false information.’

Sadly, where the spoofing of biometric security technologies is concerned, only the latter applies, and there is little to laugh about. The recent rise of biometrics deployments in consumer services has confirmed spoofing as a vulnerability that needs careful management. A wide variety of specialist interest groups, friendly and otherwise, make it their mission to expose the limitations of each solution brought to market. Indeed, detractors routinely use high-profile failures to suggest that biometrics as a mode of security is just too risky a business to be worthwhile. They are wrong.

It’s the system, man

As with all flavours of security technologies, the weak points in biometrics have spawned a race between those creating and applying the solutions and those seeking to undermine them. As new solutions are launched, weaknesses are identified and countermeasures developed.

In May, a BBC reporter, with the aid of his twin brother, ‘cracked’ a high street bank’s voice recognition system. The weak point here, however, stemmed more from how the solution was implemented than from a failing of the recognition technology itself. All biometric systems have some vulnerabilities (it’s worth noting that the iPhone’s fingerprint sensor was successfully hacked just a week after launch). What matters is how these vulnerabilities are mitigated.

In general, there are two factors that determine how effective a biometric solution is, and both require some trade-offs before a usable solution can be reached.

Firstly, the solution is only as good as the biometric data it enrols and then recaptures each time the user authenticates. The recaptured ‘image’ can be impacted by myriad factors depending on the mode being used: ambient noise can interfere with voice recognition, eyelashes can obscure an iris image, varying skin conditions can impact fingerprints, and so on.

Secondly, the matching process also depends on how tightly the solution’s parameters are set. Insisting on too high a degree of similarity between the stored and presented image creates too many ‘false negatives’, where the genuine user is denied access and the system is rendered unusable.

It’s also worth remembering that a hacker never needs to replicate an individual’s biometric image absolutely; they need only replicate enough of it to fool the system. If the matching process isn’t rigorous enough, ‘false positives’ result, where fraudulent users are granted access and the point of the system is defeated.

There is always a balance to be struck. How should the system conclude that it has sufficient verifiable data to confirm the user’s identity?
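The threshold trade-off described above can be sketched in a few lines. This is purely illustrative Python, not any real biometric SDK; the similarity scores and threshold values are hypothetical:

```python
# Illustrative sketch of threshold-based biometric matching (hypothetical
# scores; not a real biometric SDK). A presented sample is accepted when
# its similarity to the enrolled template meets the threshold.

def decide(similarity: float, threshold: float) -> bool:
    """Accept the presented sample if it matches the template closely enough."""
    return similarity >= threshold

def error_counts(threshold, genuine_scores, impostor_scores):
    """Count false negatives (genuine users rejected) and
    false positives (impostors accepted) at a given threshold."""
    false_negatives = sum(not decide(s, threshold) for s in genuine_scores)
    false_positives = sum(decide(s, threshold) for s in impostor_scores)
    return false_negatives, false_positives

# Hypothetical similarity scores: capture quality varies for the genuine
# user, and some spoof attempts come close to the real template.
genuine = [0.91, 0.85, 0.78, 0.95]
impostors = [0.40, 0.62, 0.71, 0.55]

# A loose threshold lets impostors through; a strict one locks out
# the genuine user on a poor-quality capture.
print(error_counts(0.6, genuine, impostors))  # loose setting
print(error_counts(0.8, genuine, impostors))  # strict setting
```

Real systems tune this threshold against measured false-match and false-non-match rates for the chosen modality and user population, rather than picking a value by eye.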

Horses for courses

The choice of biometric modality has a big impact here. The variations between different biometrics mean that some are better suited to particular use-cases than others. Fingerprints, for example, leave a latent image on the data capture surface, which makes them excellent for criminal identification. That said, the latent image itself can be copied, replicated and used in a spoof attack. Irises, on the other hand, leave no replicable trace, making them far less useful in criminal applications. Thanks to the social sharing revolution, digital pictures of people’s faces are in very easy supply, particularly in developed countries, meaning that facial biometric solutions have to work harder than ever to verify their subject, using 3D mapping and liveness detection techniques.

The technologies are responding. In the near future, the use of new, cheaper multispectral sensors (which simultaneously capture images of the same biometric at multiple wavelengths) will greatly improve the industry’s ability to detect false biometrics. In automated border control systems that use face recognition, for example, infrared sensors can now determine if a mask is being used.

High stakes, getting higher

The growing popularity of iris and voice recognition systems presents fresh challenges. Siri, Cortana and Alexa are all gaining serious traction, and when banking and payment apps start to use iris recognition to grant access to the user’s account, the stakes rise significantly, and the motivation of thieves will surely step up accordingly.

Although improving spoof detection is important, trying to chase a perfect anti-spoofing technique for any biometric is a fool’s errand. Try as the industry might, it cannot prove a negative; it can never say that a capture device is completely foolproof, simply because it can’t be tested against the unlimited universe of current and future spoofing techniques.

With facility comes responsibility

In terms of the end-user experience, biometrics are terrific; they are fast, convenient, reliable and, arguably, are untouchable by any other consumer-facing security technology today. Indeed, the facility enabled by biometrics is driving mass deployments across a host of devices and services; something that is bound to continue, despite its vulnerabilities.

This all adds up to an important point. A single biometric solution is not a ‘silver bullet’ and, in many cases, should be deployed as a factor in a multifactor authentication solution – one that is carefully designed and parameterised to mitigate the risks of failure associated with the use-case to which it is applied.
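One way to read “a factor in a multifactor authentication solution” is that a passing biometric match should be necessary but never sufficient on its own. A minimal sketch, with hypothetical factor names and a hypothetical threshold (not any particular vendor’s policy):

```python
# Hypothetical sketch: a biometric match used as one factor among
# several, rather than as a 'silver bullet'. Factor names and the
# threshold value are illustrative assumptions, not a real API.

from dataclasses import dataclass

@dataclass
class AuthAttempt:
    biometric_score: float   # similarity to the enrolled template
    device_trusted: bool     # possession factor: a known device or token
    pin_ok: bool             # knowledge factor: correct PIN entered

def authenticate(attempt: AuthAttempt, bio_threshold: float = 0.8) -> bool:
    """Require a passing biometric match PLUS at least one
    independent factor before granting access."""
    factors = [
        attempt.biometric_score >= bio_threshold,
        attempt.device_trusted,
        attempt.pin_ok,
    ]
    return factors[0] and sum(factors) >= 2
```

Under this policy, a spoofed biometric alone is not enough to get in, and a stolen PIN alone is not enough either; the risk parameters would be tuned to the use-case, as the paragraph above suggests.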

To this end, biometrics’ credibility, together with the security of those that use its technologies, will be determined by the industry’s ability to identify – and adhere to – best practice.

While the legal framework and policy creation for biometric data privacy remains a matter for lawmakers, commercially independent guiding principles for the design, deployment and operation of biometric technologies already exist. They are the product of international collaboration between academics, governments, vendors and other key stakeholders at the Biometrics Institute.

Only by sharing live deployment experiences, establishing guiding principles, creating best practice guidelines and promoting the responsible use of biometrics globally, can the industry truly claim to be representing the interests of end-users. Biometrics may be perfect, but our use of them is not. As the adoption of biometric technologies continues to accelerate, it is our collective responsibility to ensure we strike the right balance between delivering a great user-experience and mitigating security risks along the way.

Tags: Biometrics, Customer Retention, Security
