Responsible use of machine learning to verify identities at scale



In today’s highly competitive digital market, consumers are more empowered than ever. They have the freedom to choose which companies they do business with and plenty of options to change their mind in an instant. A misstep that diminishes a customer’s sign-up or onboarding experience can send them to a competing brand with a single click.

Consumers are also increasingly concerned about how businesses protect their data, adding an extra layer of complexity for businesses looking to build trust in a digital world. Eighty-six percent of respondents in a KPMG survey reported growing concerns about data privacy, while 78% expressed fears about the amount of data being collected.

At the same time, rising digital adoption among consumers has led to an astonishing increase in fraud. Businesses need to build trust and make consumers feel like their data is protected, but also need to provide a fast, seamless onboarding experience that truly protects against back-end fraud.

As such, artificial intelligence (AI) has been hyped as the silver bullet of fraud prevention in recent years due to its promise to automate the process of identity verification. However, despite all the chatter surrounding its application in digital identity verification, many misunderstandings about AI remain.


Machine learning as a panacea

As the world currently stands, there is no true AI in which a machine can successfully verify identities without human interaction. When companies talk about using AI for identity verification, they are actually talking about machine learning (ML), an application of AI. In ML, the system is trained on large amounts of data and adapts and improves, or “learns,” over time.

When applied to the identity verification process, ML can play a pioneering role in building trust, removing friction and fighting fraud. It allows companies to analyze massive amounts of digital transaction data, create efficiencies and identify patterns that can improve decision-making. However, getting caught up in the hype without really understanding machine learning and how to use it properly can diminish its value and, in many cases, lead to serious problems. When using ML for identity verification, companies should consider the following.

The potential for machine learning bias

Bias in machine learning models can lead to exclusion, discrimination and, ultimately, a negative customer experience. Training an ML system on historical data carries the biases of that data into the resulting models, which can be a serious risk. If the training data is biased, or is subject to unintended bias introduced by those building the ML systems, decision-making may rest on flawed assumptions.

When an ML algorithm makes wrong assumptions, it can create a domino effect in which the system consistently learns the wrong thing. Without human expertise from both data and fraud scientists, and oversight to identify and correct the bias, the error will repeat itself and compound over time.
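One common way to surface this kind of bias before it compounds is to compare the model’s pass rate across demographic groups. The sketch below is a hypothetical illustration of that check; the group labels, decisions, and the 0.8 review threshold are invented for the example, not drawn from any specific verification product.

```python
# Hypothetical sketch: measuring whether a verification model's pass rate
# differs across groups in its decision history. All data here is invented.

def pass_rates(decisions):
    """decisions: list of (group, passed) tuples -> pass rate per group."""
    totals, passes = {}, {}
    for group, passed in decisions:
        totals[group] = totals.get(group, 0) + 1
        passes[group] = passes.get(group, 0) + (1 if passed else 0)
    return {g: passes[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of lowest to highest group pass rate (1.0 = perfect parity).
    A common rule of thumb flags ratios below 0.8 for human review."""
    return min(rates.values()) / max(rates.values())

decisions = [("A", True), ("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False), ("B", False)]
rates = pass_rates(decisions)      # {"A": 0.75, "B": 0.25}
print(disparate_impact(rates))     # 0.25 / 0.75, well below 0.8: flag for review
```

A check like this does not fix bias on its own, but it gives the human reviewers the article calls for a concrete signal that the training data or model needs attention.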

New forms of fraud

Machines are great at detecting trends that have already been identified as suspicious, but their crucial blind spot is novelty. ML models use data patterns and therefore assume that future activities will follow the same patterns or at least a consistent rate of change. This leaves open the possibility that attacks will succeed simply because they have not yet been seen by the system during training.

Layering a human fraud-review team on top of machine learning identifies and flags new fraud and feeds updated data back to the system. Human fraud experts can flag transactions that may have initially passed identity verification checks but are suspected of being fraudulent, and can escalate that data for further investigation. The ML system then encodes that knowledge and adapts its algorithms accordingly.
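The feedback loop described above can be sketched in a few lines. This is a hypothetical illustration, not a real product API: the class, method names, and labels are all invented to show the shape of the idea, where an analyst’s corrected label overrides the machine’s decision and joins the next training batch.

```python
# Hypothetical sketch of a human-in-the-loop feedback queue: transactions
# that pass automated checks but look suspicious to a fraud analyst are
# relabeled and fed back as training data for the next model update.

class FeedbackLoop:
    def __init__(self):
        self.training_examples = []  # (features, label) pairs for retraining

    def record_decision(self, features, model_passed):
        # The machine's decision is recorded with its original label.
        label = "pass" if model_passed else "fail"
        self.training_examples.append((features, label))

    def analyst_override(self, features, reason):
        # A human expert flags a transaction the model passed; the corrected
        # label enters the next training batch alongside the machine's.
        self.training_examples.append((features, "fraud"))
        return {"features": features, "label": "fraud", "reason": reason}

loop = FeedbackLoop()
loop.record_decision({"doc_score": 0.91}, model_passed=True)
flag = loop.analyst_override({"doc_score": 0.91}, reason="reused selfie detected")
print(flag["label"])  # fraud
```

The design point is simply that human corrections are stored as first-class training data rather than one-off case notes, so the next model update can learn from them.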

Understanding and explaining decisions

One of the biggest knocks against machine learning is its lack of transparency, yet transparency is a basic tenet of identity verification. Companies must be able to explain how and why certain decisions are made, and share information about each stage of the process and the customer journey with regulators. Lack of transparency can also fuel mistrust among users.

Most ML systems provide a simple pass or fail score. Without transparency in the process behind a decision, it can be difficult to justify when regulators come to the door. Continuous data feedback from ML systems can help companies understand and explain why decisions were made and make informed decisions and adjustments to identity verification processes.
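One lightweight way to move beyond a bare pass/fail score is to attach reason codes and the underlying check results to every decision. The sketch below is an invented example under assumed check names and thresholds; it is not any vendor’s actual scoring scheme.

```python
# Hypothetical sketch: each decision carries reason codes and the check
# results that produced it, so the outcome can be explained to a regulator
# or customer later. Check names and thresholds are invented.

CHECKS = {
    "document_authentic": 0.90,  # minimum model confidence per check
    "face_match":         0.85,
    "address_verified":   0.80,
}

def verify(scores):
    """scores: dict of check name -> model confidence. Returns an
    explainable decision rather than a bare pass/fail."""
    failed = [name for name, threshold in CHECKS.items()
              if scores.get(name, 0.0) < threshold]
    return {
        "decision": "fail" if failed else "pass",
        "reasons": failed or ["all checks above threshold"],
        "scores": scores,  # retained for the audit trail
    }

result = verify({"document_authentic": 0.95, "face_match": 0.70,
                 "address_verified": 0.88})
print(result["decision"], result["reasons"])  # fail ['face_match']
```

Keeping the per-check scores alongside the decision is what makes the audit trail possible: the record shows not just that an applicant failed, but which check failed and by how much.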

There is no doubt that ML plays an important role in identity verification and will continue to do so in the future. However, it is clear that machines alone are not enough to verify identities at scale without adding risk. The power of machine learning is best realized alongside human expertise and with data transparency to make decisions that help businesses build and grow customer loyalty.

Christina Luttrell is the chief executive officer of GBG Americas, which comprises Acuant and IDology.
