Fraudulent insurance claims account for more than $80 billion in losses per year in the United States, leading to higher premiums for all policyholders. Card issuers are estimated to lose $1.3 billion this year from cards issued to synthetic identities, and total annual fraud losses from synthetic identities receiving loans and credit cards are estimated at $6 billion. Lenders, issuers, and insurance carriers are increasingly combating fraudulent applications and claims with custom modeling solutions and AI.
Checking pay stubs to verify income is no longer sufficient. Even when completing an application with their authentic identity, a consumer can easily fabricate paycheck stubs or purchase them online for less than ten dollars. When fraudsters create synthetic identities to take out an uncollateralized loan with no intention of repaying it, they can just as easily create or buy fake pay stubs to corroborate their synthesized existence.
Loan and card application screeners, as well as insurance claims processors, face a similar dilemma: the volume of applications and claims is so high that not every one can be scrutinized heavily. In the insurance industry, adjusters have to be very selective about which claims they actually investigate.
This is one area where modeling plays a role, and it can be applied at multiple stages. In the first stages of the loan application process, organizations can leverage custom modeling to better identify which potential borrowers may be dishonest and which are synthetic identities.
Naftali Harris, co-founder of SentiLink, a firm focused on synthetic identity fraud, explained it like this: “Synthetic identities and real identities behave very differently. Real people show up, in terms of their credit history, roughly when they’re 18 or in their twenties and go about their lives living in a pretty ordinary way. Synthetic identities, though, behave in really erratic ways.” Often those erratic ways are repeatable, as organized fraud rings create hundreds of synthetic identities that behave in a manner that is unusual but consistent across the fake applicants.
This is exactly the kind of behavior and pattern recognition that modeling and machine learning (ML) are designed to catch. When custom modeling flags behavior associated with synthetic identities, those are the applications where organizations should both request proof of existence and income, and spend more time verifying and validating the information applicants provide.
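To make the idea concrete, here is a minimal, purely illustrative sketch of how erratic applicant behavior can stand out statistically. The feature (new tradelines opened in the last 90 days) and thresholds are hypothetical; a production system would use a trained model over far richer credit-history features.

```python
# Hypothetical sketch: flag applicants whose behavior is a statistical
# outlier versus the applicant population. Real systems use trained ML
# models; this z-score check just illustrates the pattern-detection idea.
from statistics import mean, stdev

def zscore_flags(values, threshold=2.5):
    """Flag values more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(values), stdev(values)
    return [abs(v - mu) / sigma > threshold for v in values]

# Hypothetical feature: new tradelines opened in the last 90 days.
# Real applicants cluster low; a synthetic identity being "aged up"
# by a fraud ring spikes.
tradelines_90d = [1, 0, 2, 1, 0, 1, 2, 1, 0, 1, 14]
flags = zscore_flags(tradelines_90d)
print([i for i, f in enumerate(flags) if f])  # → [10], the outlier
```

Applicants flagged this way would then get the extra document checks and manual verification described above, rather than being auto-declined.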
A similar approach is useful in the insurance industry, where an estimated 10 percent of all U.S. insurance payouts, roughly $80 billion per year, go to fraudulent claims. It isn’t feasible to review and investigate every single claim. Where insurance carriers primarily relied on rules-based systems in the past, the major players have switched to custom modeling and analytics. Allstate, for example, leverages an AI-based modeling system while also maintaining a human touch.
In an article from April, Allstate elaborates on how it leverages modeling to identify the claims with high-risk characteristics that warrant deeper scrutiny, and how it uses AI to be proactive rather than reactive in detecting emerging threats and trends.
An even newer approach leverages advanced modeling and analytics to identify inconsistencies, signs of hesitation, and other indicators of emotion and intent. Modeling in this space has been pioneered by Neuro-ID, which leverages patented software to monitor, analyze, and score digital human interactions. This technology, known as Confidence Indicator Services, fits well into application and claims processes, where how users react to different questions or parts of a form can be directly leveraged. Neuro-ID collects and measures various behavioral indicators of confidence, indecision, state of mind, intention, and emotion.
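Neuro-ID’s scoring is proprietary, so the following is only a hypothetical sketch of the general idea: turning raw interaction telemetry (dwell time on a field, number of corrections) into a single confidence-style score. All field names, weights, and the five-second baseline are invented for illustration.

```python
# Hypothetical behavioral scorer (NOT Neuro-ID's actual algorithm):
# penalize hesitation and repeated edits on fields a genuine applicant
# answers from memory, such as name, SSN, or date of birth.
def confidence_score(field_events, expected_seconds=5.0):
    """Return a 0-100 score from per-field dwell times and edit counts."""
    score = 100.0
    for field in field_events:
        overtime = max(0.0, field["dwell_s"] - expected_seconds)
        score -= 4.0 * overtime          # hesitation penalty per extra second
        score -= 6.0 * field["edits"]    # penalty per correction (backspace/retype)
    return max(0.0, round(score, 1))

session = [
    {"field": "name", "dwell_s": 3.1, "edits": 0},
    {"field": "ssn", "dwell_s": 14.8, "edits": 3},  # hesitation on "own" SSN
]
print(confidence_score(session))  # → 42.8
```

A low score like this would not prove fraud on its own; as with the credit-behavior models above, it would route the application into deeper verification.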
Applying Confidence Indicator Services to the early and middle stages of screening can enable organizations to i