With Custom Modeling, Deployment is Just as Critical as Design

Sarasota, FL, March 18, 2015 / By: Justin McDonald, The Fraud Practice LLC

Custom modeling and analytics is an advanced risk management technique that uses organization-specific data to identify trends and evaluate the risk of future transactions through statistical formulas, or models. Advancements in data science, machine learning and technology have made custom modeling solutions more affordable and attainable, and organizations have benefited from the increased availability of such services in the marketplace. This is particularly true for merchants and other mid-sized organizations that may not have the resources to build and manage custom modeling and analytics entirely in-house but now have more options for buying anywhere from partial to complete modeling solutions.


Whether an organization is building a custom modeling solution in-house, using a service provider or combining in-house and third party resources, the fundamental components of an effective custom modeling solution are the same. Statistical models must first be created, which requires historical data, a team of modeling experts and the right tools and software to design effective models. Next, the organization needs the infrastructure or platform to actually apply these models to live transactions, interpret the results and route the transactions accordingly. A common problem in the market, however, is that organizations put so much effort into ensuring the statistical models are accurate predictors of risk that the next step, how these models are actually deployed, is overlooked or treated as an afterthought.
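
As a rough sketch of that two-step split, the hypothetical Python snippet below fits a model offline on synthetic historical data (assuming the scikit-learn library is available), then serializes it for a separate serving layer to load and score live transactions; the feature names, file name and data are invented for illustration.

import pickle
import numpy as np
from sklearn.linear_model import LogisticRegression

# --- Step 1: model design (offline) ---
# Rows are engineered variables for past transactions; labels mark known fraud.
rng = np.random.default_rng(seed=42)
X_history = rng.random((1000, 3))                 # e.g. amount, velocity, distance
y_history = (X_history[:, 1] > 0.8).astype(int)   # stand-in fraud labels

model = LogisticRegression().fit(X_history, y_history)

# The finished model is serialized and handed off to the deployment platform.
with open("risk_model.pkl", "wb") as f:
    pickle.dump(model, f)

# --- Step 2: deployment (live) ---
# The serving platform loads the model and scores each incoming transaction.
with open("risk_model.pkl", "rb") as f:
    live_model = pickle.load(f)

incoming = np.array([[0.35, 0.92, 0.10]])         # one live transaction's variables
fraud_probability = live_model.predict_proba(incoming)[0, 1]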


This isn’t to say that model design is not a critical step. What good is efficiently deploying custom models if they cannot distinguish fraudulent from legitimate transactions? But organizations must also consider the other side of the coin: even if a custom model predicts fraud accurately nearly all of the time, it is of no benefit unless it can be applied to transactions, meaning the transactional and customer data can be fed to the model and the results can be interpreted to decide the course of action for each order.


Deployment is the second major step in executing custom models, after model design, but it is at least as important a step.

This statement holds true for in-house, outsourced and hybrid custom modeling solutions, but in the context of this article the focus is on assessing deployment features and capabilities when shopping for vendors. Organizations that have, or plan to have, in-house custom modeling and deployment should prioritize the same capabilities and considerations and build accordingly.

First, let’s be clear on what is meant by model deployment. At this stage the custom statistical models already exist; deployment refers to how these models are actually leveraged. A simplified example may help. Think of a statistical model as a formula. The formula takes into account many variables, often hundreds or thousands, and applies coefficients, or weights, to each variable. Deciding which variables to include and what weights to apply are examples of how models are designed.
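
To make this concrete, below is a minimal Python sketch of a model as a formula; the three variables, their weights and the logistic form are invented for illustration and stand in for the hundreds or thousands of variables a production model would use.

import math

# A statistical model reduced to its essence: a formula that multiplies each
# input variable by a learned weight and maps the weighted sum to a score.
WEIGHTS = {
    "address_mismatch": 1.8,   # 1 if billing and shipping addresses differ
    "card_velocity": 0.9,      # distinct cards seen on this email recently
    "order_amount_z": 0.4,     # order amount, standardized
}
INTERCEPT = -3.0

def risk_score(variables):
    """Weighted sum pushed through a logistic function: a 0-to-1 fraud score."""
    linear = INTERCEPT + sum(w * variables[name] for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-linear))

# Example: mismatched addresses, 3 cards on one email, pricier-than-usual order
print(risk_score({"address_mismatch": 1, "card_velocity": 3, "order_amount_z": 1.2}))
# -> roughly 0.88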


To deploy a custom model is to run it on a platform that can produce or calculate all of the variables the model needs; then, after feeding the model the data required to produce its predictive outcome or score, the platform must execute a decision (Approve/Decline/Review) contingent on the results. Deployment refers to the infrastructure and processes required to apply custom models to live transactions and subsequently route those transactions accordingly. Below are seven important considerations around model deployment that all organizations currently using, planning to use or considering custom modeling should keep in mind.
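
Before turning to those considerations, here is a minimal sketch of the score-then-route flow just described; the thresholds, function names and outcome labels are illustrative assumptions rather than any particular vendor's configuration.

# The decision step the platform must execute once the model returns a score.
DECLINE_THRESHOLD = 0.90   # scores at or above this are rejected outright
REVIEW_THRESHOLD = 0.60    # scores in the middle band go to manual review

def route_transaction(score):
    """Map a model score to one of the three standard outcomes."""
    if score >= DECLINE_THRESHOLD:
        return "Decline"
    if score >= REVIEW_THRESHOLD:
        return "Review"
    return "Approve"

def process_order(order, derive_variables, model):
    """End-to-end deployment flow: derive the variables, score, then route."""
    variables = derive_variables(order)   # the platform computes the inputs
    score = model(variables)              # the model produces the prediction
    return route_transaction(score)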

Feeding the Model

The deployment platform can be thought of as a hub connected to multiple databases, third party services and data sources that provide all of the information, or variables, feeding the models. On a more basic level, variables can be binary, such as whether or not shipping and billing addresses match or whether the payment card number being used is on a blacklist. The variables a model relies on can also be more complex. For example, a model may call for variables such as the distance between billing and shipping addresses, or a velocity of change count like the number of different payment cards associated with the same email address. The model must be provided the distance in miles or kilometers, or the velocity count itself; it is not expected to make these prerequisite calculations. Often models rely on very complex variables that may be correlated or involve sophisticated aggregations across many data points. The responsibility of providing these complex variables falls on the deployment platform.
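
As a sketch of what feeding the model can look like in practice, the hypothetical derive_variables function below produces a binary address-match flag, a billing-to-shipping distance and a card velocity count; the in-memory store stands in for the databases and third party services a real platform would query.

import math
from collections import defaultdict

cards_seen_by_email = defaultdict(set)   # stand-in for a real velocity database

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two latitude/longitude points."""
    dlat, dlon = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dlon / 2) ** 2)
    return 6371.0 * 2 * math.asin(math.sqrt(a))

def derive_variables(order):
    """Turn a raw order into the exact variables the model expects to receive."""
    cards_seen_by_email[order["email"]].add(order["card_number"])
    return {
        # basic binary variable: do the billing and shipping addresses match?
        "address_mismatch": int(order["billing_addr"] != order["shipping_addr"]),
        # derived variable: distance between billing and shipping geocodes
        "bill_ship_distance_km": haversine_km(*order["billing_geo"],
                                              *order["shipping_geo"]),
        # velocity of change variable: distinct cards tied to this email address
        "card_velocity": len(cards_seen_by_email[order["email"]]),
    }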