A simple fact: when you engage consumers with language, offers, ideas, and a personal touch that align with their needs (both observed and inferred), purchases go up, satisfaction rises, and loyalty is cemented. The core tool for delivering highly relevant experiences is predictive analytics.
TIBCO’s Data Sciences team and I have completed numerous projects, then scaled them to ongoing use, modeling either the propensity to take a desired (or undesired) action based on personal or segment history, or the affinity of products most often sold in combination, whether across all customers or within specific segments.
When the circumstances are right (when we have sufficient history, sample size, and consistency of products and consumers), predictive analytics is a nearly flawless means of boosting the effectiveness of engagement by customizing the product or service selected, the nature of the offer, the channel of communication, or the digital imagery.
Interestingly, the mathematics is probably the easiest aspect of predictive analytics, assuming, of course, that you are (or have access to) a strong data scientist with the appropriate advanced analytic tools. The challenges come from process and execution.
An important first step is to go from many to some: to create a segmentation POV that reduces millions of people to a manageable, tangible set of consumer clusters that share important behavioral or attitudinal traits. Additionally, segmentation is a powerful tool for building a common language around customers and more easily communicating the impact of predictive analytics.
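The clustering step can be sketched in miniature. Below is a toy k-means implementation over two hypothetical per-customer features (annual spend and order frequency); the feature choice, the data, and the number of segments are all illustrative assumptions, not the method any particular team uses.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Group feature vectors into k clusters: a toy stand-in for the
    segmentation step (real projects use richer features and tooling)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each customer to the nearest centroid (squared distance).
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[c])))
            clusters[nearest].append(p)
        # Recompute each centroid as the mean of its cluster.
        centroids = [tuple(sum(vals) / len(c) for vals in zip(*c)) if c
                     else centroids[i]
                     for i, c in enumerate(clusters)]
    return clusters

# Hypothetical per-customer features: (annual spend, orders per year)
customers = [(1200, 24), (1150, 22), (90, 2), (110, 3), (600, 10), (640, 12)]
segments = kmeans(customers, k=3)
```

Each of the resulting clusters is a candidate segment that the team can then name and describe in plain language.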
Next, identify the “training period”: a window of time that mirrors the period in which you’d like to be in market with your personalized campaigns or customer experience. In seasonal businesses (we do a lot of work with outdoor apparel companies), that means having a robust history of purchases during winter if we’re heading into winter, and summer if we’re heading into summer.
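In code terms, selecting a seasonal training period is just a filter over the transaction history. The transactions and the month set below are hypothetical; the point is that only purchases from the mirrored season feed the model.

```python
from datetime import date

# Hypothetical transaction log: (customer_id, purchase date)
transactions = [
    ("c1", date(2015, 12, 5)), ("c2", date(2015, 7, 14)),
    ("c1", date(2016, 1, 20)), ("c3", date(2015, 11, 28)),
]

# Months that mirror the upcoming in-market window (here: winter).
WINTER_MONTHS = {11, 12, 1, 2}

# Keep only purchases from the training period.
training_set = [(cust, d) for cust, d in transactions
                if d.month in WINTER_MONTHS]
```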
A data scientist must also consider sample size. In a high-SKU or low-frequency retail world, the customer sample must be large enough to accommodate relatively few actual combinations of purchases over the training period. Think of the common product affinity example: “Those who bought x typically buy y.” That assumes sufficient instances of purchases to correctly direct the model. Additionally, in order to “tune” or refine the model, build it on one portion of customers and their transactions and test its assumptions on the remainder to prove or disprove the initial direction.
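That holdout discipline can be sketched as follows. The baskets are hypothetical, and counting co-purchased pairs is deliberately the simplest possible affinity “model”: build the pair ranking on one half of the customers, then check whether the strongest pair also shows up in the other half.

```python
from collections import Counter
from itertools import combinations

# Hypothetical customer baskets: the set of products each customer
# bought over the training period.
baskets = [
    {"jacket", "gloves"}, {"jacket", "gloves", "hat"},
    {"boots", "socks"},   {"jacket", "gloves"},
    {"jacket", "gloves", "socks"}, {"boots", "socks"},
    {"jacket", "gloves"}, {"jacket", "hat", "gloves"},
]

# Split customers: build on one half, validate on the other.
train, test = baskets[::2], baskets[1::2]

def pair_counts(data):
    """Count product pairs bought together; the 'model' is the ranking."""
    pairs = Counter()
    for basket in data:
        pairs.update(combinations(sorted(basket), 2))
    return pairs

model = pair_counts(train)
best_pair, _ = model.most_common(1)[0]

# Validate: how often does the strongest training pair co-occur
# in the holdout customers' baskets?
holdout_support = sum(set(best_pair) <= basket for basket in test)
```

If the holdout support is near zero, the “those who bought x typically buy y” rule learned on the training half does not generalize, and the model needs more data or rethinking.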
Finally, there is the process of execution: determining the most effective way to use, test, and quantify the impact of predictive analytics. For instance, is it best to test down to a specific product recommendation, a set of products, or, more broadly, a category or line of products? Which elements of a campaign can and should be adapted? Does it make sense to vary both the product and the offer, or does that add unnecessary complexity to the post-campaign analysis? How did the personalized offer do against a non-personalized offer? How did both offers do against a group of customers that received no offer at all?
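The last two questions amount to a lift calculation against a no-offer control group. The group sizes and purchase counts below are invented for illustration; the structure (personalized cell, generic cell, holdout cell) is the standard test design described above.

```python
# Hypothetical campaign results: group -> (customers reached, purchases)
results = {
    "personalized": (10_000, 520),
    "generic":      (10_000, 380),
    "holdout":      (10_000, 300),  # received no offer at all
}

def response_rate(group):
    reached, purchases = results[group]
    return purchases / reached

# Lift of each offer over the no-offer baseline.
baseline = response_rate("holdout")
lift = {g: (response_rate(g) - baseline) / baseline
        for g in ("personalized", "generic")}
# With these invented numbers, personalized shows roughly 73% lift
# over the holdout versus roughly 27% for the generic offer.
```

Comparing the two lift figures, rather than raw response rates, isolates the incremental value of personalization from the value of simply making any offer.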
Let’s use the current Presidential election to illustrate what can go wrong.
We’ve seen weekly confusion surrounding the up, down, and up-again fortunes of the remaining candidates. The inability to consistently predict the outcomes of individual primaries, and of the overall race, is without precedent. Even as the models self-tune from week to week, the absence of consistent prediction is remarkable.
Using the lessons of marketing science above, we better understand the challenges that the experts (e.g. Nate Silver, Editor-in-Chief of ESPN’s FiveThirtyEight) have faced.
Segmentation. Michigan’s Democratic primary is considered one of the biggest upsets in American political history. Hillary Clinton was “predicted” to win by 22 points and lost to Bernie Sanders by 2; she had been given a 99% likelihood of winning the primary. The potential answer in FiveThirtyEight’s post-mortem came down to segmentation and sample size. In effect, the pollsters did not build a robust view of young voters, independents, and working-class white voters, and the polls fed into the predictive model undersampled those groups (perhaps due to the abandonment of landlines among Millennials).
Training. The Republican side has been even more unpredictable. How can you expect to train a model when the input variables (Donald Trump, primarily) bear no resemblance to the modern era (roughly from the 1960s through 2008) of Presidential politics? Despite a relatively small number of actual elections (forty-two since 1960), the dynamics of front-runner status, traditional voting blocs, and party control have generally given pollsters, and the predictive analysts who follow those polls, a very high degree of confidence. Simply put, when we cling to “past behavior is the best predictor of the future,” we’re upended by something we’ve never seen before. There has never been more noise from all aspects of this race trying to drown out a signal that has yet to emerge.
Be happy that you get to do your job in the relatively calm environment of customers, products, channels, and social media.