Are You Ready to Implement Segment-of-One Marketing?
Technology now exists to enable Segment of One marketing (a.k.a. hyper-personalization, a.k.a. n=1), but not all brands are in a position to take advantage of those capabilities. Here’s a look […]
If you’ve been doing marketing for a while, you’re already familiar with the concept of A/B testing. Let’s say we have two versions of a subject line, a landing page, an image, or whatever, and we want to know which one converts better. To find out, we randomly assign one version to Group A and the other to Group B, usually on a 50/50 split. Then we run the test, wait for statistical significance, and roll out the winner.
Or sometimes we test more than two versions (multivariate testing). The statistical significance test is a bit different in that case, but otherwise the idea is the same as an A/B test.
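To make this concrete, here’s a minimal sketch of the significance check behind a simple A/B test, using a standard two-proportion z-test from the statsmodels library. The send and conversion counts are invented purely for illustration.

```python
# A minimal A/B significance check using a two-proportion z-test.
# The numbers below are illustrative, not real campaign data.
from statsmodels.stats.proportion import proportions_ztest

conversions = [430, 505]   # conversions for version A and version B
sends = [10_000, 10_000]   # emails sent to each group (50/50 split)

z_stat, p_value = proportions_ztest(conversions, sends)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The difference is statistically significant; roll out the winner.")
else:
    print("Keep testing; the difference could still be noise.")
```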
Now, let’s say we have a million customers or prospects. We run an A/B test or a multivariate test and find a winning version, but during the testing phase half of our audience (or even more than half with multivariate testing) was exposed to a weaker version that doesn’t convert as well, and by then it’s too late to send them the better version for that particular campaign. The difference may be small for any single campaign, but over time those differences add up.
This is the problem that multi-armed bandits (MABs), and in particular contextual bandits, were designed to solve. There are actually two innovations here: the MAB itself and the contextual aspect, so let’s look at them one by one.
The multi-armed bandit traces back to an important 1952 paper by Herbert Robbins, “Some aspects of the sequential design of experiments,” published in the Bulletin of the American Mathematical Society. That paper is widely recognized as having introduced what we now call bandit problems.
The name comes from the well-known slang term for a slot machine, which is called a “one-armed bandit” because it has a lever (an “arm”) and it “steals” your money.
In his paper, Robbins asks us to imagine several slot machines, each with an unknown payout probability (hence, multiple arms). You can pull only one arm per round, and you are trying to maximize your long-term payout. With this setup, Robbins proposed something very innovative, known today as sequential experimentation.
By contrast, traditional statistics first collects data, then analyzes it, and then forms a conclusion. Robbins proposed instead that we act, learn from the outcome, dynamically update our strategy based on what we learn, and then act again, with the ultimate objective of maximizing our cumulative reward. This idea is the philosophical root of modern reinforcement learning.
Specifically, Robbins proposed that a learning system should minimize “regret,” meaning the loss relative to an ideal strategy that somehow knew the best variant from the very start. In plain terms, regret is the reward you could have earned minus the reward you actually earned.
A later milestone came when John Gittins proved his Gittins Index Theorem, which showed that an optimal strategy exists for a bandit with known prior distributions. The theorem stunned the mathematics community because it gave a tractable solution to a very hard sequential decision problem, and it underpins many of the methods we see in use in marketing today.
In short, with an MAB we treat each variant (or “arm”) as an option and gradually shift more traffic toward the better-performing variants as the evidence accumulates. This maximizes conversions during the testing phase itself, rather than waiting until the test ends to optimize. So that’s a bandit.
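To see the idea in code, here’s a toy sketch of one popular bandit strategy, Thompson sampling, applied to two subject lines. The conversion rates are made up for illustration; the point is that traffic drifts toward the stronger arm while the test is still running.

```python
# A toy Thompson-sampling bandit: two subject lines ("arms") with hidden
# conversion rates. Traffic shifts toward the stronger arm as evidence builds.
import random

true_rates = [0.043, 0.051]   # unknown to the algorithm; for simulation only
successes = [0, 0]
failures = [0, 0]

for _ in range(10_000):       # one send per round
    # Sample a plausible conversion rate for each arm from its Beta posterior
    samples = [random.betavariate(successes[i] + 1, failures[i] + 1) for i in (0, 1)]
    arm = samples.index(max(samples))   # play the arm that currently looks best
    if random.random() < true_rates[arm]:
        successes[arm] += 1
    else:
        failures[arm] += 1

sends = [successes[i] + failures[i] for i in (0, 1)]
print("Sends per arm:", sends)          # most traffic ends up on the better arm
print("Total conversions:", sum(successes))
```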
Contextual bandits are a key aspect of Segment of One personalization.
As the name implies, a contextual bandit injects context into its decision process. So instead of asking “Which variant works best for our overall customer base on average?” it looks at purchase history, online behavior, demographics, time of day, channel, and any other relevant factors and then asks: “What would work best for this particular person, at this moment?”
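As a rough illustration (not how any particular vendor implements it), here’s a minimal contextual-bandit sketch: one online logistic model per offer, chosen epsilon-greedily from customer context. The offers, feature names, and data are all invented for the example.

```python
# A minimal contextual bandit: one online logistic model per offer,
# with a small epsilon for exploration. Offers and features are hypothetical.
import numpy as np
from sklearn.linear_model import SGDClassifier

offers = ["discount", "loyalty_points", "new_arrivals"]
models = {o: SGDClassifier(loss="log_loss") for o in offers}
seen = {o: False for o in offers}   # has each model been trained at least once?
EPSILON = 0.1

def choose_offer(context: np.ndarray) -> str:
    # Explore occasionally; otherwise pick the offer with the highest
    # predicted conversion probability for this specific customer.
    if np.random.rand() < EPSILON or not all(seen.values()):
        return str(np.random.choice(offers))
    probs = {o: models[o].predict_proba(context.reshape(1, -1))[0, 1] for o in offers}
    return max(probs, key=probs.get)

def record_outcome(offer: str, context: np.ndarray, converted: int) -> None:
    # Update only the model for the offer that was actually shown.
    models[offer].partial_fit(context.reshape(1, -1), [converted], classes=[0, 1])
    seen[offer] = True

# Example: context = [recency_days, frequency, monetary_value, is_mobile]
ctx = np.array([12.0, 4.0, 250.0, 1.0])
offer = choose_offer(ctx)
record_outcome(offer, ctx, converted=1)
print("Chose:", offer)
```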
This approach is very appealing because it squeezes as much juice as possible out of every opportunity we get to communicate with our customers, and that makes a big difference. Brands typically see a 3-7% topline increase from incremental sales when transitioning to recommender-led campaigns, which deliver 4-5x higher ROI compared to segment-led campaigns. Conversion rates also improve by 20-30% with n=1 campaigns, and KPIs like repeat purchase, retention, and win-back improve by 5-10% year over year.
So why isn’t every brand doing this already? Part of the answer is that integrating a contextual bandit into an existing Martech stack can seem too difficult. And that’s precisely why purpose-built Segment of One technology is growing in popularity. Take SOLUS, for example.
SOLUS sits between your existing data sources and your existing engagement channels, generating individual customer-level nudges that combine recommendations, propensity scores, and stacked models rather than relying on broad segments or static rules. Everything in your stack works just as before, only MUCH smarter. And full implementation takes one week.
So if you want to level up your marketing game and get more juice from the squeeze, then let’s connect and run a test. You’ll be amazed at what you’ve been leaving on the table.
Jim Griffin is a faculty member at the University of Texas at Austin, in the Master of Business Analytics program. He’s also the founder of AI Master Group, which delivers high-impact consulting and resources related to AI. Jim has more than 15 years of project experience in North America, Europe, the Caribbean, and Asia Pacific, with projects involving AI, analytics, machine learning, and CRM. He also has a popular YouTube channel and podcast devoted to AI.
Jim can be reached at jim@aimast.org