Why Your A/B Testing Program Isn't Working

By Kurtis Morrison, VP Client Services at EyeQuant


Over the past 4 years I’ve met somewhere between 500 and 1,000 conversion optimization practitioners. I meet more every week, and with every person I meet I try to learn a little something. I ask lots of questions. Lately, one of my favourite questions is this: What percentage of your A/B tests are “winners” (i.e. tests that produce a statistically significant uplift in conversion)?

It seems simple enough. After all, the ultimate KPI for any conversion optimization program is uplift. Without uplift, there is no measurable ROI from conversion optimization at all, and no tangible reason for management to take it seriously as a function. So you’d think that people in CRO - who spend all day looking at metrics and data - would know their own numbers, right? Yet in most cases, the people I talk to have only a rough idea of what their win rate is, and many don’t really know it at all.

(FYI: the win rates people report range from roughly 20% to around 70%.)
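
If you want to start tracking your own win rate, the check behind the word “winner” is simpler than it sounds. Below is a minimal Python sketch of a two-sided, two-proportion z-test on conversion rates - one common way to decide whether an uplift is statistically significant. The traffic and conversion figures (and the 0.05 threshold) are invented purely for illustration, not taken from any real test.

```python
# Minimal sketch: is variant B's uplift over control A statistically significant?
# Uses a two-sided two-proportion z-test; all numbers are illustrative assumptions.
from math import sqrt, erf

def ab_test_result(conv_a, n_a, conv_b, n_b):
    """Return (relative uplift, two-sided p-value) for variant B vs. control A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error of the difference
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))    # normal approximation
    return (p_b - p_a) / p_a, p_value

# Illustrative numbers only: 10,000 visitors per variant, 4.0% vs. 4.6% conversion.
uplift, p = ab_test_result(conv_a=400, n_a=10_000, conv_b=460, n_b=10_000)
print(f"uplift: {uplift:+.1%}, p-value: {p:.3f}")  # count it as a "win" only if p < 0.05
```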

I also ask about the cost of testing. Most people think about it in terms of the wages of the people involved in CRO and the price of the tools they use.

I rarely hear about less tangible factors like opportunity cost. For any online business, there’s only enough traffic for a certain number of tests per month, so every testing slot you use is one that can’t be used on another test idea that might have a bigger impact. And every time you run a test where one or more variants underperform the control, you lose real revenue. It’s called “testing”, but there are real customer experiences and real cash at stake. These less obvious costs can be huge.
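
To put a rough number on that, here’s a hypothetical back-of-the-envelope calculation of what a single losing variant can cost while a test runs. Every figure below (traffic split, conversion rates, order value) is an assumption chosen for illustration, not a benchmark.

```python
# Hypothetical cost of one losing variant while a test runs. All numbers are
# assumptions for illustration, not benchmarks.
visitors_to_variant = 50_000   # traffic sent to the underperforming variant
control_cr = 0.040             # control conversion rate
variant_cr = 0.035             # the variant converts 0.5 percentage points worse
avg_order_value = 80.0         # assumed revenue per conversion

lost_orders = visitors_to_variant * (control_cr - variant_cr)
lost_revenue = lost_orders * avg_order_value
print(f"Revenue given up during the test: ~${lost_revenue:,.0f}")
# 50,000 visitors * 0.5 percentage points * $80 = $20,000, before you even count
# the testing slot that could have gone to a stronger idea.
```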

By comparison, whatever price you’re paying for your Optimizely (or other) subscription is fairly insignificant. And yet it’s amazing how many people will happily run an A/B test on a hunch rather than pay a few bucks for some user testing (for example) to validate the idea before they commit real traffic and revenue to it. That’s why I think CRO has grown steadily over the past 5 years but has never really seen “hockey stick” growth as a discipline. In most cases, testing programs aren’t run like a “business”, i.e. as a deliberate process with carefully considered benefits and costs.

But what if they were? I can think of 3 major ways teams might change the way they work:

  1. Optimizers would look for multiple sources of data that tell the same story about their website. An insight from a user test, a survey, or a heatmap wouldn’t be enough on its own, because teams would recognize that there’s too much at stake. They would look for multiple signs pointing in the same direction.

  2. The amount of attention paid to the quality of test variants, and to the process of designing them, would increase dramatically. Today, teams spend most of their time and effort defining what’s wrong with their website, while the work of actually designing solutions remains relatively unsophisticated. If testing were treated as serious business, there would be careful checks to ensure that variations are true to the original hypothesis, and some validation work to ensure those variations have a decent chance of winning.

  3. Teams would focus more on test velocity. After all, the additional revenue from testing comes down to your average revenue uplift per test multiplied by the number of tests you run (see the rough sketch below). If teams took more time to estimate the financial impact of increasing test velocity, I think we’d see a lot of companies investing in more agile processes and technical infrastructure.
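
To make point 3 concrete, here’s a rough sketch of that arithmetic. The win rate, average uplift, and revenue figures are assumptions for illustration only; plug in your own numbers.

```python
# A rough sketch of the velocity arithmetic in point 3. All inputs (win rate,
# uplift per winning test, revenue flowing through the tested pages) are assumptions.
def annual_gain(tests_per_year, win_rate, avg_uplift_per_win, revenue_through_tested_pages):
    # Expected gain ~= tests * (chance of a win * uplift when you win) * revenue at stake
    return tests_per_year * win_rate * avg_uplift_per_win * revenue_through_tested_pages

slow = annual_gain(tests_per_year=12, win_rate=0.30, avg_uplift_per_win=0.04,
                   revenue_through_tested_pages=2_000_000)
fast = annual_gain(tests_per_year=24, win_rate=0.30, avg_uplift_per_win=0.04,
                   revenue_through_tested_pages=2_000_000)
print(f"12 tests/year: ~${slow:,.0f}   vs.   24 tests/year: ~${fast:,.0f}")
# Doubling velocity at the same quality roughly doubles the expected return,
# which is the business case for more agile processes and infrastructure.
```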

If more teams were doing these things, I think conversion optimization would definitely gain a lot more traction in the C-suite, and we’d see a huge increase in resources dedicated to testing. And wouldn’t that benefit all of us?

If you have an opinion on this, let me know in the comments or feel free to get in touch with me at kurtis@eyequant.com!