
Sub-Optimal A/B Testing - Why?

By: Alon Cohen, Jan 14, 2018; updated Dec 18, 2020


A/B testing is the primary tool marketers use to optimize conversions in digital marketing. It is a method for finding the better-converting version of a webpage or an ad.


The way A/B testing works is that you randomly show each website visitor either page A or page B and measure which version converts more. Some tools (like https://www.optimizely.com) make that process relatively straightforward; however, it takes time to collect sufficient evidence to make a clear decision about which page performed better.
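
To make "sufficient evidence" concrete, here is a minimal sketch in Python of the two pieces a testing tool handles for you: a stable 50/50 assignment of visitors to variants, and a two-proportion z-test on the results. The visitor IDs, conversion counts, and the 5% significance threshold are illustrative assumptions, not the output of any particular tool.

```python
import hashlib
import math

def assign_variant(visitor_id: str) -> str:
    """Deterministically assign each visitor to A or B with a 50/50 split."""
    # Hashing the visitor id keeps the assignment stable across page loads.
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

def z_test_two_proportions(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test: is the gap in conversion rates likely real?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Illustrative numbers: 1,000 impressions per variant, 120 vs. 100 conversions.
z, p = z_test_two_proportions(120, 1000, 100, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p is about 0.15 here, so not yet conclusive
```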


So why even bother?


The problem with webpage design is that it is hard to get it right the first time. A designer might think the call-to-action button is in the way and move it to a spot where it becomes ineffective. Color schemes and market trends affect how people perceive, understand, and operate a page.


Statistically significant results from an A/B test can help validate a webpage's design assumptions and improve on them.


What can go wrong?


If, for instance, you did not split impressions 50/50 between the A and B versions of the page, comparing raw conversion counts (rather than conversion rates) can fool you into thinking one page performs better.


In many cases, unless one page is horrible, the two will perform close to each other, and the statistics can swing week after week. You must wait a sufficiently long time, sometimes several weeks, to get a decisive answer.
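
A rough power calculation shows why the wait can stretch to weeks. The baseline conversion rate, the hoped-for lift, and the weekly traffic below are assumptions chosen only to illustrate the arithmetic (95% confidence, 80% power).

```python
import math

def required_sample_size(p_base: float, relative_lift: float,
                         z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate per-variant sample size for 95% confidence and 80% power."""
    p_var = p_base * (1 + relative_lift)           # expected rate on the new page
    variance = p_base * (1 - p_base) + p_var * (1 - p_var)
    n = (z_alpha + z_beta) ** 2 * variance / (p_base - p_var) ** 2
    return math.ceil(n)

# Assumed: a 3% baseline conversion rate and a hoped-for 10% relative lift.
n = required_sample_size(0.03, 0.10)
print(n)  # roughly 53,000 visitors per variant
# At an assumed 5,000 visitors per variant per week, that is over ten weeks.
```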


Picking the wrong page will reduce your conversions.


Since you usually target customers through multiple channels, a change in one channel can skew the A/B test results. So even though A/B testing can be helpful when your two pages are close in performance, it usually takes a lot of work to get conclusive results, and ongoing testing is required.


So what is going on here?


Say you have used the best A/B testing tools. You waited and got only a slight statistical confirmation that one page is better. Why so slight? Because one page suited some people and the other suited the rest.


The audience is not homogeneous. When the results are close, half of the audience liked version A, and the other half liked version B. The sad result is that your bottom line stayed the same despite all your optimization efforts and patience.


To explain it better, let's say that half the people like Red and half like Blue. If you make the page Red, more Red-liking people will convert. If you make the page Blue, more Blue-liking people will convert. So it does not matter whether the page is Red or Blue; total conversions will not improve.
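
Here is the same point as arithmetic, with assumed, illustrative rates: each page converts the people who like its color at 10% and everyone else at 2%.

```python
# Assumed rates: a page converts people who like its color at 10% and
# everyone else at 2%. Half the audience likes Red, half likes Blue.
audience = {"red_likers": 0.5, "blue_likers": 0.5}

def overall_rate(page_color: str) -> float:
    """Blend the per-segment rates into one sitewide conversion rate."""
    return sum(share * (0.10 if seg.startswith(page_color) else 0.02)
               for seg, share in audience.items())

print(f"All-Red page:  {overall_rate('red'):.0%}")   # 6%
print(f"All-Blue page: {overall_rate('blue'):.0%}")  # 6%, so the test is a coin flip

# Personalized: show each segment its preferred color.
personalized = sum(share * 0.10 for share in audience.values())
print(f"Personalized:  {personalized:.0%}")          # 10%
```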


Is there a solution?


Ideally, you need a different version of the webpage for each visitor, or at least a separate page for each market segment or buyer persona. Only then would you see an improvement in your total performance.


Unfortunately, I have yet to see a helpful marketing tool that can tell you (the site) in real time which version of the page to render to which user. The hope is that such a tool or an API will enable correct personalization of webpages and drastically improve conversions.
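
As a thought experiment, here is the shape such an API might take: a behavioral profile goes in, a variant choice comes out. Every name below (the profile fields, the segments, the variant labels) is hypothetical; it describes no existing product.

```python
from dataclasses import dataclass

# Everything here is hypothetical: the profile fields and the variant
# mapping are placeholders, not a real vendor API.

@dataclass
class VisitorProfile:
    referrer: str        # e.g. which channel the visitor arrived from
    device: str          # "mobile" or "desktop"
    past_purchases: int

def choose_variant(profile: VisitorProfile) -> str:
    """Map a behavioral profile to the page variant most likely to convert it."""
    if profile.past_purchases > 0:
        return "loyalty_page"
    if profile.referrer == "social" and profile.device == "mobile":
        return "visual_page"   # the "Red" page, in the color analogy
    return "detail_page"       # the "Blue" page

# The server (or an edge function) would call this before rendering:
variant = choose_variant(VisitorProfile("social", "mobile", 0))
print(variant)  # -> "visual_page"
```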


It is not about knowing the visitor’s name. It is about understanding the visitor’s social or behavioral profile and displaying the correct page for each visitor’s characteristics.


In simple words, you need a way to show red-loving people the red page and blue-loving people the blue page. This way, you can improve total conversions and move from a "local" maximum on the optimization graph to a more "global" optimization of the sales process.

 




Thoughts?


Have you heard of such a tool or an API? Let me know.