Inconclusive A/B Test Results: What Now?

Inconclusive A/B test results are a common scenario for marketers in charge of conversion rate optimization. There’s nothing more discouraging than spending considerable time and resources implementing a testing idea only to have your hypothesis invalidated in the end. However, if you plan on working in CRO for a long time, you’d better get used to it.

No matter how professional your approach to A/B testing, some experiments will not generate the expected results, leaving you with a bitter taste in your mouth. As a matter of fact, it appears that 50% to 80% of test results are inconclusive. That’s a lot!

Fortunately, this is not the end of the road. A/B testing is a continuous learning process, which means that every failed experiment or inconclusive test result is nothing more than an opportunity to gain insight into customers’ minds. With every inconclusive test, you’re actually getting one step closer to finding out what your audience really wants.
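To ground what “inconclusive” means in practice: a result is usually called inconclusive when the difference between control and variation fails a statistical significance test. Here’s a minimal sketch in Python of how that call is typically made, using a standard two-proportion z-test; all visitor and conversion counts are made-up illustration values:

```python
# A minimal sketch, not a full analysis pipeline: a two-proportion z-test,
# a standard way to decide whether control and variation truly differ.
# All visitor and conversion counts below are made-up illustration values.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return the z-score and two-sided p-value for two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)              # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))          # two-sided test
    return z, p_value

# Example: 200 of 10,000 visitors convert on control vs. 220 of 10,000 on variation.
z, p = two_proportion_z_test(200, 10_000, 220, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p is about 0.33 here, well above 0.05
```

If the p-value is above your significance threshold (commonly 0.05), the test is inconclusive: you can’t rule out that the observed difference is random noise.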

So, what should you do when there’s no significant difference between control and variation? Here are some ideas:

1. The idea was good, but the implementation fell short 


You ran your research and ensured optimal testing conditions. Moreover, best practices have shown that this type of test has a high success rate, so you’re fully confident in your A/B test idea. However, once the experiment concludes, it turns out the variation did not bring a significant difference in conversion rate.

In this case, there is a high chance that the problem was the implementation itself. The idea was good, but, for some reason, the execution fell short. Maybe some development mistakes slipped in, or the design of the variation didn’t quite match the rest of the website, creating a visual disparity. To avoid this, work closely with skilled designers and developers whose executions will stay true to your hypotheses.

2. The hypothesis lacked potential in the first place


If you’re aiming for a significant lift in conversion rates, you need to test high-impact ideas, not small tweaks. Changing the color of a button or the font size is unlikely, on its own, to bring a significant increase in conversion rate.

Needless to say, it’s crucial to keep yourself up to date with changes in customer behaviour and shifts in marketing strategy. Only this way can you be sure that your testing process stays in line with the current direction of the business.

Not to mention that if you’ve spent a lot of time testing for a client, at some point you’ll probably lose sight of the real issues users are facing, which makes your testing ideas less relevant. My suggestion is to run qualitative research once in a while, just to make sure that you’re optimizing accordingly.

It’s always good to keep yourself connected to everything that is going on with the business, from design changes to user behaviour. 

3. Don’t give up on A/B testing, iterate!


You have a great hypothesis, it’s backed up by qualitative research, and you are sure it’s going to bring a lift, but for some reason, it doesn’t. You have just concluded the test and it’s a tie. ‘How is this possible?’ you’re probably thinking. The idea was impactful and the execution flawless.

If you are confident that your testing hypothesis was strong because it addressed pressing issues, maybe it was a good idea indeed. But mind you, there are different ways of expressing an idea. It’s possible that you’ve included too much information, or too little. Maybe the changes you made were too radical and confused users, or they were barely noticeable.

In other words, you’ve chosen one path out of several possibilities. Run some heatmaps and site polls again, look through a few session recordings, and iterate! Test the same idea, but with minor changes.

Are you really ready to give up on a bright idea, bound to have a significant impact on purchase behaviour? Probably not, so try again! At some point, something’s gotta give, right? 

4. Segment your A/B test results


Users are not all the same. Their motivation and buying behaviour differ, so more often than not, an experiment will not have the same effect on all visitors.

Therefore, whenever you’re dealing with an inconclusive A/B test result, the first thing you should do is break down users into smaller segments and analyze them individually. Some of the most insightful user reports are ‘New vs. Returning Visitors’, ‘Devices’, and ‘Traffic Sources’, but don’t stop there! Keep exploring your user base to find the segments that are most relevant to you.

Feel free to dive deeper into segmentation and focus on your highest-converting segments (mobile users, young women, visitors coming from paid traffic, etc.). Even though control might beat variation in the overall results, variation might beat control in certain segments. If those segments are the ones that matter, then you have a winning test.
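As a hedged illustration, here is a small Python sketch that reruns the same two-proportion z-test per segment instead of only on the aggregate. The segment names and all counts are invented for this example:

```python
# A sketch of segment-level analysis: rerun the significance test
# per segment instead of only on the aggregate. Segment names and
# all counts are invented for illustration.
from math import sqrt
from statistics import NormalDist

def p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value of a two-proportion z-test."""
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# (control conversions, control visitors, variation conversions, variation visitors)
segments = {
    "desktop":      (310, 9_000, 315, 9_000),
    "mobile":       (120, 6_000, 168, 6_000),
    "paid traffic": ( 45, 2_000,  66, 2_000),
}

for name, (ca, na, cb, nb) in segments.items():
    lift = (cb / nb) / (ca / na) - 1
    print(f"{name:>12}: lift {lift:+.1%}, p = {p_value(ca, na, cb, nb):.3f}")
# With these numbers, desktop is flat (p near 0.84) while mobile and paid
# traffic show significant lifts: the pattern segmentation is meant to surface.
```

One caveat: the more segments you test, the more likely a ‘significant’ result appears by pure chance, so treat segment-level wins as hypotheses to confirm with a follow-up test rather than as final verdicts.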

5. Are you measuring the right metric(s)? 


Undoubtedly, at the end of the day conversion rate is the metric that counts, but even so, sometimes we need to be more realistic about our testing process and aim for micro-conversions and smaller lifts.

Let’s imagine that you’re running a test on the product page that consists of adding product benefits. You should be measuring the ‘add to cart’ rate as your primary metric, but instead you choose to chase a lift in the overall conversion rate.

The risk you’re taking here is having your hypothesis invalidated because you’re giving too much credit to one metric and ignoring another. Keep in mind that your experiment was actually designed to bring a lift in the ‘add to cart’ rate; it does not guarantee an increase in sales as well.

Adding a product to the cart is an early step in the conversion funnel, and a lot can happen as users make their way towards the end of it. For instance, they could drop off just as they are about to start the checkout process because they find it too complicated or not secure enough.

As a result, you’re left with a higher ‘add to cart’ rate but a lower conversion rate. So, what you should be doing is measuring the ‘add to cart’ rate for the experiment on the product page and designing another A/B test to reduce user anxiety at checkout. A/B testing is a puzzle, and you need patience to put all the pieces together. But when you finally do, you’ll see it was all worth it!
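To make the point concrete, here is a short Python sketch with made-up funnel counts showing how a product-page change can lift the metric it was designed to move (add-to-cart rate) while overall conversion barely budges because of checkout drop-off:

```python
# A small sketch with made-up funnel counts: the product-page test lifts
# its primary metric (add-to-cart rate) while overall conversion barely
# moves, because users drop off between cart and purchase.
funnel = {
    "control":   {"visitors": 10_000, "added_to_cart":   900, "purchased": 250},
    "variation": {"visitors": 10_000, "added_to_cart": 1_080, "purchased": 255},
}

for name, f in funnel.items():
    atc_rate    = f["added_to_cart"] / f["visitors"]     # primary metric here
    conversion  = f["purchased"] / f["visitors"]         # overall conversion rate
    cart_to_buy = f["purchased"] / f["added_to_cart"]    # where the drop-off hides
    print(f"{name:>9}: add-to-cart {atc_rate:.1%}, "
          f"conversion {conversion:.2%}, cart-to-purchase {cart_to_buy:.1%}")
```

Measured only by overall conversion, this test looks like a tie; measured by its actual primary metric, it’s a clear win that also points to the next thing to test: the checkout step.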

Each experiment needs to have a clear purpose; being aware of it will help you measure the right metrics. And once again, don’t neglect smaller lifts – they do matter! If you want to learn more about setting up metrics for quick wins and long-term gains, check out this article.

6. Gain insights and use them to build better testing hypotheses

Your A/B test might not have generated positive results because visitors did not behave in the way you expected them to. 

But any shift in user behaviour is a valuable piece of information that you can use to your advantage to brush up your testing hypotheses or change the direction of your testing process.

Conclusion

Having said that, I hope you won’t let inconclusive test results bum you out anymore! Learn from each experiment that didn’t go the way you predicted, and use those insights to build better, more promising A/B tests. Impactful testing ideas, flawless execution, and thorough post-analysis based on user segmentation will improve your A/B testing skills and eventually lead to more successful experiments.

By Cristina Neagu

I am a CRO specialist with a focus on consumer psychology & behaviour.
