A/B testing is a powerful tool that can greatly increase the profitability of your website and business. However, common mistakes are costing online retailers big bucks: Qubit reported that online retailers lost $13 billion in revenue over the course of a year by following poor A/B testing methodologies.

When businesses run tests repeatedly but see no results, they are often making a few key mistakes. These 15 tips cover what causes false positives, how to avoid skewing your results, and strategies to make your A/B testing as efficient as possible.

1

Base Your Tests on Research

What are you testing in your A/B tests? How do you know what to test to drive the best results? These questions are central to A/B testing success. While there are many success stories of websites changing a banner or layout to increase conversions, what works for one website does not necessarily work for another. Additionally, basing tests on opinions, best practices or panic is unlikely to identify the most profitable changes.

Your tests should be based on conversion research and studying your own analytics. Look for what is working currently, and what areas need help. Heat maps and eye tracking are great tools to see where customers are interacting with your pages. With this information, you can make educated guesses about what changes will be most impactful on your conversion. Then prioritize them and choose one to start with. Creating a hypothesis, along with a prediction of the outcome, is the first step to focused and efficient A/B testing.

2

Schedule Your Priorities

Once you have researched, identified and prioritized several ideas to test, schedule them on the calendar. Decide, in order of importance, which tests you will run first and how long each test will run.

3

Proper Testing Length

Speaking of how long tests should run, just how long should they run? At a minimum, tests should run for 1-2 weeks. Beyond that, they need to run as long as it takes to reach the required sample size. Remember, the more variations you are testing, the more conversions are needed. You may also need to extend the test if the conversion rate ranges of your variations overlap. A test length calculator can help you determine how long your test needs to run, and always consider sample size when deciding when to conclude a test.

4

7-Day Minimum to Eliminate Seasonality

Why should tests run at least a full 7 days? Conversion rates fluctuate greatly from day to day, so results from Friday to Monday will differ from results from Tuesday to Friday. To ensure that results are valid, every test must be run for at least a full week. Keep in mind, you may reach your sample size goal before the week is up, but you should still let the test run its course to eliminate the seasonality factor.
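One way to honor the full-week rule is to round any estimated duration up to whole weeks, so every weekday is sampled equally. A minimal helper (the function name is my own):

```python
import math

def full_weeks(days_needed: int) -> int:
    """Round a test duration up to whole weeks, never less than 7 days,
    so each day of the week appears the same number of times."""
    return max(7, math.ceil(days_needed / 7) * 7)

print(full_weeks(4))   # 7  -- sample size reached early, still run a full week
print(full_weeks(10))  # 14 -- extend to two full weeks
```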

5

Be Mindful Not to Skew Results

When running an experiment, you must have a control version which remains the same. This constant is what all other variables are tested against. If the control changes, your results cannot be accurately measured. Therefore, ensure that the original page is not altered during the testing period. Additionally, avoid running promotions during tests (and running tests during promotions), as these outside factors will influence your results. Schedule tests during normal business days, excluding holidays.

6

Test One Page at a Time

Running multiple tests simultaneously is another culprit for potentially causing a false positive result. Although you have many pages you will likely want to test, schedule them according to priority and run one test at a time. This will provide more reliable results.

7

Test One Issue at a Time

Along with only testing one page at a time, it is just as important to test one issue at a time. Say that you test the control page against a variation with both a different font color and a different banner. If two variables change at once, you will not be able to tell which one caused the result. To eliminate confusion and increase efficiency, test only one change at a time.

8

Optimize Tests for your Audience

Did you know that over half of internet users now access the web from mobile devices? Not only that, they do so from a variety of devices and browsers. For this reason, to reach the target audience visiting your site, you need to optimize your tests for the mobile devices and browsers your visitors are using. Otherwise, once again, your results will be skewed.

9

Be Patient: Your Sample Size Must be Large Enough

Once you are running an A/B test, it is natural to become anxious to see the results. However, one of the most common mistakes is jumping to conclusions before a proper sample size has been tested. This happens largely because tests are often run only until a significant difference appears, which you will learn more about below. Instead, you should run the test until a fixed sample size is met, with a 1-week minimum.

How do you ensure your sample size is big enough? In general, wait until you have at least 150-250 conversions per variation in the test. To be more precise, you can utilize a sample size calculator to find the specific amount which is ideal for your website.

Whatever you do, don't stop the test and assume the results are accurate before an adequate sample size has been tested. Otherwise, you might as well skip the test altogether!

10

Look For 95% Statistical Significance or Null

Statistical significance is a percentage that shows how likely it is that a test result is real rather than due to chance. When any test is performed, it is crucial to account for the possibility of outside factors influencing the results. The general guideline is to look for 95% statistical significance before drawing a conclusion; at that level, the probability that the observed difference is due to chance or outside factors is reduced to 5%. If 95% significance is not reached, you cannot rule out chance and outside influences, so you fail to reject the null hypothesis and the result is considered insignificant.

With that being said, in order for the statistical significance to be meaningful and accurate, the appropriate sample size must be entered into the test. This will create statistical power, which speaks to a test being able to predict trends accurately.

Without considering the statistical significance and statistical power of each A/B test, your results will not be reliable or useful. If you need help calculating the statistical significance of your test, online tools are available such as http://mystatscalc.com .
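Under the hood, such tools usually run a two-proportion z-test. A minimal sketch (function name mine), assuming you have raw conversion and visitor counts for each variation:

```python
from statistics import NormalDist

def confidence_level(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided two-proportion z-test. Returns the confidence that the
    two conversion rates truly differ (0.95 means 95% significance)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)           # pooled conversion rate
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = abs(p_a - p_b) / se                            # standardized difference
    return 2 * NormalDist().cdf(z) - 1

# 20% vs 25% on 1,000 visitors each clears the 95% bar:
print(confidence_level(200, 1000, 250, 1000) > 0.95)  # True
```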

11

Be Aware of Conversion Range Overlap

When running an A/B test, it is also very important to take note of the conversion ranges. These are numbers which depict the margin of error for the reported conversion rate. For example, if your conversion range is +/- 1.0 and your conversion rate is 20%, then your conversion rate may actually range from 19-21%.

Now, where this becomes very important is when comparing conversion rate results between variations in a test. If the control version shows a 20% conversion rate with a +/- 1.0 conversion range, and the variation shows a 19% conversion rate with a +/- 1.0 conversion range, the control could actually be 19-21% and the variation could be 18-20%. This is an overlap, and an overlap means the results are not significant and the test should be extended.
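The overlap check described above is simple interval arithmetic; a sketch (function name mine, rates and margins in percentage points):

```python
def ranges_overlap(rate_a: float, margin_a: float,
                   rate_b: float, margin_b: float) -> bool:
    """True if two conversion-rate ranges (rate +/- margin) overlap,
    meaning the difference is not yet significant."""
    return (rate_a - margin_a) <= (rate_b + margin_b) and \
           (rate_b - margin_b) <= (rate_a + margin_a)

print(ranges_overlap(20.0, 1.0, 19.0, 1.0))  # True: 19-21% vs 18-20% overlap
print(ranges_overlap(20.0, 1.0, 17.0, 1.0))  # False: 19-21% vs 16-18% do not
```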

With conversion range awareness, you can ensure your A/B results are truly significant.

12

Know When to Cut Your Losses

While we have reviewed the importance of letting a test run its course, it is also important to know when to cut your losses and move on to the next test. If you have followed the previous steps, you will have a calendar of well researched tests to run, so you can't let one test drag on too long without producing significant results.

Warning signs that you should move on include a test that has run for 2 weeks without showing statistical significance, has a small sample size, isn't converting much better than the control, or will take several more weeks to reach the needed sample size. In these cases, it is often best to move on to the next test.

You will have to analyze tests on a case-by-case basis, as even a small lift from several changes can add up to large profits. However, also consider the cost of your time and what another test could be offering.

13

Measure Results from Beginning to End

When testing a specific variable, you will naturally track the direct results of that change. However, it is important to keep an eye on your entire sales funnel to ensure you are achieving your end goal. For example, if you are testing a feature intended to increase the number of items added to each visitor's shopping cart, you will want to track not only cart adds but also the final amount customers actually purchase. Ensure that the change is benefitting the overall bottom line.

14

Segment Your Results Using Google Analytics

Regardless of the A/B testing tool you use, you should also utilize Google Analytics to segment and analyze your test data. You can then cross check your results to gain deeper insights and even identify any reporting issues before starting a test. Save time and gain understanding.

15

Small Gains Can Equal Big Profits

Lastly, although a test may produce only a small lift in your conversion rate, several small changes can add up to a considerable lift in profits. Consider: a 5% lift is an extra $500 for every $10,000 earned. That adds up quickly as small changes accumulate over time.

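Back-of-the-envelope, successive lifts compound multiplicatively rather than just adding up, so the total is a bit better than the sum of the parts:

```python
def compounded_lift(lifts: list[float]) -> float:
    """Total conversion lift from a sequence of individual lifts,
    each expressed as a fraction (0.05 == 5%)."""
    total = 1.0
    for lift in lifts:
        total *= 1 + lift   # each win multiplies the previous baseline
    return total - 1

# Three separate 5% wins compound to roughly 15.8%, not just 15%:
print(round(compounded_lift([0.05, 0.05, 0.05]) * 100, 1))  # 15.8
```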
Now you know the common pitfalls website owners and marketing managers run into when running A/B tests, as well as how to avoid them. Keep these 15 tips in mind as you plan, execute and review your upcoming A/B tests, and you will be able to properly use this powerful tool to drive results for your bottom line.