A/B testing: how to conduct it, what you need for it, and how and when to interpret split-test results

A/B testing (also called split testing) on a site is a marketing method in which a control version (A) and a test version (B) of a page, differing in only a few details, are compared with the aim of increasing the site's conversion rate. The versions are shown to visitors alternately in equal shares, and once the required number of impressions has been collected, the better-converting version is determined from the resulting data.

Stages of A/B testing

In general, the entire A/B testing process can be summarized in 5 steps:

Step 1. Set goals (business goals, conversion goals, website goals)

Step 2. Record the baseline statistics

Step 3. Set up and run the test

Step 4. Evaluate the results and roll out the better option

Step 5. Repeat the experiment on other pages or with other elements as needed

Test duration

The duration of the experiment depends on the traffic the site receives, its conversion rate, and how strongly the tested variants differ. Many services determine the duration automatically. On average, about 100 conversion actions on the site are enough, which usually takes 2-4 weeks.
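If you prefer a rough estimate of your own over a service's verdict, the standard sample-size formula for comparing two proportions gives a feel for these numbers. Below is a minimal sketch in JavaScript, assuming the conventional 95% confidence level and 80% statistical power; the traffic and conversion figures in the example are invented for illustration.

```javascript
// Rough per-variant sample size for comparing two conversion rates
// (two-sided test, 95% confidence, 80% power), and the resulting
// test duration. The inputs below are illustrative assumptions.
function estimateDuration(baseRate, relativeLift, dailyVisitorsPerVariant) {
  const zAlpha = 1.96; // z-score for alpha = 0.05, two-sided
  const zBeta = 0.84;  // z-score for power = 0.80
  const p1 = baseRate;
  const p2 = baseRate * (1 + relativeLift);
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  const nPerVariant = Math.ceil(((zAlpha + zBeta) ** 2 * variance) / (p2 - p1) ** 2);
  return { nPerVariant, days: Math.ceil(nPerVariant / dailyVisitorsPerVariant) };
}

// Example: 3% baseline conversion, hoping to detect a 20% relative lift,
// with 500 visitors per variant per day.
console.log(estimateDuration(0.03, 0.20, 500));
// -> roughly { nPerVariant: 13896, days: 28 }, i.e. about four weeks
```

Small expected lifts on low-traffic pages push the duration well beyond the 2-4 week rule of thumb, which is exactly why the tested variants should differ noticeably.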

Pages for testing

For testing, you can select any page of the site that matters for conversion. Most often this is the home page, the sign-up/login pages, or pages of the sales funnel. When choosing, pay attention to the following:

  1. The most visited pages of the site
  2. Pages with high exit rates
  3. Pages with high bounce rates

The first point matters for the purity of the experiment; the second and third help identify the site's weak spots.

Most often, buttons, text, the call-to-action slogan, and the layout of the page as a whole are chosen for testing. To pick an element, you can follow this sequence:

  • A hypothesis is put forward about visitor behavior
  • A change to the elements is proposed (better to change 1-2 at most), for example:
  1. Add the word "Free"
  2. Add an explainer video
  3. Pin the registration button to the top of the page
  4. Reduce the number of fields in the application form
  5. Add a countdown timer for the special offer
  6. Add a free trial
  7. Change the color of the buttons or the text on them

Test Automation

There are several paid and free tools for automating the testing process, with varying feature sets. Probably the most popular are Experiments in Google Analytics: the tool is free, localized into Russian, and easy to learn, and if the counter is already installed on the site, there is no need to wait for initial data collection, so the experiment can be started in a couple of clicks.

A/B testing with Google Analytics

Let's walk through creating a test in Google Analytics. Go to Reports -> Behavior -> Experiments, enter the URL of the page you are testing, and click "Start experiment".

The next step is to fill in the fields: the name of the experiment, the goal (you can choose from the goals already configured for the site), and the share of site visitors included in the experiment (100% is best).

In the second step, you will need to specify the addresses of the main (control) page and its variants.

If everything is done correctly, the system will give a green light to start testing.

The results of the experiment are presented quite visually in the report.

Contrary to popular belief (after all, duplicate pages are created), such testing does not hurt the site's search rankings. It is enough to add rel="canonical" pointing to the control page on the alternative versions.

Important about A/B testing

  1. Test versions of a page should differ by no more than 2 elements
  2. Traffic should be distributed equally between the pages
  3. When configuring the test, target new site visitors
  4. Judge the results only on a large sample, preferably at least 1,000 people
  5. Evaluate the versions over the same time period
  6. Do not trust your own taste: not all users think the way you do, so your preferred option may not win
  7. A/B testing will not always deliver the desired lift in conversions, so be prepared to experiment with other elements

A/B testing, also known as split testing, is one of the most effective ways to come up with measurable (and evidence-based) improvements to your site. In practice, it looks like this: two versions of content are developed - for example, for a landing page - and two such pages are simultaneously launched for audiences of the same size in order to find out which one works better. Such a test, properly performed, shows what changes will help increase conversions.

Many people have questions about how to launch and successfully conduct A/B testing. Here are the most frequently asked questions and their answers.

1. When is A/B testing a good/bad idea?

Most often such tests fail because there are no clear goals behind them, so you need to know exactly what you are testing. Check a specific theory with questions like these: would adding this image to the landing page increase conversions? Are people more likely to click the blue button or the red one? What happens if the title is changed to stress that the offer is limited? The effect of all these changes is quite measurable.

People run into trouble with A/B tests when the goal is too vague, such as comparing two designs that differ in many ways. It can take a long time for a clear winner to emerge, the conclusions may be inaccurate, and there will be uncertainty about what really caused the increase in conversions.

2. How many variations should be in A/B testing?

Let's say you've done a good job and have four incredible landing page design ideas. Naturally, you would like to launch all four at once and pick the winner, but such a simultaneous launch can no longer be considered A/B testing: factors from each option would muddy the waters of your results, so to speak. The beauty of proper A/B testing is that its results are reliable and specific.

3. What is a null hypothesis?

The null hypothesis is the hypothesis that the difference in results is due to sampling error or ordinary fluctuations. Think of tossing a coin. Although the chance of it landing heads is 50/50, in practice you sometimes get 51/49 or some other ratio, purely by chance. However, the more times you toss the coin, the closer the outcome gets to 50/50.

In statistics, the correctness or incorrectness of an idea is proven by challenging the null hypothesis. In our case, challenging the hypothesis means running the test long enough to rule out random results. This is also called reaching statistical significance.
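The coin analogy is easy to reproduce in code. Here is a minimal simulation in plain JavaScript (runnable in Node.js or a browser console):

```javascript
// The intuition behind the null hypothesis: small samples drift from
// 50/50 by chance, large samples converge toward the true 50%.
function headsShare(tosses) {
  let heads = 0;
  for (let i = 0; i < tosses; i++) {
    if (Math.random() < 0.5) heads++; // fair coin
  }
  return heads / tosses;
}

for (const n of [10, 100, 1000, 100000]) {
  console.log(`${n} tosses: ${(headsShare(n) * 100).toFixed(1)}% heads`);
}
// Typical run: 70.0%, 46.0%, 50.6%, 49.9% - the larger the sample,
// the smaller the random deviation from the true 50%.
```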

4. How many page hits does it take to get a good A/B test result?

Before reading anything into the results of an A/B test, you should make sure it has reached statistical significance: the point after which you can be 95 percent or more confident that the result is not due to chance.

The good news is that many testing tools have a statistical significance counter built in, which signals when the test results are ready for interpretation. If there is no such counter, you can use one of the many free calculators and tools for computing statistical significance.
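For the curious, the math inside most of those calculators is a two-proportion z-test. Here is a minimal sketch of one; the normal-CDF approximation is the classic Abramowitz-Stegun formula, and the example numbers are invented.

```javascript
// Two-proportion z-test: is variation B really better than control A?
function normalCdf(z) {
  // Abramowitz-Stegun approximation of the standard normal CDF
  const t = 1 / (1 + 0.2316419 * Math.abs(z));
  const d = 0.3989423 * Math.exp((-z * z) / 2);
  const p =
    d * t * (0.3193815 + t * (-0.3565638 + t * (1.781478 + t * (-1.821256 + t * 1.330274))));
  return z > 0 ? 1 - p : p;
}

function significance(visitsA, convA, visitsB, convB) {
  const pA = convA / visitsA;
  const pB = convB / visitsB;
  const pooled = (convA + convB) / (visitsA + visitsB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / visitsA + 1 / visitsB));
  const z = (pB - pA) / se;
  const pValue = 2 * (1 - normalCdf(Math.abs(z)));
  return { rateA: pA, rateB: pB, z, pValue, significantAt95: pValue < 0.05 };
}

console.log(significance(1000, 50, 1000, 70));
// -> z ≈ 1.88, p ≈ 0.06: despite a 40% relative lift, this is just short
// of the 95% threshold, so keep the test running.
```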

5. What is multivariate testing and how is it different from A/B testing?

A/B tests are usually used to find one effective design solution for a specific goal (for example, increasing conversions). Multivariate testing is typically used to test small changes over a longer period of time. It covers multiple site elements and checks all possible combinations of those elements for continuous optimization. HubSpot expert Corey Eridon explains when to use each kind of test:

“A/B testing is a great method if you want fast, meaningful results. Since the changes from page to page are clearly visible, it will be easier to tell which page is performing best. It is also the right choice if your site has little traffic.

But for correct results in multivariate testing you need a site with high traffic, since such testing checks several changing elements at once.

If you have enough traffic for multivariate testing (although even then you can use A/B tests to try out new designs and layouts), it is best suited for when you want to make subtle changes to a page, understand how certain elements interact with each other, and gradually improve the existing design.”

6. Is it true that A/B testing negatively affects SEO?

There is a myth that A/B tests lower a site's ranking in search engines because they can be classified as duplicate content (which search engines are known to dislike). However, with the right approach to testing, this is simply not the case. In fact, Google's Matt Cutts advises running split tests to improve site functionality. Website Optimizer, for example, also offers a good debunking of this myth.

If you're still convinced otherwise, you can always add a noindex tag to one of the page variations. Read the detailed instructions for adding such a tag.

Editor's note: Google has recently published recommendations on preventing A/B tests from negatively affecting a site's position in Google search results.

7. How and when can I interpret the split test results?

The test is running, data is accumulating, and you want to know the winner. But the early stages are not the time to interpret test results. Wait until your test reaches statistical significance (see question 4), then return to your original hypothesis. Did the test confirm or disprove your assumptions? If so, you can draw conclusions. When analyzing the test, don't rush to attribute its results to specific changes: make sure there is a clear connection between the changes and the outcome, and that no confounding factors have crept in.

8. How many changing elements should be tested?

You need a test with convincing results; you are investing time in it, so you surely want a clear answer at the end. The problem with testing multiple changes at once is that you can't pinpoint which one helped. You can certainly tell which page performs better overall, but if three or four elements change on each page, you won't know which element helped or hurt, and you won't be able to carry the useful elements over to other pages. Our advice: run a series of basic tests, making one change at a time, to iterate toward the most effective version of the page.

9. What should I be testing?

  • Calls to action. Even within this one element there are several things to test; just make sure you know which specific aspect of the call to action you are testing. You can test the call's wording: what does it prompt the viewer to do? You can test its placement: where on the page does it work best? You can also test its shape and style: how does it look?
  • Headline. This is usually the first thing a visitor reads on your site, so the potential impact is significant. Try different headline styles in your A/B tests. Make sure the difference between the headlines is clear, not just a mindless rewording of the same headline; that way you will know exactly what caused the change.
  • Image. Which is more effective: a picture of a person using your product, or the product itself? Try page variants with different supporting images and see whether it makes a difference.
  • Text length. Would shortening the text make the message clearer? Or, on the contrary, is more text needed to explain the essence of the offer? By trying different versions of the body text, you can determine how much clarification a reader needs before converting. For this test to be valid, keep the content of the texts roughly the same and change only the length.

10. Can A/B testing test anything other than web pages?

Certainly! Besides landing pages and web pages, many marketers use A/B tests for email campaigns, pay-per-click (PPC) campaigns, and calls to action.

  • Email. Here the variables to test can be the subject line, personalization techniques, and the sender name.
  • PPC campaigns. In these campaigns you can apply A/B testing to the headline, body text, link text, and keywords.
  • Calls to action. Here you can experiment with the wording of the call, its shape, color scheme, and placement on the page.

11. How can I find A/B testing examples from similar companies?

There are a number of sites that collect examples and results of A/B testing. Some allow you to search by company type and most provide detailed information on how the company interpreted the test results. If you're just getting started with A/B testing, you might find it helpful to read some of these sites to understand what your company needs to test.

  • WhichTestWon.com. The site collects test examples and also runs annual competitions where you can submit your own tests.
  • Visual Website Optimizer offers A/B testing software. The company blog has a few examples that you could learn from.
  • ABTests.com. This site is no longer updated, but it has a good archive of A/B tests.

12. What should I do if I don't trust the results?

If you really don't trust the results and have ruled out errors or issues with the validity of the test, the best thing to do is run the same test again. Treat it as a completely separate test and see whether you can replicate the result. If the result repeats again and again, it can probably be trusted.

13. How often should I run A/B testing?

There is always something to test on your site. Just make sure each test has a clear purpose and results in a more functional site for your visitors and company. If you run a lot of tests and end up with minimal impact and minor wins, rethink your testing strategy.

14. What do I need to start A/B testing on the site?

The best way to run A/B testing is to use dedicated software such as Visual Website Optimizer, HubSpot, or Unbounce. If you don't mind fiddling with code a little, Google also offers a free tool: Content Experiments in Google Analytics. It's a little different from traditional A/B testing, but if you're technically savvy, it's worth a try.

15. What are the validity pitfalls besides sample size?

Last year MECLABS compiled a collection of threats to test validity, in which Dr. Flint McGlaughlin discusses testing errors and how to reduce their risk in your tests. We recommend reading the full text, but here are a couple of errors from the list:

  • Something happening in the outside world introduces bias into the test results.
  • A bug in the testing software undermines its results.

16. Should I run A/B tests on the site's home page?

Developing a workable test for the home page can be very difficult. Its traffic is highly mixed, because everyone lands there: casual visitors, potential customers, and actual buyers. In addition, the home page usually carries a huge amount of content, so in a single test it can be hard to determine what makes visitors act or not act.

Finally, because such different visitors come to your home page, it can be hard to pin down a specific goal for the test and the page. You might, for example, set out to test conversions, but if the test page gets more traffic from prospects than from actual customers, your goals for that group may shift.

If you still want to test your home page, take a look at call-to-action tests.

17. What if I don't have a control version of the page?

A control version is an existing version of a web page against which you would normally pit new versions. But you may also want to test two versions of a page that never existed before, and that is perfectly fine. Just designate one of them as the control: choose the one closest in design to the existing page, and use the other as the variation.

18. Why is A/B testing not always 50/50?

Sometimes during an A/B test you may notice that the page versions receive different amounts of traffic. This does not mean something is wrong with the test; random deviations simply happen. Think of tossing a coin: the chances of heads and tails are 50/50, yet sometimes it comes up tails, say, 3 times in a row. However, the higher the traffic to your pages, the closer the split should get to 50/50.


If as a child you loved taking apart motorized toy cars or mixing every liquid you could find in the house, this article is for you. Today we'll look at A/B testing of a website and see why, in the right hands, it becomes a powerful weapon. Dig the experimenter's spirit out of the depths of your consciousness, shake off the dust, and read on.

What is A/B website testing?

In short, it is a method of comparing the performance of two variations of the same page. For example, there are two product-card designs, both so cool that you can neither sleep nor eat. The logical way out is to check which option works best: half of the visitors are shown option 1, the other half option 2, and the winner is the one that handles its tasks better.

This is not the only use of A/B (or split) testing of a site. With it, you can test bold hypotheses, the usability of a new page structure, or different versions of copy.

How A/B Testing a Website Is Conducted

Formulation of the problem

First you need to decide on a goal. Understand what you want to achieve: higher conversion, more time spent on the site, or a lower bounce rate. If the goals and objectives are clear, change the content or design based on them. For example, you can follow the path of every growth hacker and change the placement and design of the Buy button. Right now it sits at the bottom left, and you want to see what happens if you change its look and move it to the top right.

Technical implementation

Everything is simple here: either a separate page is created on which only the tested element differs, or the programmer works some magic and implements both variants within a single document.

Preparation of control data

The page has been reworked and everything is ready for the test. But first we need to record the initial conversion rates and all the other parameters we are going to track. We assign the name "A" to the original version of the page and "B" to the new one.

Test

Now the traffic needs to be split in half at random: half of the users are shown page A, the rest page B. You can use special services for this (there are plenty of them) or have a programmer implement it.

At the same time, it is important that the "composition" of the traffic is the same. The experiment will not be objective if all users who arrive from contextual ads see only the first option while all visitors from social networks see only the second.
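In practice, "by the hands of a programmer" usually means deterministic bucketing: hash a persistent visitor identifier so the same person always sees the same variant across sessions. A minimal sketch, in which the cookie-based visitor ID and the experiment name are illustrative placeholders:

```javascript
// Sticky 50/50 assignment: hash a persistent visitor ID (e.g. from a
// cookie) together with the experiment name, so the same visitor always
// lands in the same group on every visit.
function assignVariant(visitorId, experimentName) {
  const s = `${experimentName}:${visitorId}`;
  let hash = 0;
  for (let i = 0; i < s.length; i++) {
    hash = (hash * 31 + s.charCodeAt(i)) >>> 0; // simple 32-bit rolling hash
  }
  return hash % 100 < 50 ? "A" : "B";
}

console.log(assignVariant("visitor-42", "buy-button-test")); // always the same answer
```

Drawing Math.random() anew on every page view would show the same visitor both variants and blur the results; a sticky hash avoids that.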

Analysis

Now you need to wait until enough statistics accumulate and then compare the A/B testing results. How long you have to wait depends on the site's popularity and a few other parameters. The sample must be statistically significant: the probability that the result is random should be no higher than 5%. Example: suppose both pages received a thousand visits each, with 5 target actions on page A and 6 on page B. The difference is too small to indicate a pattern (a significance test on these numbers gives a p-value of roughly 0.76, nowhere near the 5% threshold), so this result is unusable.

Most dedicated services calculate statistical significance themselves. If you are doing everything by hand, you can use a calculator.

Making a decision

What to do with the test results is up to you. If the new approach worked, you can keep the new version of the page on the site. And there is no need to stop there, especially if you see remaining potential for growth: in that case, keep option B on the site and prepare a new test.

How to make A/B and split testing objective

Reduce the influence of external factors. We have already touched on this: you need to test over the same time period, and the traffic sources should be the same for both pages. If you don't ensure equal conditions, you will get an unrepresentative sample. People coming from search behave differently on a page than visitors from a Facebook or VKontakte group. The same goes for traffic volume: it should be roughly equal.

Minimize the influence of internal factors. This matters for the websites of large companies, where the statistics can be heavily skewed by the company's own employees. They visit the site but perform no target actions, so they should be excluded from the statistics by setting up a filter in your web analytics system.

Plus, there is a fairly obvious point that sometimes gets forgotten: test one element at a time. If you changed half the page at once, yet it was not a full site redesign, the results of the experiment will not be valid.

Does A/B Testing a Website Affect SEO?

There is a popular myth that A/B testing can backfire because duplicate pages may put you under search engine filters. This is not true: Google even explains how to do it correctly and provides special tools for the purpose.

What and how can be improved with A/B testing

  • Conversion. The most popular option. Even a small change to a page can affect the conversion rate. The target action can be a purchase, a registration, a page view, a newsletter subscription, or a click on a link.
  • Average order value. Here, new upsell blocks are often tested: "similar products" and "frequently bought with this product".
  • Behavioral factors. These include browsing depth, average time on site, and bounce rate.

Testers usually try changing:

  • The design of the "Buy" or "Leave a request" buttons.
  • Page content: headings, product descriptions, images, calls to action, and everything else.
  • The location and appearance of the price block.
  • The page structure.
  • The layout, structure, and design of the application form.

In principle, anything can work; no clairvoyant can say exactly what will raise the conversion rate or the average order value. There are plenty of recommendations, but taking them all into account is simply unrealistic, and they can even backfire. Sometimes completely illogical things improve performance, such as dropping the detailed product description. Try different approaches and options; after all, it's a test.

Website A/B Testing Tools

There are heaps of them, so we picked the best. All of them are English-language and therefore expensive, but each has a free trial period. In Russia, only lpgenerator.ru does something similar, but it can only test landing pages built in the service's own constructor; you cannot load your own page.

Optimizely.com

One of the most popular services. It can test anything, in any combination. Other advantages: multi-channel testing, experiments with mobile apps, convenient result filters, targeting, a visual editor, and a bit of built-in web analytics.

changeagain.me

A fairly convenient service whose main advantage is simple, complete integration with Google Analytics: goals can be created right in the service and are then automatically uploaded to the system. The remaining features are more or less standard: a simple visual editor plus targeting by device and country. The exact set depends on the pricing plan.

ABtasty.com

This service stands out for its long trial period: a full 30 days instead of the standard 14-15. The tool also integrates with WordPress, Google Analytics, and several other services popular with marketers and webmasters abroad. Additional advantages: a user-friendly interface and detailed targeting.

How to A/B Test with Google Analytics

Log into your account, open the reports menu, scroll to the "Behavior" tab, and click "Experiments" in it. Everything there is extremely simple.

We give the experiment a name, distribute traffic across pages in the required proportion, select goals, and proceed to the next step - detailed settings.

There you set the addresses of pages A and B. If you check the box "Unify variants for other reports by content", the metrics of all variants will be counted in other reports as metrics of the original page.

After that, Analytics will generate a code snippet that you need to place on page A, and the experiment can be launched. Performance reports are available in the same Experiments menu.

How to set up Yandex Metrica for A/B testing

The work falls into two parts. First, either create two pages, or configure one page to show users two different versions of an element. How to do that is a topic for a separate, long article, so we'll set it aside for now.

After that, you need to pass information about which version of the site the user saw to Metrika. Yandex itself provides brief instructions for this. In our case, we need to create an A/B testing parameter and assign it the desired value. For the button test, we define the parameter as:

var yaParams = { ab_test: "Button1" };

or

var yaParams = { ab_test: "Button2" };

After that, the parameter is sent to Metrika, where it can be used to build a report on "visit parameters".
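How the parameter actually reaches Metrika depends on how the counter is installed. Here is a minimal sketch assuming the modern ym() tag is already on the page; the counter number 12345678 is a placeholder and chooseButtonVariant() is a hypothetical helper standing in for whatever assignment logic the site uses:

```javascript
// Report which variant the visitor saw as a visit parameter.
// 12345678 is a placeholder counter ID; chooseButtonVariant() is a
// hypothetical helper returning "Button1" or "Button2".
var yaParams = { ab_test: chooseButtonVariant() };
ym(12345678, "params", yaParams);
```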

Results

A/B (or split) testing of a site is an important, necessary, almost mandatory tool. If you regularly test new hypotheses, page performance can be raised to a new level. But it cannot be called effortless: even simply moving or recoloring a button means involving a programmer or a designer, if only briefly. Plus, any assumption can turn out to be wrong. Then again, those who take no risks get no increased flow of applications and never run around the office happy.

Original post: http://quality-lab.ru/a-b-testing/

Introduction

Emotions drive people, and managing people's emotions is every marketer's dream. As a rule, all innovations rest on a subjective "it seems to me this will be prettier / more convenient". Far less often is a specific change preceded by an analysis of customer opinions. Relying on a marketer's subjective judgment is possible, but risky. Assembling a focus group is expensive. Simply rolling out a change and watching what happens for a while is unscientific.

So how do you determine the benefit of a change without losing customers and time? A/B testing solves this problem. Using it increases customer traffic and site conversion, and raises the number of sales, clicks, and likes.

What it is?

Definition from Wiki:
A/B testing (split testing) is a method of marketing research. The essence of the method is that a control group of elements is compared with a set of test groups (in which one or more indicators have been changed) in order to find out which changes improve the target metric. A variation of A/B testing is multivariate testing: instead of two complete variants, several elements of the product or of the object under study are tested at once in various combinations, where each tested element can come in two versions (A or B).

Simply put, the entire flow of visitors to the site is divided into two groups. One group is shown the main page with, say, a Sign up button (option A); the other group sees the same page but with a Sign up for free button (option B). Testing runs in sessions; at the end of each session the results are tallied and the winning option is determined. An example of multivariate A/B testing is shown in the diagram:

How to test?

Imagine a situation: an online bank needed to increase the number of consumer loan applications. The site already had a banner inviting visitors to fill out an application, but the marketers proposed reworking it. Two layouts were handed to the testing department for A/B testing:

First of all, the testing department settled on tools for recording statistics and analyzing the results. There are dozens of A/B testing platforms on the web; the most popular include:

All of them are convenient in their own way and offer enough features to become indispensable assistants in A/B testing. Our testers chose the free Google Content Experiments (this solution is part of Google Analytics and can determine the winner on its own).

Using this platform, an experiment was created on the bank's website. To obtain correct results, several two-week testing sessions had to be run. The first session produced a mixed result: conversion of the two options was almost equal, so no A/B testing winner could be declared. After a series of similar experiments, the testers finally got a clear result: the second version of the banner (with the family photo) won. Perhaps this was because the final session fell on the New Year holidays, when the target audience was more loyal and friendly.

The upshot of the story: where previously 2 out of 10 people who viewed the banner applied for a loan, it is now 4 out of 10.

Back to A/B testing tools. For tools that cannot determine the winner themselves, the results of an A/B testing session can be processed manually or with a calculator. Manual processing means working with the ratio of conversions to site visits; it is a laborious task that demands focus and precision and can take several hours. It is far more convenient to use a ready-made calculator: just enter the test results and get the winning option. Almost all A/B testing calculators are in English, but Russian-language ones exist as well.

A sharp jump in conversion that isn't reflected in sales? Or maybe the jump doesn't really exist at all? If you base decisions on false test results, at best you miss an optimization opportunity; at worst, you reduce conversions.

Fortunately, there is a way to prevent this. What A/A testing is and how to conduct it: read on.

False positive result

Let's say you're testing combinations of a button and a title. When confidence reaches 99%, you draw conclusions and apply them in practice.

After several business cycles you observe that the updated design does not bring the expected profit. But you ran the test, you invested time and resources in it!

This is a false positive, also known as a "Type I error" or "false rejection of a true null hypothesis". It happens more often than you might think: in roughly 80% of cases.

Why is this happening?

Instrument effect

At the start of the experiment, it is important to make sure the tool is configured correctly and works as it should. Otherwise you risk getting:

  • Incorrect metrics. A single mistake can skew the A/B testing data. At a minimum, integrate with Google Analytics for cross-checking.
  • Incorrect display of the landing page. Make sure the landing pages render correctly on all devices and browsers, and that visitors do not see a flicker of the original content before the variation loads.
  • Premature test termination. Sometimes the software declares a "winner" too early, on an insufficient or unrepresentative sample. Remember: reaching statistical significance does not by itself mean it is time to stop the test. The longer it runs, the more accurate the results.

Keep your eyes open: any of these problems leads to a false conclusion. Track every goal and metric, and if some indicator is not being recorded (for example, adding an item to the cart), stop the test, fix the problem, and start over.

A/A vs A/B

An A/B test drives traffic to the control version and variation and shows which one performs better.

An A/A test is the same thing, but for two identical pages. The goal is to see no difference in their performance.

Only 20% of experiments give reliable results; statistical significance and a large representative sample alone are not enough. That is why professionals use this technique before an A/B test.

As you can see, these types complement each other.

If at the end of the experiment the conversion rates of both pages are the same, you can run an A/B test. In practice, things don't always go smoothly.

Example 1. How a page can replay its clone

This is the landing page that the Copyhackers team tested in November 2012:

After 6 days, the testing system flagged a "winning" option at a 95% confidence level. For accuracy's sake, the experiment was extended by a day and reached 99.6% confidence:

Can a page be 24% more effective than an exact copy of itself? The result is a false positive. After another 3 days, the differences disappeared:

Conclusion: the tool declared a winner too early.
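This "peeking" effect is easy to reproduce. The sketch below simulates an A/A test: two identical pages with the same true 5% conversion rate, where we check significance after every 500 visitors per page and stop at the first crossing of the 95% threshold, exactly the mistake described above. All numbers are illustrative.

```javascript
// How peeking manufactures false winners in an A/A test.
function isSignificant95(visitsA, convA, visitsB, convB) {
  const pooled = (convA + convB) / (visitsA + visitsB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / visitsA + 1 / visitsB));
  const z = (convB / visitsB - convA / visitsA) / se;
  return Math.abs(z) > 1.96; // two-sided 95% threshold
}

function aaTestWithPeeking(trueRate, peeks, visitorsPerPeek) {
  let visitsA = 0, convA = 0, visitsB = 0, convB = 0;
  for (let p = 0; p < peeks; p++) {
    for (let i = 0; i < visitorsPerPeek; i++) {
      visitsA++; if (Math.random() < trueRate) convA++;
      visitsB++; if (Math.random() < trueRate) convB++;
    }
    if (isSignificant95(visitsA, convA, visitsB, convB)) return true; // false "winner"
  }
  return false;
}

let falseWinners = 0;
for (let run = 0; run < 1000; run++) {
  if (aaTestWithPeeking(0.05, 10, 500)) falseWinners++;
}
console.log(`False "winners": ${(falseWinners / 1000) * 100}%`);
// Typical output: around 15-20%, far above the 5% the confidence level
// promises when you check only once at a fixed sample size.
```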

Example 2. How to do nothing and increase conversion by 300%

What we see:

  • the email open rate rose by 9%;
  • the number of clicks on links grew by 300%;
  • the unsubscribe rate dropped by 51%.

And everything would be fine, except this is an A/A test! The competing pieces of content are absolutely identical.

Is A/A Testing Worth It?

Renowned expert Neil Patel has seen big jumps in conversion that brought no increase in revenue. He advises testing the software first, so that you don't have to deal with the consequences of wrong decisions later.

According to Peep Laja, founder of the agency ConversionXL, such tests are themselves a waste of time.

Who should you believe? On the one hand, accuracy is paramount, and the A/A method is a way to ensure it. On the other hand, it spends resources on the test itself and on preparing for it.

Craig Sullivan, a user-experience expert, believes that 40 tests a month is a heavy load for a team. Better to spend half a day on QA than 2-4 weeks just to test the tool.

Problem #1. A/A tests consume time and traffic that you could spend studying the behavior of site visitors.

Problem #2. Both A/B and A/A tests must be carefully organized and monitored to avoid a false result, as in the Copyhackers example.

When making the decision, it's up to you: spend the time, or gamble on the software's reliability.

There is a potentially less expensive option - A/A/B.

A/A/B vs A/A

Traditional A/A testing tells you nothing about your visitors. But add one more variant to the process, and it's a different matter.

A/A = 2 identical pages compete.

A/A/B = A/A test + one additional variation.

You will find out whether the tool can be trusted. If yes, choose the best version based on its readings. If not, its readings should not be used.

Yes, it takes longer to reach statistical significance. But you evaluate the software along the way, and if it proves reliable, you evaluate visitor behavior too.

Conclusion

Do the benefits of A/A testing outweigh the drawbacks? There is no clear-cut answer. Testing every month is overkill; doing it when you adopt new testing software is enough. For those who truly begrudge the time, there is a compromise option: the A/A/B test.

If you eliminate errors today, you will get more accurate results in the future.

High conversions for you!
