# A/B Testing

# What Is A/B Testing?

A/B testing (also known as split testing or bucket testing) compares two versions of a web page or app against each other to determine which one performs better. It is essentially an experiment in which two or more variants of a page are shown to users at random, and statistical analysis is used to determine which variation performs better for a given conversion goal.

In an A/B test, you take a webpage or app screen and modify it to create a second version of the same page. The change can be as simple as a single headline or button, or as extensive as a complete redesign of the page. Then half of your traffic is shown the original version of the page (known as the control), and half is shown the modified version (the variation).

# Why Is It So Important?

In the online world, A/B testing is an important marketing strategy for maximizing your conversions. The element under test may be a headline, a button, or a complete redesign of the webpage. Both large companies and small businesses can use this test to move their KPIs. For example, suppose a company wants to improve its website's conversion rate: with A/B testing, it can increase customer satisfaction through small, targeted changes without rebuilding the entire site.

# How to Perform an A/B Test?

**Step 1: Data Collection**: Collect enough data to identify and focus on the target problem.

**Step 2: Split Traffic**: Randomly split users in half for the conversion target (we usually call these the control and test groups). The target variable can be a headline, a button, an image, or a color used in the design. Importantly, each test should change only one target variable; otherwise you cannot tell which change caused the difference.
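A common way to implement the random 50/50 split is deterministic hash bucketing, so that the same user always lands in the same group. A minimal sketch (the function name and experiment salt are illustrative, not from the article):

```python
import hashlib

def assign_group(user_id: str, salt: str = "exp-001") -> str:
    """Deterministically bucket a user into control (A) or variation (B).

    Hashing user_id with an experiment-specific salt gives a stable,
    roughly uniform 50/50 assignment across users.
    """
    digest = hashlib.md5(f"{salt}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Simulate 10,000 users: the split should come out close to 50/50.
groups = [assign_group(f"user{i}") for i in range(10_000)]
print(groups.count("A"), groups.count("B"))
```

Because the assignment depends only on the user ID and the salt, a returning user keeps seeing the same variation, which avoids contaminating the experiment.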

**Step 3: Formulate Hypothesis**: Starting from the current version, state a hypothesis about which change should improve the conversion goal, and build a variation based on it. This variation is another version of your current page with the changes you want to test.

**Step 4: Run Test**: Once you have a hypothesis ready, evaluate it against various parameters: how confident you are that it will win, its impact on macro goals, how easy it is to set up, and so on. First, decide whether the test compares averages or rates. If the metric is a conversion rate, its statistical counterpart is the two-proportion z-test; if an average is being compared, a dependent or independent t-test is used. In both cases, check the test's assumptions first, and then carry out the hypothesis test.
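For the conversion-rate case, the two-proportion z-test can be written with only the standard library. A minimal sketch (the function name and the example counts are illustrative, not from the article):

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_ztest(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled proportion under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))       # two-sided p-value
    return z, p_value

# Control: 200 conversions out of 4,000; variation: 260 out of 4,000.
z, p = two_proportion_ztest(200, 4_000, 260, 4_000)
print(f"z = {z:.3f}, p = {p:.4f}")
```

The same comparison on averages (e.g. revenue per user) would instead use `scipy.stats.ttest_ind` on the two groups' raw observations.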

**Step 5: Analyze Results**: Score the test results and check whether the difference is statistically significant. Now you know which group performs better statistically. Once you have considered these numbers, deploy the winning variation.
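The deployment decision combines the p-value with the direction and size of the observed lift. A small sketch of that decision rule (the function and threshold are illustrative, not from the article):

```python
def decide(p_value: float, rate_control: float, rate_variation: float,
           alpha: float = 0.05) -> str:
    """Deploy the variation only if the difference is statistically
    significant AND the variation actually improved the metric."""
    lift = (rate_variation - rate_control) / rate_control
    if p_value < alpha and lift > 0:
        return f"deploy variation (lift {lift:.1%})"
    return "keep control"

# p = 0.004 with conversion rising from 5.0% to 6.5% -> deploy.
print(decide(0.004, 0.050, 0.065))
```

A significant result with a *negative* lift still means keeping the control, which is why the rule checks both conditions.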

# What is Hypothesis Testing?

At the dawn of A/B testing, statisticians provided a very basic framework for statistical inference in an A/B testing scenario. Commonly known as "Hypothesis Testing," the procedure goes as follows:

- Start with the existing version of the element being tested. That existing version is termed the “baseline” (or variation A).
- Set up the alternative variation, the “treatment” (or variation B).
- Calculate the required sample size. This calculation is based on the baseline’s current conversion rate (which must already be known), the minimum difference in performance you wish to detect, and the desired statistical power.
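The sample-size step can be sketched with the standard formula for comparing two proportions. The function name and example numbers below are illustrative, not from the article; the calculation assumes a two-sided test:

```python
from math import ceil
from statistics import NormalDist

def required_sample_size(p_base: float, mde: float,
                         alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-group sample size to detect an absolute lift `mde` over the
    baseline conversion rate `p_base` (two-sided test)."""
    p_var = p_base + mde                                # expected variation rate
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)       # e.g. 1.96 for alpha=0.05
    z_beta = NormalDist().inv_cdf(power)                # e.g. 0.84 for 80% power
    variance = p_base * (1 - p_base) + p_var * (1 - p_var)
    n = (z_alpha + z_beta) ** 2 * variance / mde ** 2
    return ceil(n)

# Baseline 5% conversion, detect a 1-point absolute lift (5% -> 6%).
print(required_sample_size(0.05, 0.01))
```

Halving the minimum detectable effect roughly quadruples the required sample size, which is why very small expected lifts demand long-running tests.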

**You can access the project codes here**:
