A/B testing, also known as split testing, is a method of comparing two versions of a digital asset, such as a webpage or app, against each other to determine which one performs better on a given metric or objective. The idea is to show the two variants (A and B) to similar users at the same time and compare which variant better drives the desired action.
Only one change should be tested at a time, so that any difference in results can be attributed to that change.
👥 Who
A collaboration across the core team to design and run the test.
🛠 Running the technique
Hypothesis Creation: Before starting the test, you need a hypothesis. For example, you might hypothesize that moving the ‘Register Account’ button from the bottom to the middle right will increase the click-through rate of non-registered users.
Variants Creation: Based on your hypothesis, you create two versions of the digital asset: Variant A, the current version (often called the "control"), and Variant B, the new version with the change you want to test.
Traffic Splitting: Visitors to the digital asset are randomly shown either variant A or variant B. Traffic is often split 50/50, but other ratios can be used depending on the circumstances.
Data Collection: As visitors interact with each version, data is collected on how many of them take the desired action.
Result Analysis: After a sufficient amount of data is collected, the results are analysed. Statistical analysis helps determine if the differences in performance between the two variants are significant (i.e., not just due to random chance).
Implementation: If Variant B (the change) proves to significantly outperform Variant A (the control), then you might decide to fully implement the change. If not, you can revert to the original or try a different test.
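The steps above can be sketched in code. The following is a minimal illustration, not a production implementation: the function names (`assign_variant`, `z_test`) and the sample numbers are hypothetical. It hashes a user ID to get a stable traffic split, then runs a two-proportion z-test on the collected conversion counts to judge whether the difference between the variants is statistically significant.

```python
import hashlib
import math

def assign_variant(user_id: str, split: float = 0.5) -> str:
    """Deterministically assign a user to 'A' or 'B' by hashing their ID,
    so the same user always sees the same variant."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 10_000
    return "A" if bucket < split * 10_000 else "B"

def z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test on conversion counts.
    Returns (z statistic, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal-approximation p-value via the error function
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Illustrative analysis: 120/1000 conversions on A vs 160/1000 on B
z, p = z_test(120, 1000, 160, 1000)
if p < 0.05:
    print(f"Significant difference (z={z:.2f}, p={p:.4f}) — consider rolling out B")
else:
    print(f"No significant difference (z={z:.2f}, p={p:.4f}) — keep A or retest")
```

With these illustrative numbers the p-value falls below the conventional 0.05 threshold, which corresponds to the "implement the change" branch of the final step; with a borderline result you would instead keep collecting data or revert to the control.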