A/B/n testing is the process of A/B testing with more than two different versions, and it is distinct from multivariate testing. The "n" refers to any number of additional variations. Despite these additional variations, A/B/n testing works the same way as standard A/B testing.
A/B/n testing is the process of A/B testing with more than two different versions. The little "n" doesn't refer to a third test, but to any number of additional tests: A/B/n encompasses A/B/C, A/B/C/D, or any other type of extended A/B test.
A/B/n testing is a matter of splitting users into groups, assigning a variation (typically of a landing page or other webpage) to each group, measuring the change in a key metric (typically conversion rate), checking the test results for statistical significance, and deploying the winning version.
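To make those steps concrete, here is a minimal sketch in Python. The variant names, traffic numbers, and the use of a chi-squared test from scipy are illustrative assumptions, not a prescribed implementation: users are hashed into one of n groups so assignment is stable, and the conversion counts are then checked for statistical significance.

```python
# Minimal A/B/n sketch: deterministic bucketing plus a chi-squared
# significance check. Variant names and counts are hypothetical.
import hashlib
from scipy.stats import chi2_contingency

VARIANTS = ["control", "variant_b", "variant_c"]  # an A/B/C test

def assign_variant(user_id: str) -> str:
    """Hash the user ID so each user always lands in the same group."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % len(VARIANTS)
    return VARIANTS[bucket]

# Hypothetical results per variant: [conversions, non-conversions].
observed = [
    [120, 880],   # control:   12.0% conversion
    [150, 850],   # variant_b: 15.0% conversion
    [135, 865],   # variant_c: 13.5% conversion
]

chi2, p_value, dof, _ = chi2_contingency(observed)
print(f"p-value: {p_value:.4f}")
if p_value < 0.05:
    print("At least one variant differs significantly; compare pairwise before deploying.")
else:
    print("No statistically significant difference yet; keep collecting traffic.")
```

In practice an experimentation platform handles the bucketing and statistics for you; the point of the sketch is only to show how the group assignment and the significance check fit together.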
Though they're often confused, A/B/n testing is not the same as multivariate testing. The key difference lies in how the variations are chosen. Let's use a webpage as an example. Say we have an image and a call to action (CTA) button, with three variations of each. If we run a multivariate test, it automatically tests every possible combination: in this case, 3 × 3 = 9. If we run an A/B/n test, we hand-select which variations we want to test, which is typically far fewer than every possible combination. With a large number of page elements to test, the number of combinations in a multivariate test grows exponentially, quickly demanding massive amounts of traffic and time to reach statistically significant results; in an A/B/n test, we can choose exactly how many variations to deploy, as the sketch below illustrates.
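The following short sketch shows that combinatorics difference (the element and variant names are hypothetical): a multivariate test enumerates every combination, while an A/B/n test runs only the variations we pick by hand.

```python
# Multivariate vs. A/B/n: all combinations vs. a hand-picked subset.
from itertools import product

images = ["hero_photo", "product_shot", "illustration"]
ctas = ["Buy now", "Start free trial", "Learn more"]

multivariate = list(product(images, ctas))
print(len(multivariate))  # 9 combinations, all tested automatically

abn_variations = [
    ("hero_photo", "Buy now"),           # A: control
    ("product_shot", "Buy now"),         # B: new image only
    ("hero_photo", "Start free trial"),  # C: new CTA only
]
print(len(abn_variations))  # 3 hand-selected variations
```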