Most of the time when it comes to landing page testing, whether it be multivariate or split testing, you read about the glowing successes: "we increased our conversion rate by 454 percent!" But what do you do when your test fails?
Let's focus on what causes your A/B/n experiments to fail, because this is the most common type of landing page testing done by small- to mid-size webmasters. We will also explore how to gain value from even a failed test.
Some Causes of Experiment Failure

Poor Test Design

When it comes to test design, what you get out of it depends on what you put into it. If the test design was based solely on aesthetic sensibilities and was not informed by any kind of research or analytics, then the test is at risk of failing.
Many tools and resources can help inform a well-researched test design. Some places to get research and inputs from are:
- User testing.
- Heuristic usability research.
- Clickstream analytics, such as Google Analytics.
- Previous tests you've done.
- ClickTale.
- Your editorial staff.

Conducting more research for your test design sets you up for a successful outcome.
Not Testing Variations That Are Truly Different

One common mistake made during split testing is making only a couple of minor changes to the variation page and then running the test. Oftentimes, tests like these result in the original beating the variation.
Split testing often isn't the right tool for testing small changes; that job is better suited to multivariate testing. With multivariate testing, you can more soundly assert that a specific change is responsible for a given increase in page performance.
To be clear, you shouldn’t totally shy away from making small changes when designing a split test – just don't be afraid to make the bolder changes that you believe will increase conversions. Don't be afraid to fail fast.
One thing to remember about split testing is that because it is an A/B/n test, you don't know which feature of the variation page is responsible for it outperforming the original. Testing the whole page gives you license to test more freely and make all the changes you need to create a high-converting page.
Running an Invalid Test

Ending an experiment before it is complete is another common testing mistake. Often, this happens when a webmaster sees that a given variation page appears to be the clear winner after a couple of days and feels confident that there is no way the original can outperform it. Aborting a split or multivariate test before you have statistically significant results can lead you to an invalid conclusion.
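To make "statistically significant" concrete, here is a minimal sketch in Python of the two-proportion z-test that most testing tools run for you behind the scenes. The visitor and conversion counts are hypothetical, and the 0.05 threshold is just a common convention, not a rule from any particular tool.

```python
from math import sqrt, erf

def ab_significance(conv_a, visitors_a, conv_b, visitors_b):
    """Two-proportion z-test: is the variation's conversion rate
    significantly different from the original's?"""
    p_a = conv_a / visitors_a                # original conversion rate
    p_b = conv_b / visitors_b                # variation conversion rate
    p_pool = (conv_a + conv_b) / (visitors_a + visitors_b)  # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical numbers: 120 conversions from 4,000 visitors vs. 150 from 4,100
p_a, p_b, z, p_value = ab_significance(120, 4000, 150, 4100)
print(f"original {p_a:.2%}, variation {p_b:.2%}, z = {z:.2f}, p = {p_value:.3f}")
# Only call a winner if p is below your chosen threshold (commonly 0.05);
# a "clear lead" after a couple of days will often fail this check.
```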
What To Do After Your Test Fails

Dive Into Analytics

Once your test has failed, the next step is to dive into your analytics and segment everything about the pages you tested.
If you have user behavior analytics, you can view heat maps of how your visitors were behaving on those pages.
If you just have clickstream analytics, dig into behavior-related metrics and try to identify any wins your failed variation page may have had. For example, perhaps you were seeking to increase sales from a given landing page, and the variation failed to beat the original on sales but had a significantly lower bounce rate. These are nuggets worth looking for.
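As an illustration of that kind of digging, here is a rough sketch that segments results by device to surface places where the "losing" variation actually won. It assumes you can export per-visit data with hypothetical columns named variant, device, bounced, and converted (the last two as 0/1 flags); your analytics tool's export will differ.

```python
import pandas as pd

# Hypothetical per-visit export from your analytics tool
visits = pd.read_csv("landing_page_visits.csv")  # columns: variant, device, bounced, converted

# Compare bounce and conversion rates per variant within each device segment
segmented = (
    visits.groupby(["device", "variant"])
          .agg(visits=("converted", "size"),
               bounce_rate=("bounced", "mean"),
               conversion_rate=("converted", "mean"))
          .round(3)
)
print(segmented)
# A variation that lost overall may still show a lower bounce rate or a higher
# conversion rate on mobile, for example: a nugget worth carrying into the next test.
```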
Embrace Failure & Run Another Test

According to Noah Kagan, writing for Visual Website Optimizer, "only 1 out of 8 split tests have driven significant results [for AppSumo]." And according to John Quarto-vonTivadar, in the same article:
"The purpose of testing is not to find out what works, but rather to find out what does NOT work. The tests by Noah reveal a rather large amount of information and insight towards future testing. In fact, when a test 'works' – and I use quotes on that to mean 'does what we wanted it to do by supporting the hypothesis in some way' – we often learn *less* because we over-interpret the success. As Lance also pointed out, it isn't the headlines (or the pop-up) that is the problem, it's the contextual basis under which they were presented."

Additional Resources

- Appsumo reveals its A/B testing secret: only 1 out of 8 tests produce results by Noah Kagan – the case study referenced above.
- Split Testing Adwords: You're Doing It Wrong by Dan Thies – a really good article discussing statistical significance and split testing for AdWords.
- Why Split Testing Is Like Sex In High School by Danny Iny – includes some good tips and alternative perspectives.
- How to Double Your Subscriber Growth With 10 Minutes Work – a good article covering growth opportunities using split testing.
- Free A/B Split Testing Tools – these tools include a couple of fun calculators that are worth checking out.