Gut instinct is overrated.
Try setting up a split test on one of your website’s main pages with four new variations.
And try to predict which will win.
In theory, you have a one in five chance of getting it right.
In reality, you’re likely to have a different perspective on your website than your visitors, so your chance of getting it right may be lower than expected.
If you embrace split and multivariate testing (I hope you do), you’ll almost certainly hit a number of questions and hurdles after each test:
Question/Hurdle 1: “Really? That one won? I find that hard to believe.”
Answer: Let the experiment run longer to generate more data. The greater the sample, the more representative the data.
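If you want a feel for why sample size matters here, one common sanity check is a two-proportion z-test. The sketch below is a minimal illustration in plain Python, using hypothetical visitor and conversion counts; it shows how the same gap in conversion rates can be inconclusive on a small sample yet significant on a larger one.

```python
import math

def z_test(conv_a, visitors_a, conv_b, visitors_b):
    """Two-proportion z-test: is B's conversion rate really different from A's?"""
    p_a = conv_a / visitors_a
    p_b = conv_b / visitors_b
    # Pooled conversion rate under the null hypothesis (no real difference)
    pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    return (p_b - p_a) / se

# Same 2% vs 2.5% conversion rates, two different sample sizes
# (all numbers are made up for illustration):
print(z_test(20, 1000, 25, 1000))      # ~0.75: below 1.96, inconclusive
print(z_test(200, 10000, 250, 10000))  # ~2.38: above 1.96, significant at 95%
```

The point of the sketch: the observed difference is identical in both cases, but only the larger sample gives you enough evidence to trust it. That’s why “let it run longer” is usually the right answer to a surprising result.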
Question/Hurdle 2: “Won’t the new version adversely affect our SEO rankings?”
Answer: What’s more important: the volume of traffic you get from the search engines, or what happens to the visitors after they arrive at your website? Hits or conversions?
Question/Hurdle 3: “I don’t like the winner. It just doesn’t look good.”
Answer: Is your website focused on your needs or those of your visitors?
One of the risks of running multivariate tests is that you’ll get answers you don’t like. But it’s not about you.
Confession: we regularly run tests on our SoftwarePromotions website. The most recent was on the three columns of text on our home page.
Initially, I didn’t really want to believe the results (Question/Hurdle 1), so we let the experiment run longer. Even after we had a large data sample, I still didn’t like it (Question/Hurdle 3), but I had to trust the data.
Looking for evidence to prove a theory is flawed from the outset. Looking for evidence to test a theory is purer and cleaner.
Data has no agenda but the facts.