Someone recently asked on the GWO forum:
Why would 10% expected improvement take a lot longer than a 20%?
I would think it is the other way around.
The truth is – the smaller the expected improvement, the longer it takes.
Or, in other words: the bigger the difference in conversion rate between two variations, the less time it takes to detect that difference.
Here is an analogy that will hopefully provide an intuitive answer.
Let's say I want to test two basketball teams against each other to see which is the better team.
Team A currently wins 20% of the time.
I want to test if team B is a 100% improvement over team A, which would mean team B wins 40% of the time.
The question is, how many games do they need to play to be 95% sure that team B is indeed better? (95% is the confidence level GWO uses.)
Without going into the math, if team B indeed is twice as good as team A, I should know fairly quickly.
Big differences are obvious in a short time.
On the other hand …
I want to test if team B is just 10% better than team A, which would mean team B wins 22% of the time.
How many games do they need to play to be 95% sure that team B is indeed better?
Again, without going into the math: if team B is indeed just 10% better than team A, they'll need to play quite a lot of games before we can be 95% certain that team B is the better team.
Small differences take longer to notice than big differences.
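To put rough numbers on the analogy, here is a small sketch using the standard sample-size formula for comparing two proportions. This is not GWO's exact math, and the 80% power level is my own assumption; it just shows how dramatically the required number of games grows as the difference shrinks.

```python
from math import sqrt, ceil
from statistics import NormalDist

def games_needed(p_a, p_b, alpha=0.05, power=0.80):
    """Approximate games per team for a two-sided two-proportion
    z-test to detect the difference between win rates p_a and p_b.
    (Textbook formula; GWO's internal calculation may differ.)"""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for 95% confidence
    z_b = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
    p_bar = (p_a + p_b) / 2                    # pooled win rate
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p_a * (1 - p_a) + p_b * (1 - p_b))) ** 2
    return ceil(numerator / (p_b - p_a) ** 2)

print(games_needed(0.20, 0.40))  # 100% improvement: under a hundred games
print(games_needed(0.20, 0.22))  # 10% improvement: several thousand games
```

Detecting the 100% improvement takes on the order of a hundred games, while the 10% improvement takes thousands, which is exactly why the smaller expected improvement makes the test run so much longer.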
July 10, 2009 at 4:03 am
This is exactly what Tim Ash describes in his book Landing Page Optimization. I see lots of people set up experiments where the difference between their variations is so subtle that I can barely see it myself. When using a tool such as Google Website Optimizer, it's important to make the test elements you change quite dramatic, especially on lower-traffic sites.