It’s a common belief that you need to make as many changes as possible to optimize conversion rate, which requires examining a large amount of data. At InsightWhale, we discovered that this is not always true. In fact, as this case study shows, small tweaks can often have the greatest impact.
Pinpointing a problem
Last October we received a call from the owners of an Australian comparison shopping service (an aggregator of products from online stores). The service lets users search for the lowest market price on any type of product, find the best shopping deals online and buy goods through the service’s website.
The service owners called us for help. Their goal was simple: they wanted users to buy more through the website, which meant making the website work more efficiently.
It is always a good idea to start a conversion optimization process by researching the audience. According to Google Analytics, more than 40% of users arriving from Google CPC browsed the website on tablets or mobile devices, but only 21.13% of them, after selecting a product, went on to choose a merchant and click on the ad.
We found that a major problem was the way the website treated mobile users. The statistics showed that the website could be much more efficient, and the problem sat at the very first stage of the funnel. Since further users dropped out at later stages (such as placing an order and agreeing to the merchant’s terms of service), by the end of the sales funnel only a few users made it to clicking on the ad.
A large portion of mobile users was leaving the website at the very first stage of the funnel, which kept the conversion rate low. Why would mobile users leave at the first stage? We plunged into research.
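The drop-off described above can be sketched as a simple funnel calculation. The stage counts below are hypothetical; only the 21.13% first-stage rate comes from the case study.

```python
# Hypothetical funnel counts; only the 21.13% first-stage rate is from the case study.
funnel = {
    "product_selected": 10_000,
    "merchant_chosen": 2_113,   # 21.13% proceed past the first stage
    "ad_clicked": 400,
}

def stage_rates(funnel):
    """Return each stage's conversion rate relative to the previous stage."""
    stages = list(funnel.items())
    rates = {}
    for (_, prev_n), (name, n) in zip(stages, stages[1:]):
        rates[name] = n / prev_n
    return rates

rates = stage_rates(funnel)
print(rates)
```

Laying the numbers out this way makes it obvious which stage loses the most users and therefore where optimization effort should go first.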
The root of all evil
After studying the mobile version of the website, we found that the main reason conversions weren’t growing was the pale, weak design of the ads. Apart from the price and the seller’s logo, the ad contained no other information. In other words, it had nothing catchy about it. For example, this is what the “Compare” block looked like:
Mobile users usually perform a concentrated, quick search for information. They tend to focus on the brightest elements of a page and avoid scrolling down long pages. That was exactly the issue with the Australian website: the design of its ads didn’t meet the needs of mobile users.
We formulated a hypothesis: “If we improve the visual design of the ads, the number of clicks on them will grow, and more users will continue to the next stage of the funnel and reach the end of it (thus raising ROI)”.
The test covered around 7% of mobile AdWords traffic (about 3.5% of all mobile traffic). It ran for 2 weeks, and we tested 3 variations.
Variation 1 included minimal changes. Rather than developing it from scratch, we improved the existing version by making it more prominent and visible, and made the ad look more obviously clickable. These small changes brought more clicks and raised the conversion rate by 12.7% compared to Control.
In the second variation we significantly reworked the design of the ads: we changed the price information, added new graphical elements and reprioritized the key points.
Variation 2 gave us 3.9% more clicks than Control.
Finally, the third variation was similar to the second, but simpler and with minimal detail; some images were removed from it.
The resulting conversion rate was 4.5% higher than Control’s.
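Lifts like these can be sanity-checked with a standard two-proportion z-test before declaring a winner. A minimal sketch, assuming hypothetical click counts (the case study does not report sample sizes):

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: control at 4.0% CTR, variation at 4.5% CTR
# (roughly a 12.7% relative lift, like Variation 1's)
z, p = two_proportion_z(200, 5000, 225, 5000)
print(round(z, 2), round(p, 3))
```

Notably, with these made-up sample sizes the lift would not reach significance, which is exactly why a test needs to run long enough (here, two weeks) to accumulate sufficient traffic.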
A post-test analysis
All variations that were tested:
The winner was Variation 1: its ads received 12.7% more clicks from mobile users. The profit from implementing Variation 1’s changes fully covered the testing costs in just one month; in fact, after that first month the profit was twice the testing costs.
The test results for mobile-traffic clickouts were as follows:
Minimum changes for maximum conversions
The test results point to something close to a universal truth. Here are the principles we took away:
- You don’t necessarily need big changes to significantly increase the efficiency of your website. As you can see, the winner in our test was the first variation, which included only a few changes. In fact, its results were much better than those of the other two variations, which were created from scratch.
- To ensure reliable testing, include more than two versions in the experiment. You won’t get a verifiable test result unless you have more than one alternative to compare with the control version. If we had tested only the first variation, we would have been left wondering whether we had chosen the best possible design. If we had tested only the second or the third, we would never have identified the most efficient version of the block.
- Don’t look for an ideal version. The rapid conversion leap in our test (compared to the control version) proves that the chosen design is very effective. However, it is unreasonable to keep searching for an even more profitable design: you probably won’t achieve the same rapid growth, and the cost of further tests may exceed the money you gain from an extra 0.5% or 1% of conversion rate. Moreover, there is a risk that “squeezing” the maximum out of your users will deliver only a temporary sales boost while hurting the website in the long run.
- If your sales are below the market average, testing pays for itself quickly. When you have fewer conversions than is typical for your market, finding an optimal alternative will significantly grow your revenue and cover all the testing expenses.
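The payback logic in the last two principles can be made concrete with back-of-the-envelope arithmetic. All figures below are hypothetical; the case study only states that Variation 1’s extra profit was twice the testing cost after one month.

```python
def payback_months(test_cost, monthly_revenue, relative_lift):
    """Months until the extra revenue from a conversion lift covers the test cost.

    All inputs are hypothetical illustration values, not figures from the case study.
    """
    extra_per_month = monthly_revenue * relative_lift
    return test_cost / extra_per_month

# A large lift (12.7%, as for Variation 1) pays back in under a month...
fast = payback_months(test_cost=2000, monthly_revenue=31500, relative_lift=0.127)

# ...while chasing a further 1% lift on the same revenue takes many months,
# which is why hunting for an "ideal" version rarely pays off.
slow = payback_months(test_cost=2000, monthly_revenue=31500, relative_lift=0.01)

print(fast, slow)
```

The contrast between the two calls is the whole argument: the same test budget that pays for itself in weeks on a big lift can take over half a year to recoup on a marginal one.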