A Simple Guide to Effective Conversion Rate Optimization

Conversion rate optimization can seem complicated, but when you follow a clear process, you can get great results quite easily.

Conversion rate optimization (CRO) fine-tunes your marketing, and it can lead to such big improvements in sales and profit that even the smallest businesses should do it.

At its most basic, CRO is a method of increasing the number of potential, past, or current customers who take a desired action in any marketing situation, online or off.

But for now, let’s zoom in on maximizing your chances of converting your website visitors to subscribers, customers, members, clients, and so on.

Reaching high conversion rates isn’t hard. But it does require spending some time testing site elements and creating alternatives that work better.

But here’s the catch: You should follow a clear process when you set up your tests, or they might hurt your business rather than help it.

The process isn’t complicated, but it’s easy to make a mistake that skews your results.

So let’s look at the ideal process for conversion rate optimization and the mistakes that even some well-known (and expensive) CRO companies often make with their clients’ tests.

You’ll also learn what kinds of tests are most likely to create big improvements.

What you need to get started with conversion rate optimization

Getting started with CRO isn’t difficult, and it’s nowhere near as complicated as you might think.

What you need to get started depends on what you’re testing. But let’s use a page on your website as an example.

  • You need access to your website’s admin or back-end area. You’ll need to copy-paste a few lines of code on the page you want to test. If you have a WordPress site, you can copy-paste the code into the theme settings or use a plugin that lets you add code to the page “header” (try this one). The code won’t be visible to your visitors, but it’s necessary for the testing software to work.
  • You need testing software that handles the technical stuff. The software divides visitors between the original version of the page you’re testing and the test variation, and it tracks how many people convert (do what you hope they do). I use and recommend VWO (Visual Website Optimizer) because it’s very easy to set up and use, you can test almost anything you want with it, and I’ve found their customer service to be capable and responsive.

Before we get to the actual testing process, here are a few things you should know:

  • The original version of a tested page (or email, offer, advertisement, picture, headline, webinar, video, etc.) is called the “control.”
  • The new version that’s tested against the control is called the “variation” or “test.”
  • A test measures conversion rates in percentages. For example, if 37 out of 100 visitors to your opt-in landing page join your email list, the page’s conversion rate is 37%.
  • The test result is never completely accurate because there’s always a chance that luck affected the result. So you’ll work with the likelihood of correct results, which is measured in percentages. The higher the percentage, the higher the likelihood that one version is the best. For example, a variation might have a 98% chance of being better than the control. The testing software calculates the confidence level, so all you need to do is be aware of it.
  • It’s impossible to predict exactly how long a test will take. If you’re testing a webpage, for example, the timeframe depends on how many visitors come to the page each day and what each variation’s conversion rate is. The more visitors and the bigger the difference in conversion rates, the sooner the test is complete. Some tests are ready in hours, while others take weeks to finish.
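To make the confidence level above a bit less abstract, here’s a minimal Python sketch of the kind of calculation testing software performs behind the scenes. It uses a one-sided two-proportion z-test with a normal approximation; the function name and the example numbers are mine, not taken from any particular tool.

```python
from math import erf, sqrt

def confidence_better(conv_a, n_a, conv_b, n_b):
    """One-sided chance that variation B truly beats control A,
    using a normal approximation to the two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)           # pooled conversion rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # standard error of the difference
    z = (p_b - p_a) / se
    return 0.5 * (1 + erf(z / sqrt(2)))           # normal CDF of the z-score

# 37 of 100 visitors converted on the control, 52 of 100 on the variation
print(round(confidence_better(37, 100, 52, 100), 3))
```

With these hypothetical numbers the variation comes out above the 95% threshold; with smaller samples the same rates would not, which is why the software makes you wait.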

Now let’s examine the ideal, simple process for conversion rate optimization.

Step 1: Make a guess

Come up with an idea of what might work better than what you’re doing now. You can base it on a gut feeling or statistics, or you can consider options that other people have used with good results.

But regardless of how you come up with the idea, it’s just a guess of what might improve your results. Even the most experienced CRO professionals can only create guesses, although their guesses are based on knowledge and experience.

Be specific. Write the guess down on paper if you need to.

Here’s an example of a specific guess that works well as the basis for a test: “My potential yoga studio customers might be more interested in learning five new yoga poses than learning how yoga improves posture, so maybe I should offer an ebook about new yoga poses, instead of the one about posture that I now promote.”

Coming up with test ideas might be easy for you, but it’s a big task for some. We’ll look at the tests that are most likely to create big improvements a bit later, but first, you should be aware of a few mistakes that are easy to make in the first step.

Be aware:

  • Your test will only create results if people perceive the test variation as different from the control. Changing a headline from “Make more money” to “Create bigger profits” is unlikely to alter conversion rates by more than a fraction; people don’t perceive the two headlines as different in any significant way, so they don’t act differently either. In the same way, button color tests almost never have a significant impact on conversion rates (although the rare successes are often shared widely) because a different button color rarely changes how visitors perceive a web page.
  • Your results might be different from someone else’s results even with the same test, control, and variation. Don’t expect to duplicate results. Sure, it’s likely that your results will be similar if you’re testing the same thing for the same audience in the same way. But since even small differences in the test can lead to big differences in the results, you should do your own tests. You can find inspiration and ideas from others—just don’t assume that their tests prove something about your situation.

Step 2: Create the test

When you’ve decided on something that might improve your results—your guess—devise a way to test it.

Let’s say you guessed that a different opt-in incentive (a give-away or freebie) might convert more visitors into subscribers. In this case, the test might be as simple as presenting a different version of your opt-in landing page to 50% of the visitors (the testing software does that for you).

In other words, come up with a test that either proves your guess was right or proves it wrong.
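The 50/50 split itself is something the testing software handles, but as a rough illustration of the usual approach, here’s a sketch of deterministic bucketing: hashing a visitor ID so a returning visitor always sees the same version. The function and ID format are my own illustration, not any specific tool’s API.

```python
import hashlib

def assign_version(visitor_id: str, split: float = 0.5) -> str:
    """Deterministically bucket a visitor: the same ID always gets
    the same version, and buckets approach the requested split."""
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map the hash to [0, 1]
    return "control" if bucket < split else "variation"

print(assign_version("visitor-42"))  # stable across page loads
```

Keeping the assignment stable matters: if a visitor saw the control yesterday and the variation today, their behavior would muddy both sides of the test.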

Be aware:

  • What people expect strongly affects what they do. For example, if people click a link that says, “Download an ebook about how yoga affects your posture,” they expect to get an ebook about posture, and they’re likely to opt in to get it—even if they’d somewhat prefer to learn new yoga poses. So, instead of testing your opt-in landing page (where expectations might already be set), you might need to test the offer on your home page where visitors will have less specific expectations that are likely to affect the test results.
  • Sometimes you only want to test how a certain group of people acts. Typically, when you don’t include everyone in a test, the goal is to include only the most likely subscribers or buyers because what converts them is much more important than what converts anyone else. For example, you might know that people from your own country are much more likely to opt in or buy than anyone else, so you can exclude the rest of the world from your tests.
  • Include everyone in your tests unless you really only want to test what works for certain people. You don’t want to make this mistake: A well-known CRO company tested email frequency for one of their clients. But they only included recent buyers and people who had recently indicated interest in buying something. As they should have realized, that’s the group most likely to be okay with frequent emails. They concluded that more frequent emails were the best choice for the whole email list, even though they only knew how a certain group acted. (You could guess that the results would be the same for the whole list, but the test didn’t indicate that.) So, unless you want to know how a certain group of people acts, just include everyone in your tests.

Step 3: Analyze the results

Even if your initial guess is very specific, and you’re sure you did the test just right, you should consider what might’ve gone wrong.

If you tested your opt-in incentive on your homepage, consider whether an unusually high number of visitors came from some specific or unusual source during the test. And if that’s the case, consider how that might have affected the results.

In most cases, analyzing the results only means checking the actual result (whether the new variation improves conversion rates) and considering why the result might not be trustworthy.

You should also look at the results and try to understand the reasons behind them. The more tests you do, the better you learn to predict what creates the best results.

Let’s say you have an opt-in landing page that promotes an ebook, and when people download it, they also receive your regular emails. If you add a testimonial on the value of the ebook, and it improves the conversion rate, it’s likely because visitors had doubts about the ebook’s benefit. But if the testimonial focuses on the value of your regular emails, it probably means the visitors are worried about getting useless emails from you.

However, the testimonial might have worked just because it added credibility. Or social proof. Or a professional feel to the page. Or something else. You really can’t know for sure.

But you can learn a lot from tests. Don’t obsess over individual test results, though—they’re easily misinterpreted. Instead, identify the kinds of changes that consistently improve your results.

Be aware:

  • Don’t rely on test results before they’re “statistically significant.” In other words, unless one variation has at least a 95% chance of being the best, the test isn’t over (the testing software calculates that for you). Even at the 95% confidence level, there’s still a 5% chance that the seemingly winning variation is actually the worst. If a test is taking too long (how long that is depends on you, but a month is probably “too long”), it’s usually better to accept that a clear result isn’t possible, and you should move on to the next test.
  • Traffic sources aren’t equal or identical. If you only include visitors from a specific ad campaign in a test, don’t assume that visitors from a different ad campaign will act the same way. Sure, if you can’t see how the source would affect the test result, you can move forward with the idea that the result applies to everyone because that’s probably the case. Just know that you’re making an assumption beyond what the test really told you.
  • Things that happen at the same time aren’t necessarily linked. If you switched your website host and your site loading time improved, and if your overall conversion rate improved at the same time, the two aren’t necessarily connected. This is something political candidates often use in their election campaigns: “Since [blank] has been running things, more people are unemployed.” We humans are often tempted to believe a time connection must mean something, so be careful not to draw those conclusions too quickly.
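Before starting a test, you can also roughly estimate how many visitors each variation will need before a 95%-confidence result is even possible. The sketch below uses a common rule of thumb (n ≈ 16·p(1−p)/d² per variation, for roughly 95% confidence and 80% power); the function name and numbers are mine, and real planning tools use more precise formulas.

```python
from math import ceil

def visitors_per_variation(base_rate: float, relative_lift: float) -> int:
    """Rough visitors needed per variation to detect a given relative
    lift, using the n ≈ 16·p(1−p)/d² rule of thumb."""
    d = base_rate * relative_lift  # absolute difference to detect
    p = base_rate + d / 2          # average rate if the lift is real
    return ceil(16 * p * (1 - p) / d ** 2)

# Detecting a 20% relative lift on a 10% base conversion rate:
print(visitors_per_variation(0.10, 0.20))
```

Note how quickly the requirement grows for small lifts: halving the detectable lift roughly quadruples the traffic you need, which is why tests of subtle changes drag on for weeks.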

Take an extra step: Make the next guess based on the results

Whether your test creates a positive or negative result, you can often create a follow-up test based on that result.

Let’s say you’re testing prices for a product. The original price is $19, and the test price is $29. If you make a bigger profit with the higher price, you could then test the $29 price against $39 to see if the even higher price would increase your profits.

On the other hand, if you make a bigger profit with the product priced at the original $19, you could then test an even lower price to see what happens.
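The comparison in a price test is profit, not conversion rate, so a lower conversion rate can still win. Here’s the arithmetic with hypothetical numbers of my own (the conversion rates and the zero unit cost are assumptions for illustration):

```python
def profit_per_100_visitors(price, conversion_rate, unit_cost=0.0):
    """Expected profit from 100 visitors at a given price point."""
    return 100 * conversion_rate * (price - unit_cost)

# Hypothetical result: the higher price converts fewer visitors
# but still produces more profit.
control = profit_per_100_visitors(19, 0.050)    # 5.0% buy at $19
variation = profit_per_100_visitors(29, 0.038)  # 3.8% buy at $29
print(f"control ${control:.2f} vs variation ${variation:.2f}")
```

If the higher price had cut the conversion rate to, say, 3%, the control would have won instead, which is exactly why you let the test decide rather than the conversion rate alone.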

Be aware:

  • You can accidentally create an “incoherent feel” with extreme tests. If your product’s sales page has a low-budget or amateurish design, a really high price feels incoherent or out of place—like a 12-bedroom mansion in a low-income part of town. So, testing only the higher price won’t necessarily create an accurate result. Instead, you should create a more expensive-looking variation that matches the high price (like building a mansion in a wealthy section of town). It’s important to consider whether the test creates an incoherent feel and, if so, how you can avoid it.
  • Finding the “perfect” balance is impossible. If you’re looking for perfection and running endless tests to find the “perfect” price for your product, you’re probably missing out on more profitable tests. When your tests start to take longer than a few weeks to produce statistically significant results (at least 95% confidence), you should probably think of other, more impactful tests.

What creates the biggest difference in your conversion rates?

As mentioned earlier, you won’t see a difference in your conversion rates if people don’t perceive your control and variation differently.

But even if they do, that doesn’t necessarily mean your conversion rates will change. The only differences that matter are meaningful differences.

You’ll get the most drastic results when you test elements that affect your visitors’ decisions the most. In other words, when you test visitors’ basic reasons for taking action against other reasons, you get the most significant results.

Simply put, conversion rate optimization helps uncover the best reasons for your visitors to take the desired action.

But testing all the ideas you have might take years. If you don’t want to spend so much time on it, you can download a quick exercise that lets you evaluate your ideas so you know which ones are most likely to work well.

Click here to download the quick 5-step exercise, which shows you—with perhaps uncomfortable clarity—how persuasive your ideas are.

If you have any questions about conversion rate optimization or the exercise, leave a comment below or send me an email. I’m happy to help.

Peter Sandeen

Do you want to improve your value proposition or conversion rates?

Or create an effective marketing strategy based on your strengths?

Click here to see how I work with businesses and how I can help you.


    Share Your Thoughts...




  1. Eric Silva said:

    Hey Peter,

    Great read man. Super simple writing that has practical application. Which is hard with a seemingly boring subject. Even though I think Conversions rule over many marketing strats.

    Keep em coming man,

    • Peter Sandeen said:

      Hey Eric,

      Thanks, that’s really good to hear 🙂


  2. I like your approach, but as a trained scientist I would add:
    The statistical analysis needed to set up a test that will yield a statistically significant outcome is hard. You will need to hire a statistician for that. Several sites offer less rigorous tests, so to see if they’re really telling you something you can bank on, remember: the number of events must be large enough that one more either way won’t change your conclusions.
    Recently I did an open-rate A/B test in Mailchimp. I have a usual open rate of about 10%, so the 20 emails in each category could be expected to produce 2 opens each. For one group to have 2 more opens than the other, there must be a very clear advantage or disadvantage between the two test items. One group had 1 open and the other had 2, so I couldn’t conclude there was a clear advantage, but Mailchimp did.
    For there to have been at least 2 more opens in one group, I would have needed 40 emails in each group.
    There are tools that will let you predict the emotional impact of headlines and subject lines. One of them is http://www.aminstitute.com/headline, so you can use a thesaurus to craft the words you use in a title or subject line. I often find that I had a real dud when I started my blog.

    • Peter Sandeen said:

      Hey Dave,

      Yep, that’s definitely true. I use Visual Website Optimizer, which at least urges you to let a test run for a week even if the results come in right away. But I think it does make conclusions with even very small sample sizes.

      I might add a note about this to the article. Thanks for pointing it out.


  3. gaby said:

    You are brilliant and very clear about what you have to say. Easy to grasp.

    • Peter Sandeen said:

      Hey Gaby,

      Thanks, glad I could help 🙂


  4. This is a really in-depth article about conversion rate optimization! I just started to use Sumo ME on my website, and though I don’t have an incentive to offer at this moment, the conversion rate seems to be sitting at around 5-8% which isn’t TOO bad. That’s the pop-up conversion rate. I’m working on an incentive that’ll hopefully jack up the conversion rates. Still a long way to go!

Copyright 2015 Peter Sandeen | about | services | contact | privacy | legal

contact {at} petersandeen {dot} com | +358 41 433 0144