We all know the case for more experimentation on brand sites, e-commerce websites, and mobile apps, and how much A/B testing helps marketers quickly collect insightful data to improve conversions.

When we talked to clients, partners, and prospects at Opticon 2022 in San Diego, we kept hearing the same pattern of challenges. Many teams stop at simple tests that compare version A and version B of a page, so they either get inconclusive results with no significant difference between the two versions, or they only solve “simple” problems such as which CTA text works better, which banner converts more, or which button color users find more appealing. Our Conversion Rate Optimization Specialists see far more impactful results when they follow the three tips below.

1. Start using a goal tree and form an experimentation methodology

Many of our clients have made the mistake of jumping straight into experimentation without first forming an optimization methodology and process. Experimentation is not just about what to test and whether a test is successful after running for a couple of weeks. First, ask your team: Why are you running experiments? What are the end goals? Which metrics will you measure along the way for each test?

At Opticon 2022, Optimizely experts talked a lot about building a goal tree at the beginning of your experimentation project. Identify your high-level company goals, the business drivers that influence those goals, and the optimization metrics or specific KPIs for your tests. This helps your team focus resources on the right problems and prioritize work where it will have the most impact. It’s not always easy to move your top-level company goals right away, but your tests can focus on the “micro-conversions” that add up to them. Here’s an example of a goal tree:

[Image: Example goal tree. Source: Optimizely]
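To make the hierarchy concrete, here is a minimal sketch of how a goal tree could be captured as a simple data structure. The goal, drivers, and KPIs below are hypothetical examples of our own, not an Optimizely artifact:

```python
# A minimal, hypothetical goal tree: one company goal, the business
# drivers that influence it, and the optimization metrics (KPIs)
# that individual experiments can target.
goal_tree = {
    "company_goal": "Grow online revenue by 20% this year",
    "business_drivers": [
        {
            "driver": "Increase checkout completion",
            "optimization_metrics": [
                "cart-to-checkout rate",
                "checkout form completion rate",
            ],
        },
        {
            "driver": "Grow qualified traffic",
            "optimization_metrics": [
                "newsletter sign-up rate",
                "landing page bounce rate",
            ],
        },
    ],
}

# Each experiment should map to one optimization metric, so every test
# traces back to a business driver and, ultimately, the company goal.
for driver in goal_tree["business_drivers"]:
    for metric in driver["optimization_metrics"]:
        print(f'{goal_tree["company_goal"]} <- {driver["driver"]} <- {metric}')
```

Keeping this mapping explicit makes it easy to check that every experiment on your backlog traces back to a business driver rather than a hunch.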

 

We also follow the experimentation cycle that Optimizely recommends, which looks something like this:

[Image: Experimentation cycle. Source: Optimizely]

  • Experimentation should actually start with “Implement”: What tool are you using to set up your tests? How can you build repeatable foundations for all your future experiments? Does your experimentation tool work well with your web/app platform? And if a test succeeds, would it be feasible to implement the winning variation, and how can you minimize the effort needed to do so?
  • “Ideate” is when you build out ideas for experimentation. Leverage the data you already have, such as current analytics, customer feedback, problems identified in previous research, or customer-reported issues backed up by data, and use it to formulate a data-driven hypothesis.
  • “Plan” all your tests for a quarter, or even longer term if possible. It’s important to account for your other development roadmaps and any digital marketing campaigns running at the same time as your experiments, so that your results are reliable.
  • “Build & QA” your experiments carefully so that the right share of traffic is allocated to each variation and any unwanted factors that could skew the test results are eliminated.
  • “Analyze” your test results. This is one of the most difficult steps. For successful tests, carefully evaluate whether the result is actually significant and fair, and that no other factor, such as seasonality, invalidates it (a simplified significance check is sketched after this list). It’s even more important to analyze inconclusive tests, ask why they were inconclusive, and come up with a follow-up action plan.
  • “Iterate” means to learn and build from test results and to use the insights to inform the next experiment. It is an ideal next step but not always relevant for your experimentation cycle; if the test is for a one-time promotion campaign or a feature release that will only happen once, you can probably skip this step.
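For the “Analyze” step, here is a minimal sketch of a fixed-horizon significance check for a two-variation test, using a two-proportion z-test. The conversion numbers are hypothetical, and this is only an illustration; it is not how Optimizely’s Stats Engine, which relies on sequential statistics, evaluates results:

```python
# Simplified fixed-horizon significance check for an A/B test using a
# two-proportion z-test. Illustration only; real tools such as
# Optimizely's Stats Engine use their own (sequential) statistics.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, visitors_a, conv_b, visitors_b):
    """Return (z, two-sided p-value) for the difference in conversion rates."""
    rate_a = conv_a / visitors_a
    rate_b = conv_b / visitors_b
    pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
    stderr = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / stderr
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical numbers: 400/10,000 conversions on A vs. 460/10,000 on B.
z, p = two_proportion_z_test(400, 10_000, 460, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p < 0.05 suggests a significant difference
```

Even when your tool reports significance for you, understanding a simple check like this makes it easier to spot results that look significant but were actually skewed by seasonality or uneven traffic allocation.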

2. Do more than just A/B tests

Though A/B testing is the most common type of experimentation because it’s easy to set up and delivers results quickly, its insights may be limited. There are other types of experiments you could run, such as:

  • Multi-page test: it helps measure conversion changes across a series of pages (unlike A/B tests that only run on one page at a time).
  • Multivariate test: it helps measure precisely how multiple changes interact and influence conversions, unlike A/B tests that only look at one change at a time (see the bucketing sketch after this list).
  • A/B/n test: it pits several entirely different variations of a page against one another in a single experiment, rather than just two.
  • At Opticon 2022, Experimentation and CRO Specialists shared plenty of examples of how bringing personalization into their experiments often delivers impressive results, with much more significant differences in KPIs between test variations. However, setting up personalization experiments can be complex and depends greatly on the tools you have available. If you’re interested in this, talk to our experts today.
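To illustrate the difference in scope, here is a minimal sketch of how a multivariate test expands a few changes into a full set of variations, and how users might be deterministically bucketed so traffic stays evenly and consistently split. The page elements and experiment key are hypothetical, and this is a simplified stand-in for what an experimentation tool handles for you:

```python
# Minimal sketch of multivariate bucketing: every combination of changes
# becomes its own variation, and each user is deterministically hashed
# into one of them so repeat visits always see the same variation.
# Element names and experiment key are hypothetical.
from hashlib import sha256
from itertools import product

headlines = ["Original headline", "Benefit-led headline"]
cta_texts = ["Buy now", "Start free trial"]
hero_images = ["product.jpg", "lifestyle.jpg"]

# 2 x 2 x 2 = 8 variations, versus 2 in a simple A/B test.
variations = list(product(headlines, cta_texts, hero_images))

def assign_variation(user_id: str, experiment_key: str) -> tuple:
    """Deterministically bucket a user into one variation combination."""
    digest = sha256(f"{experiment_key}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variations)
    return variations[bucket]

print(assign_variation("user-123", "homepage-mvt-q3"))
```

The combinatorial growth is exactly why multivariate tests need more traffic and longer run times than a simple A/B test, as noted below.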


Some of the above tests come with challenges, such as longer test times and possible outliers. However, experimentation tools like Optimizely’s Stats Engine help take care of this automatically, so you don’t have to do a lot of manual work.

3. Communicate your experimentation results to more people in your organization

Your experimentation program can impact everyone in your organization, from the development team to the product, customer service, and marketing teams. Thus, it’s often beneficial to share your experimentation program’s plan and results with the entire organization. It helps build an “experimentation culture”, where every decision is tested and backed by data. It also helps you gather more insights and test ideas from relevant stakeholders. Make sure you include the following when you present your experimentation program’s results to other groups:

  • Why did you run the experiment? What was your hypothesis? What inspired the hypothesis?
  • What experiment did you run? How many variations did you have? How were they different from one another?
  • When did you run the experiment? For how long? How many visitors were allocated to each variation?
  • What was the result? Was the result statistically significant? How did you determine the winning variation? Why do you think it won? For inconclusive tests, what do you think the reasons were?
  • Lastly, share with them next steps, action plans, and your upcoming experimentation roadmaps.


Make sure you tell a story and are very specific when presenting the results. Be open to hearing second opinions, feedback, and new ideas. And lastly, be consistent: don’t share results just once and then never tell anybody outside your experimentation team about the other tests you run.

Conclusion

Experimentation is an essential part of achieving digital success, and it’s no surprise that Optimizely is betting on more experimentation capabilities for its platform. Running experiments has become easier than ever, making it accessible – and important – for any modern digital business that wants to base its practices on facts and numbers rather than hunches.

Most important for any business embarking on an experimentation strategy is to create an experimentation culture that permeates every level of the organization. Awareness of the importance of experimentation and an understanding of how it works are crucial to staying the course, even in the face of adverse results.

With the three experimentation strategy tips above in mind and Optimizely’s tools at your disposal, your experiments will be far more likely to succeed and to make a real difference to your bottom line. If you need more advice on which experiments to run for your business or how to implement your strategy, contact us today.