How to Test Ad Creatives: Beginner’s Guide to Optimize Your Display Ad Tests

There are so many creative elements that digital marketers can test in their banner ads – from value propositions to taglines to images and styling – that it can be hard to know where to start.  

A/B testing your creatives takes a couple of weeks to reach proper statistical significance, so it’s often difficult to test every possible creative variation.  So, how should a digital marketer get started with A/B testing their banner ads?

Thunder has conducted hundreds of A/B tests and distilled those learnings into best practices for designing creative tests.  When followed, these tips can reduce the amount of time required to optimize your creative!

What is Test Significance?

Before we begin, we should address a commonly misunderstood concept: test significance. Marketers with no background in statistics often miss a critical fact: your tests may tell you less than you think.  

The reason is simple: an A/B test effectively surveys the opinions of a smaller sample of people within our target population, and sometimes that sample doesn’t fully represent the true opinion of the target population. This can expose marketers to faulty decisions based on false positives, that is, tests in which the apparent winner is not the actual over-performer in the target population.  

Statisticians correct for this type of sampling error with “statistical significance,” and you should always ask your A/B test vendor how they control for sampling errors, including false positives.  If our goal is to learn from our creative testing, then we must ensure that our outcomes are statistically significant!
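
To make that concrete, here is a minimal sketch of a two-proportion z-test on click-through rates, written in Python with made-up impression and click counts (this is an illustration, not Thunder’s methodology). A winner is only declared when the p-value falls below a chosen threshold, commonly 0.05.

```python
# Minimal sketch: two-sided, two-proportion z-test on CTR. Example numbers are invented.
from math import sqrt, erfc

def two_proportion_z_test(clicks_a, imps_a, clicks_b, imps_b):
    """Return (z, p_value) for the difference in CTR between creatives A and B."""
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    pooled = (clicks_a + clicks_b) / (imps_a + imps_b)            # pooled CTR under the null
    se = sqrt(pooled * (1 - pooled) * (1 / imps_a + 1 / imps_b))  # standard error of the difference
    z = (p_a - p_b) / se
    p_value = erfc(abs(z) / sqrt(2))                              # two-sided p-value
    return z, p_value

# Made-up example: 100k impressions served to each creative
z, p = two_proportion_z_test(clicks_a=540, imps_a=100_000,
                             clicks_b=610, imps_b=100_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # call a winner only if p is below your threshold (e.g. 0.05)
```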

#1 Test Hypotheses, Not Ads

The first question to ask when designing a creative A/B test is this: What hypothesis do we want to test?  Common hypotheses to test include:

  • Value Proposition (ex. 10% off vs. $25 off)
  • Image (ex. red car vs. blue car)
  • Tagline (ex. “Just do it” vs. “Do it”)
  • Call to Action Text (ex. “Subscribe now!” vs. “Learn more”)
  • Single Frame vs Multi-Frame

Each test should allow you to answer a question, for example: “do my customers like 10% off, or do they like $25 off?”
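
One simple way to keep yourself honest is to check that your two variants differ in exactly one element before launching the test. The sketch below uses hypothetical creative fields and values of my own choosing:

```python
# Hypothetical helper: confirm an A/B test isolates a single hypothesis.
creative_a = {"value_prop": "10% off", "image": "red SUV",
              "tagline": "Just do it", "cta": "Subscribe now!"}
creative_b = {"value_prop": "$25 off", "image": "red SUV",
              "tagline": "Just do it", "cta": "Subscribe now!"}

changed = [field for field in creative_a if creative_a[field] != creative_b[field]]
if len(changed) == 1:
    print(f"Clean A/B test: the hypothesis is about '{changed[0]}'")
else:
    print(f"No single hypothesis: {len(changed)} elements differ: {changed}")
```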

Many creative tests make the mistake of comparing creatives that were designed independently of each other, and thus vary in more than one way.  These tests are ineffective because the marketer can’t distill the result into a lesson to be applied to future creative design. The only learning from such a test is that the brand should shift traffic to the winning ad; it yields no lessons for the next new ad.

For example, the A/B test below compares different layouts, images, value propositions and CTA text all at the same time.  Let’s say Creative B wins. What have we learned? Not much, other than that in this particular set of ads, Creative B outperforms Creative A.  We don’t know why, and thus have learned nothing that we can apply to future ads.

A/B Test with No Hypothesis

 

By comparison, the following two A/B tests have specific hypotheses. The first asks: “do red cars work better than blue cars?”  At the end of this test, we will learn whether the red SUV or the blue sports car outperforms the other, and we can apply this learning to future creatives.

Hypothesis-Driven A/B Test: Car Type Drives Performance

 

In this next A/B test, the hypothesis is that the value proposition in the tagline drives performance.  A common first A/B test for a brand is to compare feature-based vs value-based taglines.

Hypothesis-Driven A/B Test: Value Proposition Drives Performance

 

#2 Test Large Changes before Small Changes

Large changes should be tested first because they generate larger differences in performance, and you want those learnings to be uncovered and applied first.  Larger effects also reach statistical significance with less traffic, so these tests conclude sooner.

Larger changes – such as value proposition and image – are also more likely to perform differently across audience segments than small changes – like the background of the CTA button.  As such, by breaking out your A/B test results by audience segment, you can learn which taglines or images pop with particular segments, which can guide the design of a creative decision tree.

Large changes: Value Proposition, Brand Tagline, Image, Product Category, Price/Value vs Feature, Competitive Claims

Smaller changes: CTA text, CTA background, Styling and formatting, Multiframe vs Single Frame

Small changes are likely to drive only small lift, so test them after testing the bigger changes.
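
To see why bigger changes pay off sooner, here is a rough sketch using the standard two-proportion sample-size approximation at roughly 95% confidence and 80% power; the baseline CTR and lift values are assumptions for illustration, not Thunder’s numbers.

```python
# Rough sketch: impressions needed per creative to detect a relative CTR lift.
def impressions_per_variant(baseline_ctr, relative_lift, z_alpha=1.96, z_beta=0.84):
    """Standard two-proportion approximation (~95% confidence, ~80% power)."""
    p1 = baseline_ctr
    p2 = baseline_ctr * (1 + relative_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2

# Assumed 0.5% baseline CTR; lifts roughly representing a big vs. a small change
for lift in (0.30, 0.10, 0.03):
    n = impressions_per_variant(baseline_ctr=0.005, relative_lift=lift)
    print(f"{lift:.0%} lift: ~{n:,.0f} impressions per creative")
```

A change that drives a 30% lift resolves with a fraction of the traffic needed to detect a 3% lift, which is why large changes should be first in the queue.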

 

#3 Test Multiple Creative Changes with Multivariate Test Design

Multivariate test designs (MVT) sound more complex than they are.  Multivariate tests simply allow you to run two or three A/B tests at the same time, using the same target population.  They are a statistically rigorous way to break Rule #1 above, which says you should test a single change at a time.  In an MVT design, you can test more than one change by creating a separate creative for every combination of changes, and then learn from these tests.  

For example, if, as below, you are testing two changes – message and image – each of which has two variations, you have a 2×2 MVT test and need to create four ads.
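
Building the full set of MVT cells is just the cross product of the variations. Below is a minimal sketch; the message and image variants are made up for illustration:

```python
# Minimal sketch: enumerate every creative needed for a 2x2 MVT test.
from itertools import product

messages = ["10% off", "$25 off"]
images = ["red SUV", "blue sports car"]

cells = [{"message": m, "image": i} for m, i in product(messages, images)]
for n, cell in enumerate(cells, start=1):
    print(f"Creative {n}: {cell['message']} + {cell['image']}")
# 2 messages x 2 images = 4 creatives, each served to the same target population
```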

Multivariate test that tests Image and Message at the same time

 

When the test is done, aggregate test results along each dimension to evaluate the results of each A/B test independently. If you have enough sample, you can even evaluate all the individual creatives against each other to look for particular interactions of message and image that drive performance.
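
As a sketch of what “aggregating along each dimension” looks like in practice (the impression and click counts below are invented), you can collapse the four cells into two independent A/B readouts:

```python
# Minimal sketch: marginal CTRs per dimension from a 2x2 MVT test.
from collections import defaultdict

# One record per MVT cell; counts are invented for illustration
results = [
    {"message": "10% off", "image": "red SUV",         "imps": 50_000, "clicks": 260},
    {"message": "10% off", "image": "blue sports car", "imps": 50_000, "clicks": 295},
    {"message": "$25 off", "image": "red SUV",         "imps": 50_000, "clicks": 310},
    {"message": "$25 off", "image": "blue sports car", "imps": 50_000, "clicks": 340},
]

for dimension in ("message", "image"):
    totals = defaultdict(lambda: {"imps": 0, "clicks": 0})
    for row in results:
        totals[row[dimension]]["imps"] += row["imps"]
        totals[row[dimension]]["clicks"] += row["clicks"]
    print(f"{dimension} (marginal CTRs):")
    for variant, t in totals.items():
        print(f"  {variant}: {t['clicks'] / t['imps']:.3%}")
```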

To Summarize:

To drive more optimizations more quickly, and to generate demand and budget for more testing, follow these simple tips:

  1. Test hypotheses that generate learnings for subsequent creative design
  2. Test large changes before small changes
  3. Test one change at a time, or set up a multivariate test framework to test several changes at once

Happy testing!