Why Marketers Need More Than One DSP – Understanding The Risks

The average advertiser uses 3 DSPs.  Part #1 of this series examined the reasons digital advertisers make use of multiple DSPs in their programmatic bidding.  Of course, the use of multiple DSPs also creates its own challenges. So in Part #2 below, we look at the challenges created around frequency and bidding against oneself by using multiple DSPs, and how the smart marketer overcomes these challenges.

Don’t multiple DSPs just bid against each other for the same inventory?

When advertisers think of using multiple DSPs to bid on inventory, the most common concern that comes to mind is that the inventory between the DSPs will overlap, and the DSPs will be bidding against each other.  In other words, the advertiser will be bidding against itself, thus inflating its bids and artificially driving up media costs.

And in a world of 2nd price auctions, marketers can see why this is a scary prospect.  We discussed in detail how bidding worked in Part #1 of this series, but here’s a brief summary:

First, DSPs conduct internal auctions and then send the winning bid to an exchange or SSP for a subsequent auction.  These internal auctions are conducted on a 2nd price basis, which means that an advertiser bidding $25 for an impression will actually be submitted to the SSP auction at only $5 if the 2nd highest bid in the DSP’s internal auction was $5.  

What does this mean if the same advertiser uses multiple DSPs? Well, if the 2nd highest bid for the same impression in the advertiser’s other DSP was $10, then the SSP is now choosing between bids of $10 and $5 from the same advertiser, and under 2nd price rules the advertiser pays $5.  Had the advertiser used only the first DSP, the other DSP would have submitted its runner-up at that DSP’s 3rd price – say, $4 – and the advertiser’s $5 submission would have cleared the SSP auction at just $4.  Using both DSPs raised the advertiser’s own clearing price.
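To make the arithmetic concrete, here is a minimal Python sketch of the two internal auctions described above (the bidder names and in-memory structure are illustrative, not any DSP’s actual implementation):

```python
# Sketch of the self-bidding scenario above: one advertiser bids $25
# in two DSPs for the same impression. Each DSP runs a 2nd-price
# internal auction and forwards its winner at the 2nd-highest price.

def dsp_internal_auction(bids):
    """Return (winning bidder, price forwarded to the SSP).

    2nd-price rules: the highest bidder wins but is submitted to the
    SSP at the 2nd-highest bid in the DSP's internal auction.
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    second_price = ranked[1][1]
    return winner, second_price

# DSP 1: advertiser's $25 bid tops a field whose 2nd price is $5
dsp1 = {"advertiser": 25.0, "rival_x": 5.0, "rival_y": 3.0}
# DSP 2: same advertiser, but the 2nd price here is $10 (3rd price $4)
dsp2 = {"advertiser": 25.0, "rival_z": 10.0, "rival_w": 4.0}

print(dsp_internal_auction(dsp1))  # ('advertiser', 5.0)
print(dsp_internal_auction(dsp2))  # ('advertiser', 10.0)

# Using both DSPs, the SSP sees the same advertiser at $10 and $5 and,
# under 2nd-price rules, the advertiser wins paying its own $5 bid.
# Had it used only DSP 1, DSP 2 would have forwarded rival_z at $4,
# and the advertiser's $5 submission would have cleared at $4 instead.
```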

This scenario is certainly possible, but marketers increasingly discount this concern for two reasons, both stemming from the rise of header bidding.

First, for every bid that is inflated by the use of multiple DSPs, there is an auction that a single-DSP marketer would simply have lost in a header bidding world.  As explained in Part #1 of this series, precisely because SSPs conduct 2nd price auctions, an advertiser can win an exchange’s auction but lose the unified auction to an exchange whose 2nd price was higher – yet still lower than the advertiser’s actual bid.  So if the advertiser’s main goal is to reach its audience, it will want to use more DSPs (and win more internal auctions). More DSPs means more exchanges submitting the advertiser’s winning 2nd price bid into the header bidding unified auction, and more wins overall.  

Is this bidding against oneself?  Perhaps, but with header bidding, this is often required to simply win enough auctions to achieve desired scale.

Second, header bidding is driving a seismic shift in real-time bidding from 2nd price to 1st price auctions within SSPs and exchanges, which eliminates this scenario at the source.  Because header bidding unified auctions simply select the highest price submitted by participating SSPs and exchanges, each SSP and exchange is incentivized to maximize its chance of winning by submitting the highest price it can. In practice, that means running 1st price auctions and submitting the winner at its 1st price rather than the 2nd price.  Many SSPs, such as PubMatic and OpenX, have adopted this practice for precisely this reason. Once SSPs and exchanges use 1st price auctions, the risk of inflating one’s own bid disappears, as long as the advertiser bids the same amount for the same category of inventory across its multiple DSPs.

How to control frequency with multiple DSPs?

A more serious challenge raised by the use of multiple DSPs is the loss of control over ad frequency.  This challenge remains largely underserved, even as demand for solutions continues to grow among large advertisers.

There are two main reasons why managing the frequency of ads served to individuals matters: (i) limiting how often an individual is served an ad reduces wasted spend, and (ii) capping exposure avoids burnout and the negative brand associations that come from over-exposure.  We have all had the bad ad experience of a brand bombarding us with the same ads. So when using a single DSP, advertisers often follow the best practices of capping frequency by day (otherwise known as pacing) and by month, campaign duration or user lifetime (to limit overall exposure to a brand’s advertising).  
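As a rough illustration of those capping best practices, here is a minimal sketch of the suppression logic a single DSP might apply; the cap values and in-memory counters are hypothetical, not any DSP’s actual implementation:

```python
from collections import defaultdict

# Minimal sketch of single-DSP frequency capping: suppress bidding for
# a user once a daily (pacing) or lifetime cap is hit. The caps and
# counters here are illustrative only.

DAILY_CAP = 3
LIFETIME_CAP = 20

daily_counts = defaultdict(int)     # (user_id, date) -> impressions
lifetime_counts = defaultdict(int)  # user_id -> impressions

def should_bid(user_id, date):
    """Return False once either cap is reached for this user."""
    return (daily_counts[(user_id, date)] < DAILY_CAP
            and lifetime_counts[user_id] < LIFETIME_CAP)

def record_impression(user_id, date):
    daily_counts[(user_id, date)] += 1
    lifetime_counts[user_id] += 1

# Serve until the daily cap suppresses the 4th impression of the day
for _ in range(4):
    if should_bid("user-123", "2018-05-01"):
        record_impression("user-123", "2018-05-01")

print(lifetime_counts["user-123"])  # 3
```

The crux of the multi-DSP problem is that these counters live inside one DSP: a second DSP keeps its own counters and happily serves a 4th, 5th and 6th impression.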

However, when using multiple DSPs, frequency capping becomes impossible to accomplish at the DSP level, since the DSPs don’t actually talk to each other. What solutions are there?

One solution is to control frequency capping in the ad server.  DoubleClick Campaign Manager supports frequency capping, but rather than suppress media buying (as a DSP would), DCM serves a blank ad. This solution is unsatisfying for the advertiser, as it results in significant wasted media spend.  

DMPs, such as Adobe Audience Manager and Oracle BlueKai, claim to offer cross-DSP frequency capping, by tracking ad impressions and then suppressing users via existing integrations with DSPs.  It’s not uncommon to use a DMP to create suppression audiences, so this seems like a natural extension of this capability. Unfortunately, Google blocks DMPs from tracking impressions on GDN inventory.  Currently 14 DMPs are blocked by Google from tracking impressions in GDN. Since Google touches a significant portion of display inventory, frequency capping becomes much less useful without cooperation from the Google ecosystem.

We expect the use of multiple DSPs to be a growing trend for major advertisers that require scale given the evolving mechanics of real-time bidding auctions. Spurred on by this new trend, these same advertisers who need scale will also be the ones most concerned with solving for control over frequency. Stay tuned for solutions that emerge in the marketplace.


Why Marketers Need More Than One DSP – Understanding Demand Side Platforms

The average advertiser uses 3 DSPs.  There are strong reasons for digital advertisers to make use of multiple DSPs in their programmatic bidding – if you have wondered why advertisers use multiple DSPs, then Part #1 of this explainer is for you.  

Of course, the use of multiple DSPs also creates its own challenges. So in Part #2, we will look at the challenges created around frequency and bidding against oneself by using multiple DSPs, and how the smart marketer overcomes these challenges.

Why do Marketers use Multiple DSPs?

The primary benefits to advertisers of using multiple DSPs are: (i) differentiated DSP features which are needed to execute each campaign, (ii) accessing DSP-specific audience data, and (iii) scaling out the reach of campaigns. Let’s deep dive into each reason.

Benefit #1: Competition among DSPs around Features and Take Rates

DSPs are differentiated in many ways.  One key area is their take rates – the percentage of media spend they charge advertisers.  Another is ease of use and level of support. For example, AppNexus has lower take rates than others, but also offers less hands-on support and a powerful but complicated API.  The Trade Desk and MediaMath, conversely, are well known for their customer education and easier-to-use interfaces. The targeting options, reporting and analytics available for media insights also vary between platforms.  

By employing multiple DSPs, trading desks are also able to pressure DSPs to add features and lower take rates, since spend can be moved easily across platforms.  Most recently, some DSPs have agreed to increased transparency by revealing the fees charged by the exchanges and SSPs that provide the ad inventory. This is a great example of DSPs accommodating customer demands in a competitive environment.

Benefit #2: Audience Data

Many DSPs have unique sources of audience data.  DoubleClick Bid Manager, of course, brings data on users of Google Display Network sites, enabling targeting options for AdX inventory (most of which is GDN) that are not available in other DSPs.  Amazon Audience Platform brings audience data unique to Amazon. MediaMath has a 2nd party data co-op called Helix that benefits many advertisers. Some DSPs, like AppNexus and The Trade Desk, offer IP-range targeting.  

Marketers may be running different strategies with various campaigns, and leveraging multiple targeting options across DSPs empowers them to do so.

Benefit #3: Scale

Ultimately, the primary driver for using multiple DSPs may be the challenge of achieving scale in large budget campaigns with only a single DSP.  A trading desk may simply be unable to spend the budget for a target audience in a large campaign without using additional DSPs.

Why is that?  It’s complicated.  But the explanation below breaks it down.

First, bidding on multiple DSPs increases the odds of winning auctions.  

How?  There are a couple of reasons:

Each DSP conducts its own internal auction before submitting a winning bid to an exchange, which then conducts its own auction to decide which DSP wins.  An advertiser can lose an internal auction in one DSP (for example, DoubleClick Bid Manager), and win an auction in another DSP (say, AppNexus) for the same ad impression.  That’s because DSPs select winning bids not based on bid price alone, but also on the profile of the user and performance factors specific to each advertiser (such as whether the viewer is likely to click on the ad).  As such, one strategy some trading desks pursue to maximize their chances of winning is to intentionally add a smaller DSP to the mix, because they will face less competition in that DSP’s internal auction.

But even once an advertiser wins the DSP auction and the exchange auction, there is increasingly another auction that comes next that they might still not win – the header bidding unified auction.  Before header bidding, publishers would run an auction through a single exchange, and if the winning bid was rejected for some reason, run a subsequent auction through another exchange, in a waterfall process.  With header bidding, publishers run a unified auction across multiple exchanges. Because the exchanges conduct 2nd price auctions (the advertiser pays the price of the 2nd highest bidder), an advertiser can win one exchange’s auction but lose the unified auction to another exchange whose 2nd price was higher, yet still lower than the advertiser’s actual bid price.  So the more DSPs carrying the advertiser’s bid, the more exchanges will submit that bid, and the better the advertiser’s chances of winning header bidding unified auctions.

Here’s an example auction to illustrate:

DSP A: The bids are Advertiser A – $2.00, Advertiser B – $1.00, and Advertiser C – $0.50. Advertiser A wins and is submitted at $1.00 (the 2nd price, set by Advertiser B’s bid).

DSP B: The bids are Advertiser C – $1.50, Advertiser D – $1.25, and Advertiser E – $0.75. Advertiser C wins and is submitted at $1.25 (the 2nd price, set by Advertiser D’s bid).

The exchange then compares the two submissions – Advertiser A at $1.00 and Advertiser C at $1.25 – and, running its own 2nd price auction, declares Advertiser C the winner at a clearing price of $1.00.
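A minimal Python sketch of this two-level flow, with 2nd-price rules at both the DSP and exchange level:

```python
# Each DSP runs a 2nd-price internal auction and forwards its winner
# at the 2nd price; the exchange then runs its own 2nd-price auction
# over those submissions.

def second_price_auction(bids):
    """Return (winner, price) where price is the 2nd-highest bid."""
    ranked = sorted(bids, key=lambda b: b[1], reverse=True)
    return ranked[0][0], ranked[1][1]

dsp_a = [("Advertiser A", 2.00), ("Advertiser B", 1.00), ("Advertiser C", 0.50)]
dsp_b = [("Advertiser C", 1.50), ("Advertiser D", 1.25), ("Advertiser E", 0.75)]

submissions = [second_price_auction(dsp_a),  # ('Advertiser A', 1.00)
               second_price_auction(dsp_b)]  # ('Advertiser C', 1.25)

winner, clearing_price = second_price_auction(submissions)
print(winner, clearing_price)  # Advertiser C 1.0
```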

Second, DSPs can’t always bid on every impression on behalf of every advertiser. The infrastructure demands on DSPs to bid on every auction were considerable even before header bidding became ubiquitous.  With the mass adoption of header bidding, which duplicates the same auction across multiple exchanges simultaneously, those infrastructure demands are compounded further.

As a result, DSPs can’t always factor every advertiser line item in every internal auction.  There’s a lot of confusion around whether all DSPs can see and bid on all inventory. But that’s really the wrong way of thinking about it.  

In reality, even though DSPs have access to over 90% of the same inventory, they don’t necessarily use their sophisticated and resource-intensive algorithms to score and bid on every single impression they have access to.  They have to filter (partly for cost, partly for other performance factors). This process, of course, leads us back to the first reason advertisers gain scale from using multiple DSPs – you can lose the internal auction of one DSP because you weren’t included in the auction, and win the auction of another DSP, for the exact same impression.

So, there are several benefits to advertisers from using multiple DSPs – scale, audience data and competition for your business.  In fact, this has somewhat tempered the trend of in-housing digital advertising operations within brands. Supporting multiple DSPs is a lot of work for a brand, and is generally handled by trading desks, both agency trading desks and independent trading desks.  

However, the use of multiple DSPs is not without its challenges, as we’ll learn in Part #2 of this blog series.


How to Test Ad Creatives: Beginner’s Guide to Optimize Your Display Ad Tests

There are so many creative elements that digital marketers can test in their banner ads – from value propositions to taglines to images and styling – that it can be hard to know where to start.  

A/B testing your creatives takes a couple of weeks to reach proper statistical significance, so it’s often difficult to test every possible creative variation.  So, how should a digital marketer get started with A/B testing their banner ads?

Thunder has conducted hundreds of A/B tests, and distilled our learnings into the best practices for designing creative tests.  When followed, these tips can reduce the amount of time required to optimize your creative!

What is Test Significance?

Before we begin, we should address a commonly misunderstood concept: test significance. Marketers with no background in statistics often miss a critical fact: your tests may tell you less than you think.  

The reason is simple: an A/B test effectively surveys a small sample of people within our target population, and sometimes these samples don’t fully represent the true preferences of that population. This exposes marketers to faulty decisions based on false positives – tests in which the apparent winner is not the actual over-performer in the target population.  

Statisticians correct for these sampling errors with tests of “statistical significance,” and you should always ask your A/B test vendor how they control for sampling errors, including false positives.  If our goal is to learn from our creative testing, then we must ensure that our outcomes are statistically significant!
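For readers who want to sanity-check significance themselves, here is an illustrative two-proportion z-test on click-through rates; the impression and click counts are made up, and real testing vendors may apply additional corrections:

```python
import math

# Illustrative two-proportion z-test for an A/B creative test: is the
# CTR difference between Creative A and Creative B statistically
# significant? The numbers below are hypothetical.

def ab_significant(clicks_a, imps_a, clicks_b, imps_b, z_crit=1.96):
    """Two-sided test at ~95% confidence (z_crit = 1.96)."""
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    p_pool = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / imps_a + 1 / imps_b))
    z = (p_b - p_a) / se
    return abs(z) > z_crit

# 0.10% vs 0.14% CTR on 100k impressions each: significant at 95%?
print(ab_significant(100, 100_000, 140, 100_000))  # True
# A much smaller lift (0.10% vs 0.105%) is not:
print(ab_significant(100, 100_000, 105, 100_000))  # False
```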

#1 Test Hypotheses, Not Ads

The first question to ask when designing a creative A/B test is this: What hypothesis do we want to test?  Common hypotheses to test include:

  • Value Proposition (ex: 10% off vs. $25 off)
  • Image (ex. red car vs. blue car)
  • Tagline (ex. “Just do it” vs. “Do it”)
  • Call to Action Text (ex. “Subscribe now!” vs. “Learn more”)
  • Single Frame vs Multi-Frame

Each test should allow you to answer a question, for example: “do my customers like 10% off, or do they like $25 off?”

Many creative tests make the mistake of testing creatives that were created independently of each other, and thus vary in more than one way.  The reason why these tests are ineffective is that the marketer can’t distill the test into a lesson to be applied to future creative design. The only learning from such a test is that the brand should shift traffic to the winning ad.  But no lessons for the next new ad result from such a test.

For example, the A/B test below is comparing different layouts, images, value propositions and CTA text all at the same time.  Let’s say Creative B wins. What have we learned? Not much, other than in this particular set of ads, Creative B outperforms Creative A.  But we don’t know why, and thus have learned nothing that we can apply to future ads.

A/B Test with No Hypothesis

 

By comparison, the following two A/B tests have specific hypotheses – “do red cars work better than blue cars?”  At the end of this test, we will learn that either red SUV’s or blue sports cars outperform the other, and can apply this learning to future creatives.

Hypothesis-Driven A/B Test: Car Type Drives Performance

 

In this next A/B test, the hypothesis is that the value proposition in the tagline drives performance.  A common first A/B test for a brand is to compare feature-based vs value-based taglines.

Hypothesis-Driven A/B Test: Value Proposition Drives Performance

 

#2 Test Large Changes before Small Changes

Large changes should be tested first because they generate larger differences in performance, so you want these learnings to be uncovered and applied first.  

Larger changes – such as value proposition and image – are also more likely to perform differently across audience segments than small changes, like the background of the CTA button.  As such, by breaking out your A/B test results by audience segment, you can learn which taglines or images pop with particular segments, which can guide the design of a creative decision tree.

Large changes: Value Proposition, Brand Tagline, Image, Product Category, Price/Value vs Feature, Competitive Claims

Smaller changes: CTA text, CTA background, Styling and formatting, Multiframe vs Single Frame

Small changes are likely to drive small lift.  Only test this after testing bigger changes.

 

#3 Test multiple creative changes with Multivariate Test Design

Multivariate test designs (MVT) sound more complex than they are.  Multivariate tests simply allow you to run 2 or 3 A/B tests at the same time, on the same target population.  They are a statistically rigorous way to break Rule #1 above, which says you should test a single change at a time.  In an MVT design, you can test more than one change by creating a separate creative for every combination of changes, and then learning from each dimension of the test.  

For example, if, as below, you are testing 2 changes – message and image – each of which have 2 variations, you have a 2×2 MVT test and need to create 4 ads.

Multivariate test that tests Image and Message at the same time

 

When the test is done, aggregate test results along each dimension to evaluate the results of each A/B test independently. If you have enough sample, you can even evaluate all the individual creatives against each other to look for particular interactions of message and image that drive performance.
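As a sketch of both steps – laying out the 2×2 combinations, then aggregating along each dimension – the following uses made-up variant names and click/impression numbers:

```python
from collections import defaultdict
from itertools import product

# Lay out the 2x2 MVT: every combination of message and image
# becomes its own creative.
messages = ["10% off", "$25 off"]
images = ["red car", "blue car"]
creatives = [{"message": m, "image": i} for m, i in product(messages, images)]
print(len(creatives))  # 4 ads for a 2x2 test

# Hypothetical per-creative results after the test has run.
results = [
    {"message": "10% off", "image": "red car",  "imps": 50_000, "clicks": 60},
    {"message": "10% off", "image": "blue car", "imps": 50_000, "clicks": 55},
    {"message": "$25 off", "image": "red car",  "imps": 50_000, "clicks": 80},
    {"message": "$25 off", "image": "blue car", "imps": 50_000, "clicks": 75},
]

def marginal_ctr(results, dimension):
    """Aggregate clicks and impressions along one dimension to
    evaluate that A/B test independently of the other."""
    totals = defaultdict(lambda: [0, 0])  # variant -> [clicks, imps]
    for row in results:
        totals[row[dimension]][0] += row["clicks"]
        totals[row[dimension]][1] += row["imps"]
    return {v: clicks / imps for v, (clicks, imps) in totals.items()}

print(marginal_ctr(results, "message"))  # the message A/B test
print(marginal_ctr(results, "image"))    # the image A/B test
```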

To Summarize:

To drive more optimizations more quickly and generate demand and budget for more testing, follow these simple tips:

  1. Test hypotheses that generate learnings for subsequent creative design
  2. Test large changes before small changes
  3. Test one change at a time, or set up a multivariate test framework

Happy testing!


DoubleClick ID Alternatives for my DoubleClick Campaign Manager (DCM) logs?

tl;dr DoubleClick logs are used today by marketers for verification, attribution modeling, and other analysis beyond what is available in standard DCM dashboards.  

Log-based analytics require a device or user identifier, so DoubleClick’s removal of the DoubleClick ID represents a disruption of the status quo for log-based analytics solutions.  

Fortunately, DCM logs are not the only source of log-level data, or even the best.  Brands and agencies increasingly use tracking pixels from measurement vendors that have access to deterministic IDs as a replacement for ad server logs and to support more advanced analysis. Skip to the end if you are just looking for a list of recommendations.

How important are logs in digital advertising?

What Happened

Google’s announcement last Friday that DoubleClick is removing the DoubleClick ID from its logs resulted in panic in many corners of the digital advertising world.  What is the DoubleClick ID? For that matter, what are logs and why do people use them? Confused as to what the big deal is?

Here are the answers:

Beginning on May 25, DCM will stop populating the hashed UserID field (which stores the DoubleClick cookie ID and mobile device IDs) in DoubleClick Campaign Manager and DoubleClick Bid Manager (DBM) logs for impressions, clicks and website activities associated with users in the European Union. DoubleClick intends to apply these changes globally, and will announce timing for non-EU countries later this year.

What this Means for Advertisers

DoubleClick, like most adtech platforms, provides reporting dashboards to monitor performance KPIs.  While dashboards provide a good summary on performance, they can’t answer more granular questions that marketers want of their data.  That’s why many marketers ingest logs from their ad servers and DSPs. These logs are broken out into impression logs, click logs and site activity logs.

In order to perform custom analytics with these logs, the logs need to share a common identifier, so that the marketer can tie together recorded impressions from multiple sources (DCM, DSP, etc.) that belong to the same person, as well as clicks and site actions from that person.  

That common identifier is generally the cookie ID or, in the case of mobile app ads, mobile device ID.  DoubleClick currently has a field in all of their logs called UserID that stores a hashed version of the DoubleClick cookie ID or the mobile device ID tied to an impression, a click or a site action.

By removing this field from their logs, DoubleClick is effectively ending their support for ad server logs that are used for analytics, verification, measurement, or attribution modeling. Without the UserID field, marketers can no longer tie together impressions, clicks and site actions. For example, if you were previously filtering suspicious traffic based on frequency of engagement, you will no longer be able to do so (because each row becomes unique without a deduplicating identifier).
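To illustrate what the hashed UserID field enables, here is a minimal sketch of a per-person frequency count over impression log rows; the field names mirror the article’s description, not DCM’s exact log schema:

```python
from collections import Counter

# Counting ad frequency per user by grouping impression rows on the
# hashed user ID. Rows and IDs below are made up for illustration.

impression_log = [
    {"user_id": "a1b2", "placement": "homepage"},
    {"user_id": "a1b2", "placement": "article"},
    {"user_id": "c3d4", "placement": "homepage"},
    {"user_id": "",     "placement": "article"},  # ID field removed
]

# With IDs populated, frequency per person is a simple group-by:
frequency = Counter(row["user_id"] for row in impression_log if row["user_id"])
print(frequency)  # Counter({'a1b2': 2, 'c3d4': 1})

# Once the field is blank, every row is effectively unique, and
# frequency, reach and de-duplication can no longer be computed.
```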

The alternative proposed by Google is for marketers to pay to use the dashboard found in the Google Ads Data Hub.  The big issue with this approach is that the marketer has to trust Google to grade their own homework, making the marketing standard “trust, but verify” approach all but impossible.

As a result, brands and agencies using DoubleClick logs will no longer be able to independently:

  • Verify frequency by cookie or person
  • Count total ad exposure by person
  • Analyze true reach of media placements and campaigns
  • Compare reach and duplication by media placement and campaign
  • Attribute or de-duplicate conversions and clicks
  • Report on user conversion rates
  • Identify unique site traffic

What’s the Back Story

This announcement is part of two trends in the market – GDPR as a pretext for raising the walls of walled gardens, and the shift from logs to trackers to collect data for custom analytics.  

First, Google is saying that the upcoming EU law, GDPR, is forcing them to do this, something many pundits have questioned. Walled gardens are continuing to grow taller, and increasingly are leveraging privacy concerns as the pretext for doing so. Media sellers are also now further pushing their own measurement and attribution solutions in a bid to grade their own homework and prevent cross-platform comparison.  

Google has built a more full-featured measurement and attribution product that is currently in pilot with selected large brands known as Google Attribution 360, part of Google Ads Data Hub.  The announcement to remove the DoubleClick ID from logs is connected strategically to the broader release of Attribution 360 later this year. In fact, Google Ads Data Hub was even plugged in the email to agencies informing them of this change.

Second, this announcement is a reaction to the trend of measurement and attribution vendors disrupting the importance of ad server logs, making Google’s decision seemingly reasonable.

Marketers are increasingly relying on vendors to improve their accuracy through features that are not a part of the traditional ad server log. Specifically, savvy marketers want (a) cross-device graphs and (b) the ability to perform causal attribution modeling. Neither of these goals are unlocked by DCM logs today, leading to the emergence of an ecosystem of measurement platforms, each with their own trackers tied to a cross-device graph for data collection. Of course, one such vendor is Google, whose Attribution 360 offering has both of these advanced features.

As such, DoubleClick’s announcement simply represents a formal passing of the torch in responsibilities from the ad server to the measurement provider for those marketers who have already reduced their dependence on DCM logs.

Recommendations

Brands and agencies need to identify vendors who can provide tracking and measurement capabilities (full disclosure – Thunder Experience Cloud is one such vendor). This change needs to occur before current dashboards built off of DCM logs become disrupted.  

If you are evaluating vendors to address this change, we recommend the following as requirements:

  • Ability to source data from impression trackers rather than logs
  • Visibility across all ad exchanges (several vendors are classified as DMPs by Google and thus blocked from tracking impressions on AdX)
  • Can provide the following categories of metrics:
    • Frequency by person and total ad exposure by person
    • True reach and overlap of media placements and campaigns
    • Attribution using any configurable attribution model, both position-based and algorithmic
  • Media agnostic (be wary of solutions that grade their own homework)
  • Independent of any arbitrage of audience data segments that are evaluated by their measurement product

In addition, some “nice to haves” include:

  • Backed by a deterministic people-based graph
  • Can provide reliable logs with interoperable customer ID to other identified vendors within the brand’s adtech stack if requested

Digiday eBook: The ABC’s of People-Based Testing

Ad testing is meant to solve a very specific problem: Marketers are tired of launching their ads into a void, crossing their fingers and hoping for a boost in conversions. But, as Digiday reports in a new eBook, a number of widely used ad testing techniques dodge the question by failing to keep track of the individual on the other side of the screen.

As a result, people-based testing techniques are slowly but surely catching on, making it far easier for industry pros to identify real effectiveness and impact to put more media budget behind.  To learn more, check out Digiday’s Did Your Ad Work: The ABC’s of People-Based Testing.
