Webinar: Surviving the DoubleClick ID Loss

Alongside Adweek and Neustar, Thunder participated in a webinar on the upcoming DoubleClick ID loss in 2019 and how data-driven marketers can prepare for it. Learn what sort of advertiser should consider switching to an open ID and who is better off sticking with Google’s ID. Watch the full presentation and discussion below:

More on the Ads Data Hub series

  1. What is Google’s Ads Data Hub and is it right for me?
  2. How does Google’s Ads Data Hub Affect My Data Management Platform (DMP)?
  3. How does Google’s Ads Data Hub Affect My Analytics?

 

 

Continue Reading

Call for Advertising Industry to Protect Consumer Privacy, Provide Ad Transparency, and Secure Publisher Data

Thunder’s mission is to solve bad ads. To that end, Thunder joined the Coalition for Better Ads at the end of 2017. Now, Thunder is calling for the industry to go beyond just higher standards for creative. Thunder wants to put in place stronger protection for consumers and publishers while also providing greater transparency for advertisers.

Thunder recently had the honor of guest writing for the Association of National Advertisers (ANA) on what Cambridge Analytica taught the ad industry about what consumers expect and what publishers will need to do going forward. In the column, Thunder’s CEO also touches on how advertisers can work with these groups to ensure a better Internet where only effective, non-intrusive advertising rules. Here’s an excerpt:

Ultimately, everyone has to give a little something to get much more in return. Moving advertising to an anonymized ID tied to ad exposure will benefit the entire internet. Consumers will get better advertising and privacy, publishers will remove their liability and data leakage, and advertisers will gain transparency into their advertising.

 

 

Continue Reading

Neustar and Thunder join forces to deliver better customer experiences, powered by people-based intelligence

SAN FRANCISCO, Aug. 28, 2018 (GLOBE NEWSWIRE) — Thunder Experience Cloud, the leader in people-based ad serving, and Neustar Marketing Solutions (a division of Neustar, Inc.), the leading unified marketing intelligence platform for marketers, today announced the integration of Thunder’s people-based ad server with the Neustar Identity Data Management Platform (IDMP) and the Neustar MarketShare solution. The partnership will enable brands and agencies to quickly customize ad creatives to each customer, as well as measure performance for real-time optimization.

Thunder’s dynamic creative optimization (DCO) solution is a people-based, dynamic ad server that enables advertisers to factor in data signals such as CRM, weather, device type, time, media exposure, and now, audience data from large Data Management Platforms (DMPs) like Neustar.

Customers of Neustar and Thunder will be able to target creative messaging for individual, real people and audience segments across digital channels such as display, video and mobile. By synchronizing people IDs on the open web, they can achieve a higher level of personalization, consistency and accuracy, eliminating irrelevant or redundant advertising.

In addition, Thunder’s people-based Experience Measurement solution tracks the performance of ads from exposure to viewer to conversion to allow for a high level of optimization. From there, joint customers can quickly and easily activate media by person tracked on the open web through the Neustar IDMP. This people-based data set will also be integrated within the Neustar IDMP and the Neustar MarketShare solution.

“Advertisers must be able to have a clear view of how their marketing performs across channels – which creatives and messages are being shown to whom, when and where. Neustar is dedicated to giving the industry access to independent and accurate media exposure data, ensuring brands and agencies have the tools they need for personalized, measurable experiences at scale,” said Steve Silvers, General Manager, IDMP, Neustar.

“There is no excuse for a bad ad,” added Victor Wong, CEO of Thunder. “This integration is another step toward ensuring every ad meets the highest standard of relevancy, frequency and impact, ultimately creating a better customer experience.”

About Thunder:
Thunder solves bad ads. Thunder Experience Cloud enables enterprises to produce, personalize, and track their ads cross-channel to achieve the right consistency, relevancy and frequency. Consumers maintain privacy, publishers safeguard data, and brands gain transparency through Thunder for a better ad experience for all.  To learn more visit: https://www.makethunder.com/

About Neustar Marketing Solutions
Neustar, Inc. helps companies grow and guard their business in a connected world. Neustar Marketing Solutions provides the world’s largest brands with the marketing intelligence needed to drive more profitable programs and to create truly connected customer experiences. Through a portfolio of solutions underpinned by the Neustar OneID® system of trusted identity and through a privacy by design approach, we enhance brands’ CRM and digital audiences, enable advanced segmentation and modeling, and provide measurement and analytics all tied to a persistent identity key. Neustar’s position as a neutral information services provider, and as a partner to Google, Facebook and Amazon, provides marketers access to the most comprehensive customer intelligence and marketing analytics in the industry. More information is available at www.marketing.neustar.

Continue Reading

How does Google’s Ads Data Hub Affect My Analytics? (Part III of the Ads Data Hub Series)

Note: We provided an overview of Ads Data Hub in Part 1, and covered how Ads Data Hub will impact DMPs in Part 2. This post covers data lakes and how analytics will be impacted in the Ads Data Hub world.

Many large brands today have set up “data lakes” where all their data gets stored and made available to other applications for processing and analysis. These data lakes combined with business intelligence tools such as Tableau have created powerful analytics environments where brands can answer questions such as:

  • Which customer segment is responding most to my ads?
  • Which ads are driving the most customer lifetime value?
  • Do people who see my ads spend more with me?
  • Am I spending more money to reach my customers than they are spending with me?

Brands have staffed up with data analysts and data scientists to make sense of all this data, answer these important business questions, improve strategy, and validate what partners are telling them.

Data lakes ultimately rely on data flowing into them. Google’s recent changes around Ads Data Hub keep ad data locked within Google Cloud, where it cannot be combined with data outside Google’s controlled environment. As a result, marketing data lakes are under threat from these changes.

Data Lakes without Data

Consequently, brands with sensitive customer data are forced to decide whether to upload that data to Google and run it in a Google-controlled data lake, or to keep it off Google Cloud, where they’ll need to find other vendors to handle their tracking, analysis, and modeling needs.

If you want to maintain control of your own data lake and prevent it from drying up, talk to Thunder about our Experience Measurement solution.

More on the Ads Data Hub series

Continue Reading

How does Google’s Ads Data Hub Affect My Data Management Platform (DMP)? (Part II of the Ads Data Hub series)

Note: We provided an overview of Ads Data Hub in Part 1. In this post, we look at how Ads Data Hub will impact DMPs in general.

Data management platforms (DMPs) power the marketer’s ability to track, segment, and target audiences across programmatic media. Leading DMP solutions include Salesforce DMP (previously known as Krux), Neustar IDMP, Oracle BlueKai, and Adobe Audience Manager.

If you weren’t paying close attention, you may not realize that the changes Google has announced have blown a hole in your DMP.

 

Two major capabilities are affected by the pending DoubleClick ID removal from logs and push toward using Google’s Ads Data Hub: (1) segmentation and (2) frequency capping.

First, marketers currently use DMPs to create new audience segments based on media exposure. A DMP can keep track of media exposure if its own tags/pixels can run with the ad, but on much publisher inventory, such as Google’s Ad Exchange, DMPs are banned from running their code. These publishers are worried about data leakage, which happens when a DMP pixels proprietary audiences on a publisher’s media (such as sports lovers on ESPN.com) and then buys those users elsewhere without paying the publisher.

Historically, the DMP could still get a record of media exposure from the ad server, such as DoubleClick, which would share data on who saw the ads running. Using DoubleClick’s data, the marketer could then still segment audiences within the DMP based on who saw the ad, who converted, etc.

Now that Google has discontinued the sharing of logs with IDs, DMPs can no longer see media exposure either on inventory where they are explicitly banned or on inventory where they are allowed to operate but where the advertiser uses Google’s DoubleClick ad server. If DMPs are to continue to be useful to the marketer, they will need a new source of data.

Second, some marketers use DMPs to create frequency caps across media platforms. By getting their pixel/code to run with an ad, or by ingesting ad serving logs, DMPs can count impressions served to a particular user ID and then signal platforms like DSPs to stop buying a user after a certain number of exposures. However, without log-level data, DMPs will not be able to count frequency for inventory on which they are banned, leading to less accurate frequency measurement and therefore less precise frequency capping.
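To make that mechanic concrete, here is a minimal sketch (Python, with illustrative field names rather than any real log schema) of the per-user counting a DMP performs once it has log-level impression data carrying a user ID; without that ID, there is nothing to key the counts on.

```python
# Minimal sketch (illustrative field names, not a real log schema): counting
# impressions per user ID from log-level data and flagging users at the cap.
from collections import Counter

FREQUENCY_CAP = 10  # assumed cap for the example

def build_suppression_segment(impression_logs, campaign_id):
    """Return the set of user IDs that have reached the frequency cap."""
    counts = Counter(
        row["user_id"]
        for row in impression_logs
        if row["campaign_id"] == campaign_id and row.get("user_id")
    )
    return {user for user, n in counts.items() if n >= FREQUENCY_CAP}

# Toy usage:
logs = [
    {"user_id": "u1", "campaign_id": "c42"},
    {"user_id": "u1", "campaign_id": "c42"},
    {"user_id": "u2", "campaign_id": "c42"},
]
print(build_suppression_segment(logs, "c42"))  # empty set until a user hits the cap
```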

How do I keep my DMP running at full performance?

Marketers who have invested in a DMP and want to keep its capabilities at full power would be advised to either buy more digital media that allows DMP tracking or find an alternative ad tracking or serving solution that can transfer log files to the DMP. A combination of these two strategies would allow a brand to continue using its DMP to its fullest by giving the DMP the complete picture of ad exposure tied to each person.

If you want to add an independent ad tracker to your DoubleClick stack or to keep powering your DMP with data, talk to Thunder about our Experience Measurement solution. 

More on the Ads Data Hub series

Continue Reading

What is Google’s Ads Data Hub and is it right for me? (Part I of the Ads Data Hub series)

What happened to DoubleClick?

Most marketers today use DoubleClick Campaign Manager (DCM) as their primary ad server for delivering ads and tracking ad exposure through to conversion. The largest and most sophisticated advertisers have relied on DCM data to do analytics, attribution, and media activation.

These advertisers would “data transfer” log-level data (the raw data for each impression rather than the aggregate data that hides user-level and impression-level information) to their data management platform, data lakes, and vendors that do analytics or attribution modeling.

 

In April, Google announced it would no longer allow data transfer of DoubleClick log-level data with IDs. This decision effectively destroyed most of the value of the log-level data exported from DCM because advertisers would no longer know who saw their ads, only how often an ad was served in total. DoubleClick could be used only to verify that the total number of impressions bought was actually delivered, but all the other powerful use cases, like analytics, attribution, and data management, would no longer be possible with DoubleClick data.

In June, Google announced it was sunsetting DoubleClick as a brand and folding everything under Google’s brand.

R.I.P. DoubleClick.

Enter Google Ads Data Hub

At the same time, Google pushed forward its own solution to this new problem for marketers — Ads Data Hub. This product is essentially a data warehouse where ad exposure data is housed and can be connected to Google’s own solutions for attribution, analytics, and data management.

One new benefit is access to the Google ID, which is a powerful cross-device ID that uses data from users logging into Google services like Android, Maps, YouTube, etc. Previously, DoubleClick was only tracking and sharing a cookie-based DoubleClick ID, which neither connected cross-device ad exposure and conversion nor reconciled multiple IDs to the same person. For many advertisers doing log-level data analysis and activation, this new ID is a big upgrade because it provides more accurate measurement.

One major downside is that this data cannot leave Ads Data Hub. Consequently, you cannot do independent verification of Google’s attribution or analytics modeling. If Google says Google performs better than its competitors, you will have to trust Google at its word. In the past, you would at least have the raw data to apply your own attribution model if you so wanted, or to re-run Google’s calculations to verify its accuracy (since big companies are not infallible).

By extension, outside ad tech providers (such as DMPs, multi-touch attribution vendors, etc.) who may be best-in-class will have a much harder time working with Google solutions. As a result, you will be dependent on Google.

To do matching of off-Google data such as other ad exposure or conversions that happen offline, Ads Data Hub now requires you to upload and store your customer data in the Google Cloud. In that environment, it can be matched with Google’s ID and tracking so you can build a Google-powered point of view of the consumer journey.

In a way, Ads Data Hub is for those who trust but don’t need to verify. It is a good solution for advertisers who today spend the vast majority (75%+) of their ad budget with Google, because if their advertising isn’t working, Google is ultimately accountable for the results no matter what it says about performance. You wouldn’t need to verify the calculations to know whether your ad budget is wasted.

What else can I do?

Another solution is to add independent ad serving and/or tracking in addition to, or in place of, Google. By doing so, you can still generate log-level data for Google-sold media, but it will not be tied to a Google ID. Instead, you will be using your own ID or a vendor’s cross-device ID to understand who saw what ad when, where, and how often.

This approach is best suited for large advertisers who want best in class ad tech solutions to work together, and who cannot spend all their money on a single media platform to achieve their desired results. Typically brands large enough to afford data lakes, independent attribution providers, and data management platforms are the ones who will have the most to lose by moving to Ads Data Hub.

If you already realize you want to take a trust, but verify approach in your ads, talk to Thunder about our Experience Measurement solution. 

More on the Ads Data Hub series

Continue Reading

What CMOs Say About Ad Experiences

Marc Pritchard famously said “It is time for marketers and tech companies to solve the problem of annoying ads and make the ad experience better for consumers.”

What do his peers think? The CMO Club has partnered with Thunder to publish a “Guide to Solving Bad Ad Experiences,” which includes a survey of over 80 CMOs and an interview with the CMO of Farmers Insurance on the impact of bad ads and how people-based marketing can fix them.

Some key findings include:
  • 74% of CMOs consider brand loyalty as most negatively affected by bad ads
  • 55%+ of CMOs consider frequency and relevancy as the top factors in bad ads
  • 78% of CMOs consider it “inexcusable” to serve ads for products the customer already bought from them
  • 71% of CMOs consider frequency capping important for ad experience but 60% aren’t confident in even their frequency counting!

Click here to download the full research report.

Continue Reading

What is a CMP?

CMP is the hot adtech acronym of 2018. There are actually two meanings to this term: (1) Creative Management Platform and (2) Consent Management Platform. Here’s an overview of both these products and why you may need one.

Creative Management Platform

Introduced in 2016 by Thunder, the CMP acronym originally stood for “creative management platform,” a tool for producing and trafficking ad creatives. Rather than a general-purpose creative editor like Adobe Photoshop or Animate, which are applications built for a single designer working alone, a CMP is meant for an enterprise that has a scale problem with creative.

Many brands, agencies, and publishers increasingly need to build ads in different sizes and versions for different audiences and media formats. Consequently, creative production demands have grown exponentially, while most creative organizations can only scale linearly by adding more designers and programmers. Because traditional creative editors were built for highly advanced users, a creative bottleneck formed as demand went up and there was not enough talent or payroll to fill the void.

Creative Management Platforms radically simplified ad production by providing easier interfaces and automating production tasks like re-sizing. Forrester began recognizing CMPs in 2017 as part of its broader creative ad tech research, which coincided with the rise in enterprise demand for new marketing creative technologies.

Consent Management Platform

Introduced in 2018, the new CMP acronym stands for “consent management platform.” The European privacy law known as GDPR requires publishers and marketers to obtain explicit consent for certain tracking and targeting data. As a result, a new category of tools emerged specifically to help these enterprises collect and keep track of user consent.

The CMP then feeds that consent information tied to an ID to other selected partners in the digital advertising supply chain. As a result, every party in a publisher’s supply chain understands what data they may use and for what.

Which CMP do I need?

It depends on whether you’re looking to solve a creative problem or a data privacy problem. Talk to Thunder if you need help with your data-driven creative or digital creative production problems. Check out these consent management vendors if you’re looking to solve a privacy preference problem.

Continue Reading

Why Marketers Need More Than One DSP – Understanding The Risks

The average advertiser uses 3 DSPs.  Part #1 of this series examined the reasons digital advertisers make use of multiple DSPs in their programmatic bidding.  Of course, the use of multiple DSPs also creates its own challenges. So in Part #2 below, we look at the challenges created around frequency and bidding against oneself by using multiple DSPs, and how the smart marketer overcomes these challenges.

Don’t multiple DSPs just bid against each other for the same inventory?

When advertisers think of using multiple DSPs to bid on inventory, the most common concern that comes to mind is that the inventory between the DSPs will overlap, and the DSPs will be bidding against each other.  In other words, the advertiser will be bidding against itself, thus inflating its bids and artificially driving up media costs.

And in a world of 2nd price auctions, marketers can see why this is a scary prospect.  We discussed in detail how bidding worked in Part #1 of this series, but here’s a brief summary:

First, DSPs conduct internal auctions and then send the winning bid to an exchange or SSP for a subsequent auction.  These DSP internal auctions are conducted on a 2nd-price basis, which means that an advertiser bidding $25 for an impression will really only bid $5 in the SSP auction if the 2nd-highest bid in the DSP’s internal auction was $5.

What does this mean if the same advertiser uses multiple DSPs? Well, if the 2nd-highest bid for the same impression in the advertiser’s other DSP was $10, the SSP is now choosing between bids of $10 and $5 from the same advertiser.  And if the 3rd price in that other DSP was $4, the advertiser would have cleared the SSP auction at $4 had it only used the first DSP.
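To see that arithmetic play out, here is a minimal sketch of the scenario above, using hypothetical advertiser names and the same dollar figures; it simulates the 2nd-price internal auctions and the SSP’s 2nd-price auction over what the DSPs submit.

```python
# Hypothetical sketch of the scenario above: the same advertiser bids through
# two DSPs, each DSP runs an internal 2nd-price auction, and the SSP then
# runs its own 2nd-price auction over what the DSPs submit.
# All names and numbers are illustrative.

def dsp_internal_auction(bids):
    """Return (winner, price) where price is the 2nd-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    second_price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, second_price

def ssp_second_price_auction(submissions):
    """submissions: list of (advertiser, submitted_price). Same 2nd-price rule."""
    ranked = sorted(submissions, key=lambda s: s[1], reverse=True)
    winner = ranked[0][0]
    clearing = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, clearing

dsp1 = {"brand_x": 25.0, "rival_a": 5.0}
dsp2 = {"brand_x": 25.0, "rival_b": 10.0, "rival_c": 4.0}

# Using both DSPs: brand_x is submitted twice, at $5 and $10.
both = [dsp_internal_auction(dsp1), dsp_internal_auction(dsp2)]
print(ssp_second_price_auction(both))        # ('brand_x', 5.0)

# Using only DSP 1: DSP 2 runs without brand_x and submits rival_b at $4.
only_dsp1 = [dsp_internal_auction(dsp1),
             dsp_internal_auction({"rival_b": 10.0, "rival_c": 4.0})]
print(ssp_second_price_auction(only_dsp1))   # ('brand_x', 4.0)
```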

This scenario is certainly possible, but marketers have increasingly overlooked this concern for two reasons. These reasons both stem from the rise of header bidding.

First, for every bid that is inflated by the use of multiple DSPs, there are as many bids that a single-DSP marketer will simply lose in a header bidding world. As was explained in Part #1 of this series, precisely because SSPs conduct 2nd-price auctions, an advertiser can win an exchange’s auction but lose the unified auction to an exchange whose 2nd price was higher than the advertiser’s forwarded bid yet still lower than the advertiser’s actual bid.  So, if the advertiser’s main goal is to reach its audience, it will want to use more DSPs (and win more internal auctions). This inevitably translates to more exchanges submitting the advertiser’s winning 2nd-price bid to the header bidding unified auction, and more wins overall.

Is this bidding against oneself?  Perhaps, but with header bidding, this is often required to simply win enough auctions to achieve desired scale.

Second, header bidding is driving a seismic shift in real-time bidding from 2nd-price to 1st-price auctions within SSPs and exchanges, in order to eliminate this scenario in the first place.  Since header bidding unified auctions select the highest price submitted by participating SSPs and exchanges, those SSPs and exchanges are incentivized to maximize their chance of winning, which means submitting the bid with the highest price. In practice, they run 1st-price auctions and submit the winner at its 1st-price bid rather than the 2nd price.  Many SSPs, such as PubMatic and OpenX, are now following this practice for precisely this reason. Once SSPs and exchanges use 1st-price auctions, the risk of inflating one’s bid goes away, as long as an advertiser bids the same amount for the same category of inventory across its multiple DSPs.

How to control frequency with multiple DSPs?

A more serious challenge raised by the use of multiple DSPs than inflated bid prices is the loss of control over ad frequency.  This challenge remains largely underserved, even as demand for solutions continues to grow among large advertisers.

Managing the frequency of ads served to individuals matters for two main reasons: (i) to reduce wasted impressions, and (ii) to avoid burnout and negative brand associations from over-exposure.  We have all seen bad ad experiences where a brand bombards us with the same ads. So when using a single DSP, advertisers often follow the best practices of capping frequency by day (otherwise known as pacing) and by month, campaign duration, or user lifetime (to limit overall exposure to a brand’s advertising).

However, when using multiple DSPs, frequency capping becomes impossible to accomplish at the DSP level (since the DSPs don’t actually talk to each other). What solutions are there?

One solution is to control frequency capping on the ad server.  DoubleClick Campaign Manager supports frequency capping, but rather than suppressing media buying (as a DSP would via frequency capping), DCM serves a blank ad. This solution is pretty unsatisfying to the advertiser, as it results in significant wasted media spend.

DMPs, such as Adobe Audience Manager and Oracle BlueKai, claim to offer cross-DSP frequency capping, by tracking ad impressions and then suppressing users via existing integrations with DSPs.  It’s not uncommon to use a DMP to create suppression audiences, so this seems like a natural extension of this capability. Unfortunately, Google blocks DMPs from tracking impressions on GDN inventory.  Currently 14 DMPs are blocked by Google from tracking impressions in GDN. Since Google touches a significant portion of display inventory, frequency capping becomes much less useful without cooperation from the Google ecosystem.
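For illustration only, the cross-DSP frequency check described above boils down to counting impressions per user across log sources and exporting the over-cap users as a suppression segment; the sketch below uses made-up DSP names and an assumed cap.

```python
# Illustrative sketch: merge impression counts per user across DSP log sources
# and export users past an assumed cap as a suppression segment.
from collections import Counter

def users_over_cap(dsp_logs, cap=8):
    total = Counter()
    for source, rows in dsp_logs.items():   # source kept only for readability
        for row in rows:
            total[row["user_id"]] += 1
    return [uid for uid, n in total.items() if n >= cap]

dsp_logs = {
    "dsp_alpha": [{"user_id": "u1"}] * 5,
    "dsp_beta":  [{"user_id": "u1"}] * 4 + [{"user_id": "u2"}] * 2,
}
print(users_over_cap(dsp_logs))  # ['u1'] -> pushed to each DSP as a suppression audience
```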

We expect the use of multiple DSPs to be a growing trend for major advertisers that require scale given the evolving mechanics of real-time bidding auctions. Spurred on by this new trend, these same advertisers who need scale will also be the ones most concerned with solving for control over frequency. Stay tuned for solutions that emerge in the marketplace.

Continue Reading

Why Marketers Need More Than One DSP – Understanding Demand Side Platforms

The average advertiser uses 3 DSPs.  There are strong reasons for digital advertisers to make use of multiple DSPs in their programmatic bidding – if you have wondered why advertisers use multiple DSPs, then Part #1 of this explainer is for you.  

Of course, the use of multiple DSPs also creates its own challenges. So in Part #2, we will look at the challenges created around frequency and bidding against oneself by using multiple DSPs, and how the smart marketer overcomes these challenges.

Why do Marketers use Multiple DSPs?

The primary benefits to advertisers of using multiple DSPs are: (i) differentiated DSP features which are needed to execute each campaign, (ii) accessing DSP-specific audience data, and (iii) scaling out the reach of campaigns. Let’s deep dive into each reason.

Benefit #1: Competition among DSPs around Features and Take Rates

DSPs are differentiated in many ways.  One key area is their take rates – the percentage of media spend they charge advertisers.  Another is that DSPs vary in ease of use and level of support. For example, AppNexus has lower take rates than others, but also offers less hands-on support and a powerful but complicated API.  The Trade Desk and MediaMath, conversely, are well known for their customer education and easier-to-use interface. The targeting options they offer and the reporting and analytics available for media insights also vary between each platform.  

By employing multiple DSPs, trading desks are also able to pressure the DSPs to add features and lower take rates by moving spend across DSPs easily.  Most recently, some DSPs have agreed to increased transparency by revealing the fees charged by the exchanges and SSPs that provide the ad inventory. This is a great example of DSPs accommodating customer demands in a competitive environment.

Benefit #2: Audience Data

Many DSPs have unique sources of audience data.  DoubleClick Bid Manager, of course, brings data on users of Google Display Network sites to make targeting options available for AdX sites (most of AdX inventory is GDN) that are not available in other DSPs.  Amazon Audience Platform brings audience data unique to Amazon. MediaMath has a 2nd party data co-op called Helix that benefits many advertisers. Some DSPs, like AppNexus and The Trade Desk, offer IP-range targeting.  

Marketers may be running different strategies with various campaigns, and leveraging multiple targeting options across DSPs empowers them to do so.

Benefit #3: Scale

Ultimately, the primary driver for using multiple DSPs may be the challenge of achieving scale in large budget campaigns with only a single DSP.  A trading desk may simply be unable to spend the budget for a target audience in a large campaign without using additional DSPs.

Why is that?  It’s complicated.  But the explanation below breaks it down.

First, bidding on multiple DSPs increases the odds of winning auctions.  

How?  There are a couple of reasons:

Each DSP conducts its own internal auction before submitting a winning bid to an exchange, which then conducts its own auction to decide which DSP wins.  An advertiser can lose an internal auction in one DSP (for example, DoubleClick Bid Manager) and win an auction in another DSP (say, AppNexus) for the same ad impression.  That’s because DSPs select winning bids not on bid price alone, but also on the profile of the user and performance factors specific to each advertiser (whether the viewer is likely to click on the ad).  As such, one strategy some trading desks pursue to maximize their chances of winning is to intentionally add a smaller DSP to the mix, because they will face less competition in winning that DSP’s internal auction.

But even once an advertiser wins the DSP auction and the exchange auction, there is increasingly another auction that comes next that they might still not win: the header bidding unified auction.  Before header bidding, publishers would run an auction through a single exchange and, if the winning bid was rejected for some reason, run a subsequent auction through another exchange, all in a waterfall process.  With header bidding, publishers run a unified auction across multiple exchanges. Because the exchanges conduct 2nd-price auctions (the advertiser pays the price of the 2nd-highest bidder), an advertiser could win an exchange’s auction but lose the unified auction to an exchange whose 2nd price was higher than the advertiser’s forwarded bid yet still lower than the advertiser’s actual bid.  So, the more DSPs that carry the advertiser’s bid, the more exchanges will have the advertiser’s winning bid, and the better the chance the advertiser wins the header bidding unified auction.

Here’s an example auction to illustrate:

DSP A: The bids are Advertiser A at $2.00, Advertiser B at $1.00, and Advertiser C at $0.50. The winner is Advertiser A at a price of $1.00 (the 2nd price, which was Advertiser B’s bid).

DSP B: The bids are Advertiser C at $1.50, Advertiser D at $1.25, and Advertiser E at $0.75. The winner is Advertiser C at a price of $1.25 (the 2nd price, which was Advertiser D’s bid).

The exchange would then compare the two submissions, Advertiser A at $1.00 and Advertiser C at $1.25, and Advertiser C would win with its $1.25 bid.
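The same example can be expressed as a short sketch (hypothetical code mirroring the numbers above): each DSP runs a 2nd-price internal auction, forwards its winner at the 2nd price, and the exchange picks the highest forwarded bid.

```python
# Hypothetical sketch of the worked example above. Names and prices mirror
# the example; nothing here is a real platform's API.

def internal_auction(bids):
    """2nd-price auction: returns (winning advertiser, 2nd-highest price)."""
    ranked = sorted(bids, key=lambda b: b[1], reverse=True)
    return ranked[0][0], ranked[1][1]

dsp_a = [("Advertiser A", 2.00), ("Advertiser B", 1.00), ("Advertiser C", 0.50)]
dsp_b = [("Advertiser C", 1.50), ("Advertiser D", 1.25), ("Advertiser E", 0.75)]

submissions = [internal_auction(dsp_a), internal_auction(dsp_b)]
# submissions == [("Advertiser A", 1.00), ("Advertiser C", 1.25)]

winner, bid = max(submissions, key=lambda s: s[1])
print(winner, bid)  # Advertiser C 1.25
# What the winner ultimately pays depends on whether the exchange (or the
# header-bidding unified auction above it) clears at 1st or 2nd price.
```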

Second, DSPs can’t always bid on every impression on behalf of every advertiser. The infrastructure demands on DSPs to bid on every auction were considerable even before header bidding became ubiquitous.  With the mass adoption of header bidding, a process which duplicates the auction across multiple exchanges at the same time, DSPs’ infrastructure demands are compounded further.

As a result, DSPs can’t always factor every advertiser line item in every internal auction.  There’s a lot of confusion around whether all DSPs can see and bid on all inventory. But that’s really the wrong way of thinking about it.  

In reality, even though DSPs have access to over 90% of the same inventory, they don’t necessarily use their sophisticated and resource-intensive algorithms to score and bid on every single impression they have access to.  They have to filter (partly for cost, partly for other performance factors). This process, of course, leads us back to the first reason advertisers gain scale from using multiple DSPs – you can lose the internal auction of one DSP because you weren’t included in the auction, and win the auction of another DSP, for the exact same impression.

So, there are several benefits to advertisers from using multiple DSPs – scale, audience data and competition for your business.  In fact, this trend has somewhat altered the trend of in-housing digital advertising operations within brands. Supporting multiple DSPs would be a lot of work for a brand, and is generally handled by trading desks, both agency trading desks and independent trading desks.

However, the use of multiple DSPs is not without its challenges, as we’ll learn in Part #2 of this blog series.

Continue Reading

How to Test Ad Creatives: Beginner’s Guide to Optimize Your Display Ad Tests

There are so many creative elements that digital marketers can test in their banner ads – from value propositions to taglines to images and styling – that it can be hard to know where to start.  

A/B testing your creatives takes a couple of weeks to reach proper statistical significance, so it’s often difficult to test every possible creative variation.  So, how should a digital marketer get started with A/B testing their banner ads?

Thunder has conducted hundreds of A/B tests and distilled the learnings into best practices for designing creative tests.  When followed, these tips can reduce the amount of time required to optimize your creative!

What is Test Significance?

Before we begin, we should address a commonly misunderstood concept: test significance. Marketers with no background in statistics often miss a critical fact: your tests may tell you less than you think.  

The reason is simple: our testing approach basically surveys the opinions of a smaller group of people within our target population, and sometimes, these small groups don’t completely represent the true opinion of our target population. This can expose marketers to faulty decisions that are based on false positives, that is, tests in which the apparent winner is not the actual over-performer in the target population.  

Statisticians address these sampling errors with tests of “statistical significance,” and you should always ask your A/B test vendor how they control for sampling error, including false positives.  If our goal is to learn from our creative testing, then we must ensure that our outcomes are statistically significant!
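As an illustration of what a significance check looks like in practice, here is a minimal sketch of a two-proportion z-test, one common way to judge whether a difference in click-through rate between two creatives is larger than sampling noise; the impression and click counts are made up.

```python
# Minimal sketch (standard library only) of a two-proportion z-test comparing
# the click-through rates of two creatives. The counts below are illustrative.
from math import sqrt, erf

def two_proportion_z_test(clicks_a, imps_a, clicks_b, imps_b):
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    p_pool = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / imps_a + 1 / imps_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = two_proportion_z_test(clicks_a=520, imps_a=100_000,
                             clicks_b=470, imps_b=100_000)
print(round(z, 2), round(p, 3))  # call the winner only if p is below your threshold (e.g. 0.05)
```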

#1 Test Hypotheses, Not Ads

The first question to ask when designing a creative A/B test is this: What hypothesis do we want to test?  Common hypotheses to test include:

  • Value Proposition (ex: 10% off vs. $25 off)
  • Image (ex. red car vs. blue car)
  • Tagline (ex. “Just do it” vs. “Do it”)
  • Call to Action Text (ex. “Subscribe now!” vs. “Learn more”)
  • Single Frame vs Multi-Frame

Each test should allow you to answer a question, for example: “do my customers like 10% off, or do they like $25 off?”

Many creative tests make the mistake of testing creatives that were created independently of each other, and thus vary in more than one way.  The reason why these tests are ineffective is that the marketer can’t distill the test into a lesson to be applied to future creative design. The only learning from such a test is that the brand should shift traffic to the winning ad.  But no lessons for the next new ad result from such a test.

For example, the A/B test below is comparing different layouts, images, value propositions and CTA text all at the same time.  Let’s say Creative B wins. What have we learned? Not much, other than in this particular set of ads, Creative B outperforms Creative A.  But we don’t know why, and thus have learned nothing that we can apply to future ads.

A/B Test with No Hypothesis

 

By comparison, the following two A/B tests have specific hypotheses – “do red cars work better than blue cars?”  At the end of this test, we will learn whether red SUVs or blue sports cars outperform the other, and can apply this learning to future creatives.

Hypothesis-Driven A/B Test: Car Type Drives Performance

 

In this next A/B test, the hypothesis is that the value proposition in the tagline drives performance.  A common first A/B test for a brand is to compare feature-based vs value-based taglines.

Hypothesis-Driven A/B Test: Value Proposition Drives Performance

 

#2 Test Large Changes before Small Changes

Large changes should be tested first because they generate larger differences in performance, so you want these learnings to be uncovered and applied first.  

Larger changes – such as value proposition and image – are also more likely to perform differently for different audience segments than small changes, like the background of the CTA button.  As such, by breaking out your A/B test results by audience segment, you can learn which taglines or images pop with particular segments, which can guide the design of a creative decision tree.

Large changes: Value Proposition, Brand Tagline, Image, Product Category, Price/Value vs Feature, Competitive Claims

Smaller changes: CTA text, CTA background, Styling and formatting, Multiframe vs Single Frame

Small changes are likely to drive small lift.  Only test these after testing bigger changes.

 

#3 Test multiple creative changes with Multivariate Test Design

Multivariate test designs (MVT) sound more complex than they are.  Multivariate tests simply allow you to run 2 or 3 A/B tests at the same time, using the same target population.  They are a statistically rigorous way to break the rule above that says you should test a single change at a time.  With an MVT design, you can test more than one change by creating a separate creative for every combination of changes, and then learn from those results.

For example, if, as below, you are testing 2 changes – message and image – each of which has 2 variations, you have a 2×2 MVT test and need to create 4 ads.

Multivariate test that tests Image and Message at the same time

 

When the test is done, aggregate test results along each dimension to evaluate the results of each A/B test independently. If you have enough sample, you can even evaluate all the individual creatives against each other to look for particular interactions of message and image that drive performance.
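Here is a minimal sketch of that aggregation step for a 2×2 test, with made-up cell counts; collapsing the grid along one dimension pools the other dimension’s variations so each A/B comparison can be read independently.

```python
# Minimal sketch of aggregating a 2x2 multivariate test along each dimension.
# The four cells and their (impressions, clicks) counts are illustrative.
from collections import defaultdict

cells = {
    ("msg_value", "img_red"):    (50_000, 260),
    ("msg_value", "img_blue"):   (50_000, 240),
    ("msg_feature", "img_red"):  (50_000, 300),
    ("msg_feature", "img_blue"): (50_000, 280),
}

def aggregate(cells, dimension):
    """Collapse the grid along one dimension (0 = message, 1 = image)."""
    totals = defaultdict(lambda: [0, 0])
    for key, (imps, clicks) in cells.items():
        totals[key[dimension]][0] += imps
        totals[key[dimension]][1] += clicks
    return {k: clicks / imps for k, (imps, clicks) in totals.items()}

print(aggregate(cells, 0))  # CTR by message, pooling both images
print(aggregate(cells, 1))  # CTR by image, pooling both messages
```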

To Summarize:

To drive more optimizations more quickly and generate demand and budget for more testing, follow these simple tips:

  1. Test hypotheses that generate learnings for subsequent creative design
  2. Test large changes before small changes
  3. Test one change at a time, or set up a multivariate test framework

Happy testing!

Continue Reading

DoubleClick ID Alternatives for my DoubleClick Campaign Manager (DCM) logs?

tl;dr DoubleClick logs are used today by marketers for verification, attribution modeling, and other analysis beyond what is available in standard DCM dashboards.  

Log-based analytics require a device or user identifier, so DoubleClick’s removal of the DoubleClick ID represents a disruption of the status quo for log-based analytics solutions.  

Fortunately, DCM logs are not the only source of log-level data, or even the best.  Brands and agencies increasingly use tracking pixels from measurement vendors that have access to deterministic IDs as a replacement for ad server logs and to support more advanced analysis. Skip to the end if you are just looking for a list of recommendations.

How important are logs in digital advertising?

What Happened

Google’s announcement last Friday that DoubleClick is removing the DoubleClick ID from its logs resulted in panic in many corners of the digital advertising world.  What is the DoubleClick ID? For that matter, what are logs and why do people use them? Confused as to what the big deal is?

Here are the answers:

Beginning on May 25, DCM will stop populating the hashed UserID field (which stores the DoubleClick cookie ID and mobile device IDs) in DoubleClick Campaign Manager and DoubleClick Bid Manager (DBM) logs for impressions, clicks and website activities associated with users in the European Union. DoubleClick intends to apply these changes globally, and will announce timing for non-EU countries later this year.

What this Means for Advertisers

DoubleClick, like most adtech platforms, provides reporting dashboards to monitor performance KPIs.  While dashboards provide a good summary of performance, they can’t answer the more granular questions that marketers want to ask of their data.  That’s why many marketers ingest logs from their ad servers and DSPs. These logs are broken out into impression logs, click logs and site activity logs.

In order to perform custom analytics with these logs, the logs need to share a common identifier, so that the marketer can tie together recorded impressions from multiple sources (DCM, DSP, etc.) that belong to the same person, as well as clicks and site actions from that person.  

That common identifier is generally the cookie ID or, in the case of mobile app ads, mobile device ID.  DoubleClick currently has a field in all of their logs called UserID that stores a hashed version of the DoubleClick cookie ID or the mobile device ID tied to an impression, a click or a site action.

By removing this field from its logs, DoubleClick is effectively ending support for ad server logs used for analytics, verification, measurement, or attribution modeling. Without the UserID field, marketers can no longer tie together impressions, clicks and site actions. For example, if you were previously filtering suspicious traffic based on frequency of engagement, you will no longer be able to do so (because each row becomes unique without a deduplicating identifier).
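For illustration, the kind of join the UserID field makes possible looks roughly like the sketch below (field names are illustrative, not DCM’s actual transfer-file layout); once the ID is removed, every row stands alone and nothing can be tied back to a person.

```python
# Hypothetical sketch: tying impressions and conversions to the same person
# so per-user frequency and conversion paths can be computed.
# Log schemas here are illustrative, not DCM's actual transfer-file layout.
from collections import defaultdict

def build_user_paths(impressions, conversions):
    paths = defaultdict(lambda: {"impressions": 0, "converted": False})
    for row in impressions:
        uid = row.get("user_id")
        if uid:  # with the ID field blanked, uid is empty and nothing deduplicates
            paths[uid]["impressions"] += 1
    for row in conversions:
        uid = row.get("user_id")
        if uid:
            paths[uid]["converted"] = True
    return dict(paths)

imps = [{"user_id": "abc"}, {"user_id": "abc"}, {"user_id": ""}]
convs = [{"user_id": "abc"}]
print(build_user_paths(imps, convs))  # {'abc': {'impressions': 2, 'converted': True}}
```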

The alternative proposed by Google is for marketers to pay to use the dashboard found in the Google Ads Data Hub.  The big issue with this approach is that the marketer has to trust Google to grade their own homework, making the marketing standard “trust, but verify” approach all but impossible.

As a result, brands and agencies using DoubleClick logs will no longer be able to independently:

  • Verify frequency by cookie or person
  • Count total ad exposure by person
  • Analyze true reach of media placements and campaigns
  • Compare reach and duplication by media placement and campaign
  • Attribute or de-duplicate conversions and clicks
  • Report on user conversion rates
  • Identify unique site traffic

What’s the Back Story

This announcement is part of two trends in the market – GDPR as a pretext for raising the walls of walled gardens, and the shift from logs to trackers to collect data for custom analytics.  

First, Google is saying that the upcoming EU law, GDPR, is forcing them to do this, something many pundits have questioned. Walled gardens are continuing to grow taller, and increasingly are leveraging privacy concerns as the pretext for doing so. Media sellers are also now further pushing their own measurement and attribution solutions in a bid to grade their own homework and prevent cross-platform comparison.  

Google has built a more full-featured measurement and attribution product that is currently in pilot with selected large brands known as Google Attribution 360, part of Google Ads Data Hub.  The announcement to remove the DoubleClick ID from logs is connected strategically to the broader release of Attribution 360 later this year. In fact, Google Ads Data Hub was even plugged in the email to agencies informing them of this change.

Second, this announcement is a reaction to the trend of measurement and attribution vendors disrupting the importance of ad server logs, making Google’s decision seemingly reasonable.

Marketers are increasingly relying on vendors to improve their accuracy through features that are not a part of the traditional ad server log. Specifically, savvy marketers want (a) cross-device graphs and (b) the ability to perform causal attribution modeling. Neither of these goals are unlocked by DCM logs today, leading to the emergence of an ecosystem of measurement platforms, each with their own trackers tied to a cross-device graph for data collection. Of course, one such vendor is Google, whose Attribution 360 offering has both of these advanced features.

As such, DoubleClick’s announcement simply represents a formal passing of the torch in responsibilities from the ad server to the measurement provider for those marketers who have already reduced their dependence on DCM logs.

Recommendations

Brands and agencies need to identify vendors who can provide tracking and measurement capabilities (full disclosure – Thunder Experience Cloud is one such vendor). This change needs to occur before current dashboards built off of DCM logs become disrupted.  

If you are evaluating vendors to address this change, we recommend the following as requirements:

  • Ability to source data from impression trackers rather than logs
  • Visibility across all ad exchanges (several vendors are classified as DMPs by Google and thus blocked from tracking impressions on AdX)
  • Can provide the following categories of metrics:
    • Frequency by person and total ad exposure by person
    • True reach and overlap of media placements and campaigns
    • Attribution using any configurable attribution model, both position-based and algorithmic
  • Media agnostic (be wary of solutions that grade their own homework)
  • Independent of any arbitrage of audience data segments that are evaluated by their measurement product

In addition, some “nice to haves” include:

  • Backed by a deterministic people-based graph
  • Can provide reliable logs with interoperable customer ID to other identified vendors within the brand’s adtech stack if requested
Continue Reading

Digiday eBook: The ABC’s of People-Based Testing

Ad testing is meant to solve a very specific problem: Marketers are tired of launching their ads into a void, crossing their fingers and hoping for a boost in conversions. But, as Digiday reports in a new eBook, a number of widely used ad testing techniques dodge the question by failing to keep track of the individual on the other side of the screen.

As a result, people-based testing techniques are slowly but surely catching on, making it far easier for industry pros to identify real effectiveness and impact to put more media budget behind.  To learn more, check out Digiday’s Did Your Ad Work: The ABC’s of People-Based Testing.

Continue Reading

Thunder Unveils Experience Cloud, Appoints BJ Fox as VP of Engineering


San Francisco, CA – (Feb 21, 2018) – Thunder, the original and leading Creative Management Platform (CMP), today announces it has expanded its offering to three enterprise products: In addition to its Creative Management Platform, Thunder now also offers Dynamic Creative Optimization and Experience Measurement. In addition, the company has added industry veteran BJ Fox to manage and scale product development. Fox brings over 20 years of experience across Internet-based software companies, ranging from startups to large enterprise companies.

The Thunder Experience Cloud is comprised of:

  • Creative Management Platform (CMP)
    • Thunder’s CMP enables users to produce brand experiences across channels and traffic them to the appropriate media platforms.
  • Dynamic Creative Optimization (DCO)
    • Thunder’s DCO personalizes and optimizes brand experiences to increase conversions and media efficiency.
  • Experience Measurement
    • Tracks and measures a brand’s customer lifetime experience, decreasing media waste and improving understanding of how ads work through creative, media and exposure data.

“We are excited to bring BJ on board as we roll out the Thunder Experience Cloud,” said Victor Wong, CEO of Thunder. “His experience in building and managing large engineering teams will be a major asset as we take our platform to the next level of scale and innovation.”

In his new role, Fox will be responsible for overseeing and growing the product development team for Thunder, which expects to grow headcount by 25% in the coming months alone.

Previously, Fox was vice president of engineering at Getjar, a late-stage venture-backed mobile ad network that sold to Sungy Mobile, and then at Glympse, the popular consumer app for location sharing, where he built out the company’s enterprise offerings for Fortune 100 customers and partners. Prior to that, Fox was director of software development at Microsoft, where he held key roles in Windows Azure, Research, and Xbox, managing a team across three continents while driving the initial launch of Windows Azure. He also served as chief architect of Signio, an angel-funded payment startup that was purchased by VeriSign in 2000 for $820M. Most recently, he was vice president of engineering at Xevo, a connected car platform for the auto industry.

Fox said, “Thunder has long been an industry leader; the first Creative Management Platform, it is now paving the way for true people-based marketing with Experience Measurement. I have tremendous respect for Victor and am thrilled to join his team of boundary-pushing thinkers and innovators.”

About Thunder:

Named one of Forbes’ 100 Most Promising Companies in America, Thunder powers ad creative personalization, decisioning and analytics for advertisers, agencies, and publishers across the globe.

Thunder Experience Cloud enables brands to personalize, optimize, and connect ad experiences cross-channel for people-based marketing. Leading brands such as Anheuser-Busch and McCormick rely on Thunder for its creative management platform, dynamic creative optimization and experience measurement to increase conversions and decrease media waste.

####

Press Contact:
Cassady Nordeen
Blast PR on behalf of Thunder
Cassady@blastpr.com

Continue Reading

Panel discussion at OMMA Programmatic –  Can Robots Fix Programmatic Creative?

Thunder CEO Victor Wong had the pleasure of sitting in on a panel about programmatic creative at this year’s OMMA Programmatic event.

“Less than 1/3rd of online users today feel that internet advertising is relevant to them.

“Through programmatic creative and using data, if we can bring that up to 50%, that’s going to lead to hugely impactful outcomes for our clients and hopefully cut down on ad blocking and improve the way people perceive advertising in the future,” said Andrew Sandoval, Director, Biddable Media at The Media Kitchen in an opening statement.

Watch the video recording:

Below is a summary version of the main takeaways from the talk.

Continue Reading

Fireside Chat with Wells Fargo on Customer-Centric Marketing

At a recent insider marketing event in Palo Alto, Thunder CEO Victor Wong sat down with Dane Hulquist, SVP, Head of Media and Retail Channels at Wells Fargo, to talk about customer-centric marketing.

A key focus of the talk was how brands with multiple products oftentimes end up competing as they overlap in targeting a customer, bid against themselves, and create inefficiencies. The interview below has been edited and condensed for clarity.

Hulquist spoke about Wells Fargo’s high-level cultural and strategic shift: a move toward centralization to eliminate internal competition and focus on company goals.

Continue Reading

What is the difference between a CRM and DMP in cross-channel advertising?

Customer Relationship Management (CRM) systems and Data Management Platform (DMP) products are complementary and competing software for targeting people digitally.

A CRM tracks only your registered customers (prospects, loyal, and churned).

A DMP tracks unregistered and registered audiences of your digital media and advertising, which can be a larger set of user profiles than your CRM.

Both technologies are important to data-driven marketers looking to personalize advertising with unique ads to unique sets of targets via a creative solution like a creative management platform.

How do CRMs and DMPs work?

Continue Reading

Thunder Taps Industry Veteran as Sales Director

Former Rocket Fuel Director John Huffman to Support Leading Programmatic Creative Company’s Expansion

San Francisco, CA – (May 22, 2017) – Thunder, the original and leading Creative Management Platform (CMP), has appointed seasoned digital advertising sales executive John Huffman as Sales Director. In his new role, Huffman will be based in Dallas, covering Texas and surrounding states in response to strong market demand for Thunder’s innovative solutions.

Huffman brings over 20 years of experience in digital media sales, maximizing revenue and margin growth for major players in the space, including Adobe, Quantcast, Rocket Fuel and Yahoo!. At Adobe, he grew his business sector from zero clients to over $4 million in revenue within 18 months. During his eight years at Yahoo!, he beat his quota 18 consecutive quarters and was consistently one of the top 5 revenue performers at the company — leading one customer to spend more than $44 million annually.

“I was immediately impressed with Thunder’s offering,” said Huffman. “The company is at the forefront of programmatic creative technology, offering incredible revenue building opportunities for advertisers and agencies. Today’s marketers need a fast, scalable way to cut through the noise and reach consumers with highly personalized messages across channels. Thunder is enabling them to do that in a way that’s never been possible before.”

“We are thrilled to have John on board,” said Victor Wong, CEO of Thunder. “John’s deep data expertise, long-standing industry relationships and proven track record of expanding territories and increasing revenue will be immensely valuable as Thunder continues its rapid growth.”

About Thunder:

Named one of Forbes’ 100 Most Promising Companies in America, Thunder powers ad creative personalization, decisioning and analytics for advertisers, agencies, and publishers across the globe.

Thunder is the original and leading Creative Management Platform. Thunder CMP customers include leading Fortune 1000 companies such as Anheuser-Busch and McCormick, and acclaimed agencies like J. Walter Thompson.

 

####

 

Press Contact:

Cassady Nordeen

Blast PR on behalf of Thunder

Cassady@blastpr.com

 

Continue Reading

What is the difference between dynamic creative and data-driven creative?


Dynamic creatives are ads that can change content on the fly at any time.

Data-driven creatives use information about a customer to inform creative messaging.

Thus, a creative can be both dynamic and data-driven if it puts content in the ad that can be changed at any time, AND the content chosen is based on data.

A creative may be dynamic but not data-driven if it simply changes content without regard to who the targeted user is.
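A toy sketch of the distinction, with hypothetical function and field names: the first renderer is dynamic only, the second is both dynamic and data-driven.

```python
# Toy sketch only; variant and viewer field names are made up.
import random

def render_dynamic_only(variants):
    # Dynamic but not data-driven: content changes on the fly,
    # without regard to who the viewer is.
    return random.choice(list(variants.values()))

def render_dynamic_and_data_driven(variants, viewer):
    # Dynamic AND data-driven: content is swapped at serve time and the
    # choice is informed by data about the viewer.
    if viewer.get("loyalty_tier") == "gold":
        return variants["loyalty_offer"]
    return variants["acquisition_offer"]

variants = {"loyalty_offer": "Thanks for being a gold member",
            "acquisition_offer": "Try us today"}
print(render_dynamic_and_data_driven(variants, {"loyalty_tier": "gold"}))
```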

Continue Reading

What Is Data-Driven Marketing? – Definition, Examples and Case Studies

How data-driven marketing evolved from 2000-2017

Data-driven marketing is the strategy of using customer information for optimal and targeted media buying and creative messaging. It is one of the most transformational changes in digital advertising that has ever occurred.

The rising quality and quantity of marketing data have been followed by explosive growth in the technologies for creative production and automation. These burgeoning mar-tech and ad-tech sectors now enable personalization of every aspect of the marketing experience.

Data-driven decision-making takes the answers to questions like who, when, where, and what message, and makes those answers actionable.

Usage and activation of data, often in an automated or semi-automated manner, allows for a significantly more optimized media and creative strategy. This people-first marketing strategy is more personalized. It has also been responsible for driving considerable ROIs for marketers.

Continue Reading