For many years, marketers relied on the ubiquitous “cookie” to measure their audience reach, frequency, and conversions — a temporary identifier dropped onto a user’s browser that persists until the user deletes it (which many do periodically, typically every 30 days or so). This identifier made it possible to track people who saw an ad (reach) and who showed up on the advertiser’s website to take some action (conversions).
However, with the advent of mobile devices like smartphones and tablets, the cookie started to crumble as a reliable way to have a complete view of a consumer’s digital ad experience. In fact, with the introduction of mobile in-app experiences and mobile device IDs, the consumer digital identity fragmented further, making it hard to accurately measure and therefore personalize and optimize ads.
Now with people-based marketing, brands are unifying their view of a consumer by connecting multiple cookies and device IDs. Deterministic identity graphs such as Thunder and LiveRamp are able to use authenticated logins across multiple devices to build a complete, accurate picture of who the consumer is across devices. Most of ad tech hasn’t yet been updated for the people-based world and so still runs on just cookies or single device IDs for measurement, producing misleading results.
Thunder teamed up with LiveRamp to produce the first open, people-based ad server, and as a result of running billions of impressions a month, Thunder has been able to study the difference between traditional cookie-based measurement and new people-based measurement.
The fact that there is a difference isn’t surprising, but the size was quite shocking: cookie-only measurement has been shown to be only ~50% accurate in the new multi-device world!
Everyone wants to optimize their advertising, and that means figuring out what works and what doesn’t. To do that, you need not only to pick a winner and a loser but also to have confidence that you are right.
Unfortunately, it’s not as simple as just comparing the current performance of one ad with another ad — that’s like saying just because one basketball team is currently ahead in the 1st half of the game that it is the best team. You need enough time and data (enough matchups in basketball) to determine a true champion.
In statistics, confidence comes from achieving “statistical significance,” which essentially means knowing the results are likely not just chance but would likely be repeated if the matchup happened again. To achieve statistical significance, you need a minimum sample size of data, based on the target lift in performance that would make one ad a meaningful winner over another.
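The arithmetic behind a minimum sample size is standard. Below is a minimal Python sketch of the textbook two-proportion sample-size formula; the 95% confidence level, 80% power, and the example rates are illustrative assumptions, not any vendor’s implementation:

```python
import math

def min_sample_size(baseline_rate, relative_lift, z_alpha=1.96, z_power=0.8416):
    """Minimum impressions per creative needed to detect a relative lift
    in conversion rate, using the standard two-proportion formula.
    z_alpha: z-score for two-sided 95% significance (1.96)
    z_power: z-score for 80% statistical power (0.8416)
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)  # rate the winner must achieve
    p_bar = (p1 + p2) / 2                     # pooled rate under the null
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# e.g. a 0.5% baseline CTR and a 20% relative lift needs roughly
# 86,000 impressions for each creative being tested
n = min_sample_size(0.005, 0.20)
```

Note how sensitive the number is to the target lift: demanding a smaller, subtler lift drives the required sample size up sharply, which is why testing thousands of variants at once rarely yields significant winners.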
Thunder has built an easy-to-use sample size calculator that lets you input the basic variables of your creative experiment to determine the minimum sample size necessary, whether you’re thinking in terms of how many impressions or how many people need to be in the test to get a meaningful result. Thunder Experience Cloud uses minimum sample sizes to ensure it has enough data for each creative version it is testing before its Dynamic Creative Optimization solution serves the winning creative to all consumers, thereby achieving maximum media impact and efficiency with the highest confidence.
If a vendor tells you there isn’t a minimum, or that they can test thousands of ad versions, ask them whether they have a sample size calculator and how they will achieve statistical significance with their results. Otherwise, it is highly suspect that they are really testing and optimizing your advertising.
Digital marketers just rushed to meet GDPR compliance in May 2018 for digital marketing in Europe. They now need to rush to meet a new California privacy law that goes into effect in January 2020. Compared to GDPR, the California Consumer Privacy Act (also known as CaCPA or CCPA) balances commercial and consumer interests much more evenly, enabling digital marketers to continue data-driven marketing while giving consumers more protections and options.
Both CaCPA and GDPR
apply to businesses that are not located within their borders
assign responsibility for enforcement to a governmental authority
do not permit discrimination against individuals who exercise their legal rights
provide individuals with certain rights with respect to personal data, including the right to access and delete their personal data
address some similar concerns (e.g., the importance of access and transparency)
will require businesses to expend time and money to achieve compliance
GDPR comprehensively addresses many privacy concerns (e.g., disclosures businesses must make to data subjects, process for data breach notification to individuals and regulators, implementation of data security, cross-border data transfers, etc.) while CaCPA is focused on consumer privacy rights and disclosures.
GDPR provides comprehensive private rights of action while CaCPA does not create a private right of action except for data breaches (and with many requirements).
GDPR provides a more comprehensive set of rights to consumers, including the right to data correction and the right to data portability, which CaCPA does not have (unless the business decides to respond to a request for portability by providing the data electronically, in which case it must do so in a readily usable format that can be transmitted to another entity, and only to the extent technically feasible).
GDPR includes considerably more comprehensive requirements on businesses, including privacy by design and default, foreign company registration requirements, data protection impact assessments, 72-hour breach notification, data protection officer requirement, and restrictions on cross border transfers.
GDPR requires data controllers to sign formal, written agreements with processors that meet stated requirements for a processor’s handling of personal data. CaCPA requires a written agreement with a third party only in very limited circumstances.
GDPR requires businesses to assume and contract for appropriate technical and organizational security precautions. CaCPA does not mention security other than to provide a cause of action for lawsuits on behalf of consumers for the unauthorized access, exfiltration, theft, or disclosure of personal information that is not encrypted or redacted that results from the failure to implement and maintain reasonable security procedures and practices.
GDPR requires that a business have a legal justification before it collects, processes, or transfers personal information, with a consumer’s informed and unambiguous consent being one means of achieving that legal justification. CaCPA, on the other hand, does not require businesses to have such legal justification and instead uses an opt-out approach.
Thunder Experience Cloud enables the advertising ecosystem to balance consumer interests in privacy with commercial interests in data-driven advertising. Thunder helps ad platforms prevent data leakage, consumers gain privacy, and advertisers obtain transparency through its anonymized people-based measurement solution. Ask us how to protect consumer data while supporting data-driven advertising if you’re interested to learn more.
San Francisco, CA – (Sept 25, 2018) – Thunder Experience Cloud and LiveRamp®, an Acxiom® company (NASDAQ: ACXM) and leading provider of omnichannel identity resolution, today announced a partnership to enable people-based marketing in three key areas: targeting, measurement, and personalization.
The partnership provides marketers with a more holistic view of their customers by giving them the ability to track ad exposure and conversion across devices directly to their own person IDs, rather than relying on less accurate identifiers such as Cookie IDs or third party measurement providers.
LiveRamp customers use its identity graph for CRM targeting across the open web and walled gardens. Now, with the addition of Thunder’s people-based dynamic ad server, marketers can run campaigns from start to finish on LiveRamp IDs without pause. Ads can be dynamically personalized and measured in real-time using LiveRamp’s identity graph.
“This partnership is truly changing the standards of measurement and relevance in advertising,” said Paul Turner, GM of Technology at LiveRamp. “With Thunder Experience Cloud, marketers have a one stop shop for creating and measuring high-performing omnichannel campaigns based on the person, rather than the device or cookie, ensuring the right ad gets in front of the right person on any device, and bringing us closer than ever before to achieving true people-based marketing, while maintaining LiveRamp’s high standards of transparency and customer privacy.”
“Thunder is the only open, deterministic people-based ad serving and tracking solution today,” added Victor Wong, CEO of Thunder Experience Cloud. “By partnering with LiveRamp, one of the most trusted data platforms, we are giving marketers person-level measurement accuracy on their advertising while protecting the privacy of the consumer through state of the art encryption and anonymization.”
Alongside Adweek and Neustar, Thunder participated in a webinar on the upcoming DoubleClick ID loss in 2019 and how to prepare for it if you’re a data-driven marketer. Learn what sort of advertiser needs to consider switching to an open ID and who is better off sticking with Google’s ID. Watch the full presentation and discussion below:
Note: We provided an overview of Ads Data Hub in Part 1, and covered how Ads Data Hub will impact DMPs in Part 2. This post covers data lakes and how analytics will be impacted in the Ads Data Hub world.
Many large brands today have set up “data lakes” where all their data gets stored and made available to other applications for processing and analysis. These data lakes combined with business intelligence tools such as Tableau have created powerful analytics environments where brands can answer questions such as:
What customer segment is most responding to my ads?
Which ads are driving the most lifetime customer value?
Do people who see my ads spend more with me?
Am I spending more money to reach my customers than they are spending with me?
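A question like “do people who see my ads spend more with me?” boils down to joining ad-exposure and transaction tables on a shared person ID. A minimal pure-Python sketch of that join (the tables, IDs, and amounts here are made up for illustration; in a real data lake this would be a SQL query feeding a BI tool like Tableau):

```python
# Illustrative rows from two data-lake tables, keyed by a shared person ID.
exposures = [{"person_id": "p1"}, {"person_id": "p2"}]
purchases = [
    {"person_id": "p1", "amount": 120.0},
    {"person_id": "p2", "amount": 80.0},
    {"person_id": "p3", "amount": 40.0},   # never saw an ad
]

exposed_ids = {row["person_id"] for row in exposures}

def avg_spend(rows):
    """Average purchase amount across a set of transaction rows."""
    return sum(r["amount"] for r in rows) / len(rows)

# Split purchasers by whether they were exposed to an ad, then compare spend.
exposed = [r for r in purchases if r["person_id"] in exposed_ids]
unexposed = [r for r in purchases if r["person_id"] not in exposed_ids]

print(avg_spend(exposed), avg_spend(unexposed))   # prints: 100.0 40.0
```

The entire comparison hinges on the `person_id` join key being available in both tables — which is exactly what is at stake in the changes described next.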
Brands have staffed up data analysts and data scientists to make sense of all this data and answer these important business questions to improve strategy and validate what partners are telling the brand.
Data lakes ultimately rely on data flowing into them. Google’s recent changes with Ads Data Hub keep data locked within Google Cloud, where it cannot be combined with data outside Google’s controlled environment. As a result, data lakes for marketing are under threat from these changes.
Consequently, brands with sensitive customer data are forced to decide whether to upload that data to Google to run in a Google-controlled data lake, or to keep it off Google Cloud, where they’ll need to find other vendors to solve their needs for tracking, analyzing, and modeling.
Most marketers today use DoubleClick Campaign Manager (DCM) as their primary ad server for delivering ads and tracking ad exposure through to conversion. The largest and most sophisticated advertisers have relied on DCM data for analytics, attribution, and media activation.
These advertisers would “data transfer” log-level data (the raw data for each impression rather than the aggregate data that hides user-level and impression-level information) to their data management platform, data lakes, and vendors that do analytics or attribution modeling.
In April, Google announced it will no longer allow data transfer of DoubleClick log-level data with IDs. This decision effectively destroyed most of the value of the log-level data exported from DCM, because advertisers would no longer know who saw the ads, only how many ads in total were served. DoubleClick could be used only to verify that the total number of impressions bought was actually delivered; all the other powerful use cases, like analytics, attribution, and data management, would no longer be possible with DoubleClick data.
At the same time, Google pushed forward its own solution to this new problem for marketers — Ads Data Hub. This product is essentially a data warehouse where ad exposure data is housed and can be connected to Google’s own solutions for attribution, analytics, and data management.
One new benefit is access to the Google ID, which is a powerful cross-device ID that uses data from users logging into Google services like Android, Maps, YouTube, etc. Previously, DoubleClick was only tracking and sharing a cookie-based DoubleClick ID, which neither connected cross-device ad exposure and conversion nor reconciled multiple IDs to the same person. For many advertisers doing log-level data analysis and activation, this new ID is a big upgrade because it provides more accurate measurement.
One major downside is that this data cannot leave Ads Data Hub. Consequently, you cannot do independent verification of Google’s attribution or analytics modeling. If Google says Google performs better than its competitors, you will have to trust Google at its word. In the past, you would at least have the raw data to apply your own attribution model if you so wanted, or to re-run Google’s calculations to verify its accuracy (since big companies are not infallible).
By extension, outside ad tech providers (such as DMPs, MTA, etc.) who may be best in-class will have a much harder time working with Google solutions. As a result, you will be dependent on Google.
To do matching of off-Google data such as other ad exposure or conversions that happen offline, Ads Data Hub now requires you to upload and store your customer data in the Google Cloud. In that environment, it can be matched with Google’s ID and tracking so you can build a Google-powered point of view of the consumer journey.
In a way, Ads Data Hub is for those who trust but don’t need to verify. It is a good solution for advertisers who today spend the vast majority (75%+) of their ad budget with Google, because if their advertising isn’t working, Google is ultimately accountable for the results no matter what it says about performance. You wouldn’t need to verify calculations to know whether your ad budget is wasted.
What else can I do?
Another solution is to add independent ad serving and/or tracking in addition to or in replacement of Google. By doing so, you can still generate log-level data for Google-sold media but it will not be tied to a Google ID. Instead, you will be using your own ID or a vendor’s cross-device ID to understand who saw what ad when, where, and how often.
This approach is best suited for large advertisers who want best in class ad tech solutions to work together, and who cannot spend all their money on a single media platform to achieve their desired results. Typically brands large enough to afford data lakes, independent attribution providers, and data management platforms are the ones who will have the most to lose by moving to Ads Data Hub.
tl;dr DoubleClick logs are used today by marketers for verification, attribution modeling, and other analysis beyond what is available in standard DCM dashboards.
Log-based analytics require a device or user identifier, so DoubleClick’s removal of the DoubleClick ID represents a disruption of the status quo for log-based analytics solutions.
Fortunately, DCM logs are not the only source of log-level data, or even the best. Brands and agencies increasingly use tracking pixels from measurement vendors that have access to deterministic IDs as a replacement for ad server logs and to support more advanced analysis. Skip to the end if you are just looking for a list of recommendations.
Google’s announcement last Friday that DoubleClick is removing the DoubleClick ID from its logs caused panic in many corners of the digital advertising world. What is the DoubleClick ID? For that matter, what are logs and why do people use them? Confused as to what the big deal is?
Here are the answers:
Beginning on May 25, DCM will stop populating the hashed UserID field (which stores the DoubleClick cookie ID and mobile device IDs) in DoubleClick Campaign Manager and DoubleClick Bid Manager (DBM) logs for impressions, clicks and website activities associated with users in the European Union. DoubleClick intends to apply these changes globally, and will announce timing for non-EU countries later this year.
What this Means for Advertisers
DoubleClick, like most adtech platforms, provides reporting dashboards to monitor performance KPIs. While dashboards provide a good summary of performance, they can’t answer the more granular questions marketers want to ask of their data. That’s why many marketers ingest logs from their ad servers and DSPs. These logs are broken out into impression logs, click logs, and site activity logs.
In order to perform custom analytics with these logs, the logs need to share a common identifier, so that the marketer can tie together recorded impressions from multiple sources (DCM, DSP, etc.) that belong to the same person, as well as clicks and site actions from that person.
That common identifier is generally the cookie ID or, in the case of mobile app ads, mobile device ID. DoubleClick currently has a field in all of their logs called UserID that stores a hashed version of the DoubleClick cookie ID or the mobile device ID tied to an impression, a click or a site action.
By removing this field from their logs, DoubleClick is effectively ending their support for ad server logs that are used for analytics, verification, measurement, or attribution modeling. Without the UserID field, marketers can no longer tie together impressions, clicks and site actions. For example, if you were previously filtering suspicious traffic based on frequency of engagement, you will no longer be able to do so (because each row becomes unique without a deduplicating identifier).
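That dependence on the identifier is easy to see in miniature. Here is a Python sketch (the field names and rows are illustrative, not DCM’s actual log schema) of the kind of per-user frequency analysis that stops working once the identifier column is empty:

```python
from collections import Counter

# Simplified impression-log rows; "user_id" stands in for the hashed UserID field.
impressions = [
    {"user_id": "u1", "placement": "homepage"},
    {"user_id": "u1", "placement": "homepage"},
    {"user_id": "u1", "placement": "homepage"},
    {"user_id": "u2", "placement": "homepage"},
]

# Per-person frequency: the basis for reach, frequency capping, and fraud filtering.
frequency = Counter(row["user_id"] for row in impressions)
unique_reach = len(frequency)                              # 2 distinct people
suspicious = {u for u, n in frequency.items() if n > 2}    # abnormally high frequency

# With the identifier removed, every row looks distinct: frequency, reach,
# and the fraud filter above all collapse into a raw impression count.
```

Four impressions might be four people or one person hammered four times; without the deduplicating ID, the logs cannot tell you which.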
The alternative proposed by Google is for marketers to pay to use the dashboard found in the Google Ads Data Hub. The big issue with this approach is that the marketer has to trust Google to grade their own homework, making the marketing standard “trust, but verify” approach all but impossible.
As a result, brands and agencies using DoubleClick logs will no longer be able to independently:
Verify frequency by cookie or person
Count total ad exposure by person
Analyze true reach of media placements and campaigns
Compare reach and duplication by media placement and campaign
Attribute or de-duplicate conversions and clicks
Report on user conversion rates
Identify unique site traffic
What’s the Back Story
This announcement is part of two trends in the market – GDPR as a pretext for raising the walls of walled gardens, and the shift from logs to trackers to collect data for custom analytics.
First, Google is saying that the upcoming EU law, GDPR, is forcing them to do this, something many pundits have questioned. Walled gardens are continuing to grow taller, and increasingly are leveraging privacy concerns as the pretext for doing so. Media sellers are also now further pushing their own measurement and attribution solutions in a bid to grade their own homework and prevent cross-platform comparison.
Google has built a more full-featured measurement and attribution product that is currently in pilot with selected large brands known as Google Attribution 360, part of Google Ads Data Hub. The announcement to remove the DoubleClick ID from logs is connected strategically to the broader release of Attribution 360 later this year. In fact, Google Ads Data Hub was even plugged in the email to agencies informing them of this change.
Second, this announcement is a reaction to the trend of measurement and attribution vendors disrupting the importance of ad server logs, making Google’s decision seemingly reasonable.
Marketers are increasingly relying on vendors to improve their accuracy through features that are not a part of the traditional ad server log. Specifically, savvy marketers want (a) cross-device graphs and (b) the ability to perform causal attribution modeling. Neither of these goals are unlocked by DCM logs today, leading to the emergence of an ecosystem of measurement platforms, each with their own trackers tied to a cross-device graph for data collection. Of course, one such vendor is Google, whose Attribution 360 offering has both of these advanced features.
As such, DoubleClick’s announcement simply represents a formal passing of the torch in responsibilities from the ad server to the measurement provider for those marketers who have already reduced their dependence on DCM logs.
Brands and agencies need to identify vendors who can provide tracking and measurement capabilities (full disclosure – Thunder Experience Cloud is one such vendor). This change needs to occur before current dashboards built off of DCM logs become disrupted.
If you are evaluating vendors to address this change, we recommend the following as requirements:
Ability to source data from impression trackers rather than logs
Visibility across all ad exchanges (several vendors are classified as DMPs by Google and thus blocked from tracking impressions on AdX)
Can provide the following categories of metrics:
Frequency by person and total ad exposure by person
True reach and overlap of media placements and campaigns
Attribution using any configurable attribution model, both position-based and algorithmic
Media agnostic (be wary of solutions that grade their own homework)
Independent of any arbitrage of audience data segments that are evaluated by their measurement product
In addition, some “nice to haves” include:
Backed by a deterministic people-based graph
Can provide reliable logs with interoperable customer ID to other identified vendors within the brand’s adtech stack if requested
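To make the “configurable attribution model” requirement above concrete, here is a minimal position-based (U-shaped) model sketched in Python. The 40/20/40 weighting is a common illustrative default, not any vendor’s implementation:

```python
def position_based_credit(touchpoints, first=0.4, last=0.4):
    """Split one conversion's credit across an ordered list of touchpoints:
    `first` to the first touch, `last` to the last, and the remainder spread
    evenly across the middle touches (a common U-shaped configuration)."""
    n = len(touchpoints)
    if n == 1:
        return {touchpoints[0]: 1.0}
    if n == 2:
        return {touchpoints[0]: 0.5, touchpoints[1]: 0.5}
    middle = (1.0 - first - last) / (n - 2)
    credit = {}
    credit[touchpoints[0]] = credit.get(touchpoints[0], 0.0) + first
    for tp in touchpoints[1:-1]:
        credit[tp] = credit.get(tp, 0.0) + middle
    credit[touchpoints[-1]] = credit.get(touchpoints[-1], 0.0) + last
    return credit

# e.g. display -> social -> search splits credit 0.4 / 0.2 / 0.4
weights = position_based_credit(["display", "social", "search"])
```

A “configurable” model in the sense of the requirement simply means the weights (and the choice between position-based rules like this and algorithmic models) are under the marketer’s control rather than fixed by the vendor.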