Media Planning & Buying

Brand Safety: How Marketers Can Navigate Digital Placements

April 26, 2017

Over the past month, some advertisers have been pulling their ads from popular platforms such as Google, YouTube and Facebook amid growing concern that their advertisements are appearing alongside offensive content, negatively affecting brand perception. AT&T, for example, recently paused all media on Google-owned YouTube after its ads were placed adjacent to videos featuring terrorism and hate content. The company explained its decision in a statement: “until Google can ensure this won’t happen again, we are removing our ads from Google’s non-search platforms.”

To combat the growing backlash, platforms have started to implement updated moderation guardrails, including automated methods, AI-based systems and placement opt-outs, to reduce the risk that marketers’ ads appear in association with morally questionable content. These steps, while improvements, shouldn’t be treated as a final, fool-proof protection for brands and advertising. Instead, it’s crucial for marketers to understand the role of these features, how to manage and take responsibility for their advertisements, and how to work with brands to confidently place content on the right platforms.


The Current Role of Automated Moderation

Automated methods of content policing have significant appeal because they reduce the need for costly and inefficient manual monitoring for potential violations. Done properly, these systems can address inappropriate content, both visual and text-based, without relying on individuals to review, flag, or verify it. Some automated systems, such as the hash list published by the Internet Watch Foundation, which provides computer-readable representations of known child-abuse images, offer protection for brands and relief for the individuals who would otherwise have to view and review potentially objectionable content manually. However, solutions like these cover only known images, meaning social networks can protect brands only from content that has already been identified as abusive.
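
To make the mechanism concrete, here is a minimal sketch of hash-list matching, assuming a hypothetical in-memory hash list; the helper names and placeholder entry are illustrative, not the IWF’s actual implementation:

    import hashlib

    # Hypothetical fingerprints of known abusive images; in practice such a
    # list is supplied by an organization like the Internet Watch Foundation.
    KNOWN_BAD_HASHES = {
        "a3f5-placeholder-entry",  # illustrative only, not a real list item
    }

    def image_fingerprint(image_bytes):
        # Create a computer-readable representation of an image. A cryptographic
        # hash is used here for simplicity; production systems typically use
        # perceptual hashes that survive resizing and re-encoding.
        return hashlib.sha256(image_bytes).hexdigest()

    def is_known_abusive(image_bytes):
        # Matches only previously identified content; novel abusive material
        # passes through unflagged, which is the limitation noted above.
        return image_fingerprint(image_bytes) in KNOWN_BAD_HASHES

Because the check is pure set membership, it is cheap to run at upload time, but it can never flag content that is not already on the list.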

How Artificial Intelligence Systems Could Help

AI-based systems show promise for identifying potentially objectionable content. Using sophisticated models, publishers could identify content similar to known objectionable images and text, protecting advertisers before ads are placed against it. Ultimately, though, a balance must be struck between the openness of a platform for content creators and brand safety for advertisers. There will likely be growing pains as the aggressiveness of any algorithm is fine-tuned, with stories of both overly aggressive and overly lax policing along the way.
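
That balance can be pictured as a single threshold on a model’s risk score: lowering it polices more aggressively, raising it keeps the platform more open. The sketch below is purely illustrative, with a crude keyword heuristic standing in for a trained model:

    # Illustrative only: a pre-placement gate with one tunable threshold.
    # The scoring function is a stand-in, not any platform's real system.

    def objectionable_score(content_text):
        # Stand-in for a trained model; returns a risk score in [0, 1].
        risky_terms = {"terrorism", "hate"}
        words = content_text.lower().split()
        hits = sum(1 for w in words if w in risky_terms)
        return min(1.0, hits / 3)

    def safe_for_ads(content_text, threshold=0.5):
        # Lowering the threshold is safer for brands but produces more false
        # positives; raising it favors openness for content creators.
        return objectionable_score(content_text) < threshold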

Even with the best algorithms, these systems require accurately tagged information or similar reference content to cross-reference and match against in order to flag a particular piece of content. While machine-learning models are getting better at some elements of recognition, University of Washington researchers have demonstrated that video-recognition systems, such as Google’s public API, can be subverted fairly easily. Intentional subversion and trolling designed to fool brand-safety filters are counter-measures that can be, and have been, deployed. Malicious misrepresentation of ad placements can also fool automated systems: objectionable content or sites may be hidden or labeled as other content and re-sold across multiple networks.

The Human Touch: Leveraging Multiple Tools & Taking Responsibility

Automated methods and AI-based systems are not enough to protect brands against the wide variety of modern brand-safety threats. No single solution will protect against all placement concerns. In fact, many offensive placements occur as a result of marketer oversight and error, such as incorrectly selected ad-placement preferences. To take effective measures against unwanted ad placements, advertisers need to take responsibility and thoroughly review and fully utilize the ad-safety tools at their disposal. Something as simple as content or keyword exclusion targeting can go a long way when setting up campaigns.
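
In spirit, keyword exclusion is a disallow list checked against a placement’s context before an ad is served. A minimal sketch, assuming hypothetical page-keyword data; real platforms expose this as negative keywords or content exclusions in their own campaign settings:

    # Hypothetical exclusion list configured at campaign setup.
    EXCLUDED_KEYWORDS = {"terrorism", "extremist", "hate"}

    def placement_allowed(page_keywords):
        # Reject any placement whose page keywords intersect the exclusion list.
        return EXCLUDED_KEYWORDS.isdisjoint(k.lower() for k in page_keywords)

    print(placement_allowed({"cooking", "recipes"}))  # True
    print(placement_allowed({"news", "terrorism"}))   # False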

Brands should also look to their agencies to take additional measures. 360i and Dentsu, for example, support TAG (the Trustworthy Accountability Group) to promote a safer space for all clients. It’s in both the platforms’ and advertisers’ best interest to consider all available tools when planning, booking, and executing campaigns. This way, platforms can regain their credibility and retain revenue, while marketers are able to accurately showcase their brand messaging in a safe space.


What Marketers Can Do Next

In an increasingly complex digital marketplace, it is critical that brands and their agencies have tough conversations to understand the ad-fraud and brand-safety landscape, the risks of going without guardrails, and the best measures for adding safeguards across the broad mix of digital initiatives.

There are three primary approaches to take based on the individual brand’s comfort level:

  1. Willing to risk: understand the associated brand and fraud risks and be willing to take them.
  2. Want to be safe, but don’t want to shortchange the opportunity: use the full spectrum of Media Rating Council (MRC) accredited safety tools, with both pre-bid and post-bid measures, to ensure buys are as safe as possible while still achieving broad campaign delivery.
  3. Zero tolerance: the brand has no tolerance for any delivery against fraudulent or unsafe content. This approach requires whitelisting site by site as well as use of an MRC-accredited safety tool (see the sketch after this list).
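
For the zero-tolerance posture above, the pre-bid portion amounts to refusing to bid anywhere off the whitelist. This sketch assumes a hypothetical bid-request shape; real pre-bid filtering runs inside an MRC-accredited vendor’s or DSP’s infrastructure, not in code a marketer writes:

    # Hypothetical site whitelist maintained by the brand or its agency.
    APPROVED_SITES = {"example-news.com", "example-sports.com"}

    def should_bid(bid_request):
        # Pre-bid measure: only bid when the domain is explicitly approved.
        # Post-bid measures would separately verify where ads actually ran.
        domain = bid_request.get("site", {}).get("domain", "")
        return domain in APPROVED_SITES

    print(should_bid({"site": {"domain": "example-news.com"}}))  # True
    print(should_bid({"site": {"domain": "unknown-blog.net"}}))  # False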

Unless agencies understand where the brand stands, it is impossible to make the wisest choices together. The best defense against these threats to confidence in advertising markets is for clients to work closely with agencies and publishers to monitor, validate, and review the effectiveness of protections.
 
Kathi Wolfsthal, Social Marketing Supervisor; Valentina Bettiol, Social Marketing Supervisor; Allison Kolber, VP & Group Media Director; and Lawrence O’Donnell, Data Science Manager at 360i contributed to this post.