Facebook’s latest official update to the program provided new insights into how it intends to reduce brands’ exposure to content they may find objectionable. “From our conversations with the industry, we know that this solution may not solve the needs of every advertiser and that some advertisers are looking for content-level granularity,” Meta said in a blog post about the brand safety controls.
“We’re excited to be able to give advertisers more choice in the News Feed brand suitability controls available to them,” said Samantha Stetson, Meta’s VP of Client Council and Industry Trade Relations, in the announcement, adding, “especially as we start to develop and test a content-based control next year. It will take time but it’s the right work to do.”
The “topic exclusion” categories are one way to lower the rate at which brands’ ads appear above or below content they find objectionable, or outside their brand suitability guidelines. For years, Facebook had been reluctant to introduce such controls for ads in News Feed, arguing that context was not an important factor in advertising in that setting. Facebook had already offered “topic exclusion” in other ad placements, such as in-stream video ads, where brands could pick and choose the types of videos they sponsored. News Feed, however, is a dynamic, ever-changing stream of posts, and was considered a far harder challenge.
Brands want assurances that their ads will not show up next to posts that could be considered harmful, whether those contain hate speech or disinformation. The subject took on greater urgency within the past two years: in July 2020, brands joined the NAACP and the Anti-Defamation League in an advertising boycott to protest disinformation and hate speech on Facebook. It’s a subject the company has repeatedly promised to tackle, publishing quarterly reports on how it catches hate speech and removes other offensive material that breaks its community guidelines.
On Thursday, Facebook explained how “topic exclusions” have worked so far. In August, Ad Age reported on how Mondelēz saw its advertising campaigns change when it tested the “topic exclusions.” Applying the controls raised ad prices by 15%, partly because it limited the potential audience for the ads. Mondelēz called it a fair trade at the time: higher prices for more safety.
Facebook’s controls work by identifying accounts that are most likely to have engaged with posts that fall under topics like crime, tragedy, news and politics, and then not showing certain ads to those users.
“When an advertiser selects one or more topics, their ad will not be delivered to people recently engaging with those topics in their News Feed,” Meta said in Thursday’s blog post.
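The delivery rule Meta describes, withholding an ad from users who recently engaged with posts in an excluded topic, can be sketched as a simple eligibility check. This is purely an illustration: the function name, the topic labels, and the recency window are assumptions, not details of Meta’s actual system, which the blog post does not specify.

```python
from datetime import datetime, timedelta

# Assumed recency window; Meta does not disclose how "recently" is defined.
RECENCY_WINDOW = timedelta(days=7)

def eligible_for_ad(excluded_topics, user_topic_engagements, now):
    """Hypothetical sketch: return True if the ad may be delivered.

    excluded_topics: topics the advertiser selected for exclusion.
    user_topic_engagements: maps topic -> timestamp of the user's most
    recent engagement with a post classified under that topic.
    """
    for topic in excluded_topics:
        last_engaged = user_topic_engagements.get(topic)
        if last_engaged is not None and now - last_engaged <= RECENCY_WINDOW:
            # User recently engaged with an excluded topic: skip delivery.
            return False
    return True

now = datetime(2021, 11, 11)
engagements = {"news and politics": now - timedelta(days=2)}
print(eligible_for_ad({"crime and tragedy"}, engagements, now))  # True
print(eligible_for_ad({"news and politics"}, engagements, now))  # False
```

Note that this models exclusion at the audience level (who sees the ad), not at the content level (what the ad appears next to), which matches Meta’s distinction between the current controls and the content-based control planned for next year.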
On Thursday, Meta said in the blog post that the Facebook News Feed controls were 99% effective at keeping brands away from posts related to “tragedy and conflict,” and 94% effective at keeping ads away from news and politics content.
Facebook also announced that it would introduce third-party verification before the end of the year. In July, Ad Age reported that Integral Ad Science and DoubleVerify were hoping to become partners to help measure the effectiveness of the controls.
“Before the end of the year, we also plan to collaborate with third-party brand safety partners to develop a solution to verify whether content adjacent to an ad in News Feed aligns with a brand’s suitability preferences,” Meta’s blog post said. “We’ll start with a request for proposals in the coming weeks.”