2017 was the year when some of the world’s largest advertisers woke up to how incomplete their control over their branding is in the digital media ecosystem. This is hardly surprising given the ever-increasing automation of digital campaign management (Facebook Ads, Google AdWords, programmatic buying) across a virtually “infinite” digital landscape. According to an eMarketer survey, overall media quality was the most important media-buying issue for 2017.
To approach the Brand Safety concept, let’s start from brands’ common demand to be associated with the online content, and to appeal to the online audiences, that suit their identity. Indeed, a comScore survey found that the effectiveness of an advertisement is influenced by the environment in which it is placed. Conversely, there are thematic categories that most brands would not want to be associated with in any way: accidents, alcohol, crime, death, disasters, drugs, weapons, gambling, pornography, smoking, terrorism, extremism, etc. [Peer39 Brand Safety Categories].
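As a simplified illustration, the most basic form of such a brand-safety check is a keyword match of page text against unsafe categories. The categories and keywords below are purely illustrative, not the actual Peer39 taxonomy:

```python
# Minimal sketch of a keyword-based brand-safety check.
# Category names and keywords are illustrative only, not a real taxonomy.
UNSAFE_CATEGORIES = {
    "weapons": {"gun", "rifle", "firearm"},
    "gambling": {"casino", "poker", "betting"},
    "terrorism": {"terrorist", "extremist"},
}

def flag_categories(text):
    """Return the set of unsafe categories whose keywords appear in text."""
    words = set(text.lower().split())
    return {cat for cat, kws in UNSAFE_CATEGORIES.items() if words & kws}

print(flag_categories("Online casino poker night"))  # {'gambling'}
```

Real systems go far beyond simple keyword lists, since the same word can be safe or unsafe depending on context, which is exactly why the industry is turning to machine learning, as discussed below.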
While brands demand association with the best online content, the digital media ecosystem is growing at rates described as “Content Shock”. The volume of content of every category, style, and purpose that is uploaded to digital media each minute is almost inconceivable.
Pandora’s box opened in March 2017, when The Guardian pulled its advertising from Google and YouTube after finding its ads next to extremist content. Big names in Great Britain and Ireland followed, among them McDonald’s UK, Tesco and Sainsbury’s, Audi UK, L’Oreal, Royal Bank of Scotland, HSBC and Lloyds, triggering a series of announcements and commitments by Google to improve its controls and turning Brand Safety into one of the “hottest” advertising topics of 2017. The crisis also crossed into the United States, with Starbucks, Dish, AT&T, Pepsi, General Motors, Walmart and Johnson & Johnson halting or limiting their advertising spending on YouTube.
Developments – Policies – Artificial Intelligence
The following months brought changes to Google’s policies on both YouTube and the AdSense ad network, aimed at reducing extremist content, hate speech, violence, and the like. In a recent announcement, Google published results from the manual human review of more than 1,000,000 videos, carried out to refine its automatic flagging technology. It also announced that 98% of the videos now removed for violating content policies are flagged by this technology.
More recently, YouTube announced that it plans to grow its human content-review capacity to 10,000 people by 2018. At the same time, it stressed that it is constantly improving the accuracy and speed of its Artificial Intelligence algorithms, which now flag 70% of removed videos within 8 hours of upload, a volume corresponding to the weekly output of 180,000 human reviewers. The other major players, such as Facebook, Twitter and Netflix, are taking similar actions, adopting stricter content-acceptance policies and improving the controls applied to that content. Indicatively, an estimated 100,000 workers have already been tasked with reviewing and “cleaning” social media (Twitter, Facebook), mobile apps and cloud storage services of content that violates each platform’s policies, according to an earlier article on wired.com.
The most promising solution seems to come from Artificial Intelligence, which “learns” from how humans review, remove or approve digital content, and promises to safeguard the quality of media content. At the same time, applying AI to the management and optimization of digital campaigns aims to deliver the right advertising messages to the right people, at the right time, in a “safe” environment.
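The “learning from human decisions” idea can be sketched as a toy supervised classifier: count which words appear in content that past moderators approved versus removed, then score new content against those counts. This is a pure-stdlib illustration of the principle, not any platform’s actual system, and the training examples are invented:

```python
from collections import Counter

# Toy training data: past human moderation verdicts (illustrative only).
decisions = [
    ("family cooking recipe video", "approve"),
    ("travel vlog city guide", "approve"),
    ("extremist propaganda recruitment", "remove"),
    ("graphic violence footage", "remove"),
]

# Count word occurrences per verdict, the core of a naive Bayes-style model.
counts = {"approve": Counter(), "remove": Counter()}
for text, verdict in decisions:
    counts[verdict].update(text.lower().split())

def predict(text):
    """Predict the verdict whose training vocabulary best matches the text."""
    words = text.lower().split()
    scores = {v: sum(c[w] for w in words) for v, c in counts.items()}
    return max(scores, key=scores.get)

print(predict("new extremist recruitment clip"))  # remove
```

Production systems replace these raw word counts with large neural models trained on millions of labeled examples, but the loop is the same: human reviewers label content, and the model generalizes those labels to flag new uploads at machine speed.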