
13 Nov 2024 05:51

Advertising & Marketing

Marketing in the age of assumption

There is no doubt we live in a post-factual world, and that is as true of marketing as it is of the news media. We have more data than ever on what people do, but we rarely try to understand it. Instead, we leave interpretation to algorithms, AI and machine learning. We assume that the results are valid and the resulting actions correct.

In the past we did not have all the data we needed to understand why people were buying our brand or not. So we identified the biggest blanks and filled them in as best we could. Maybe the solution was to conduct a few focus groups, maybe it was to conduct a usage and attitude study, or maybe we just hung out with shoppers and asked them a few questions. We made the best assumptions we could and moved on. Then researchers started to combine data sets to get an idea of which metrics really did anticipate behavior, and the degree of assumption involved in interpreting results declined.

At Kantar Millward Brown our early efforts focused on anticipating how well a specific TV ad might generate brand-linked ad awareness on a tracking study, and the Link pre-test came into being. Then we moved on to try to predict the impact on sales. At the same time we started to investigate which attitudinal equity measures were most strongly indicative of purchasing, and BrandDynamics was created. In each case, the biggest task was to assemble a data set that combined attitudinal and behavioral data to work with. Then we conducted extensive desk research and experimentation to try to identify how things actually worked before we created a measurement system to collect the most relevant and actionable data.

Today the process of producing findings from data has been reversed by the huge amount of data freely available to us on what people say and do in the digital world, and by the dramatic increase in computing power that can be brought to bear on these data sets. But I cannot help but think that this has once again increased the degree of assumption attached to the findings. In part this is due to the confidence that we appear to invest in conclusions drawn by computation rather than questioning. The belief is that we no longer need to create data to help answer a specific set of questions; we can simply interrogate the available data ex post to find answers to questions we never knew we needed to ask.

In some cases this is absolutely true, but to assume this can be done for all information needs seems wrong to me. There is always the need to validate that what we measure actually does relate to behavioral outcomes that matter. And I believe that there will always be a need to ask some questions, if only to confirm conclusions initially drawn from big data. Today too many assumptions are made about what behavioral patterns mean and whether the available data is still applicable and reliable. For example, just think of the crass way in which you have been retargeted by advertising for brands you have already bought or decided not to buy.

Does that mean I think we should avoid using big data and computation to uncover findings? Absolutely not; just look at the way Kantar Millward Brown uses dynamic linear modeling of social and search data to identify shifts in brand salience as they happen. We should make the most of the new opportunities that big data presents, but we would do well to remind ourselves that making assumptions is never a good idea. But what do you think?
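For readers wondering what "dynamic linear modeling" of a data stream can look like in practice, here is a minimal sketch, and nothing more than that: it is not Kantar Millward Brown's actual method, and the weekly search-interest series, variance settings and threshold below are all made-up assumptions for illustration. It fits a simple local-level model with a Kalman filter and flags weeks where the estimated underlying level shifts sharply.

```python
import numpy as np

# Minimal local-level dynamic linear model (random walk plus noise),
# fitted with a Kalman filter. Purely illustrative: the series and the
# variance settings below are assumptions, not real brand data.

rng = np.random.default_rng(0)

# Hypothetical weekly search-interest index for a brand, with a step
# change at week 52 standing in for a genuine shift in salience.
weeks = 104
series = 50 + rng.normal(0, 2, weeks)
series[52:] += 8

obs_var = 4.0      # assumed observation noise variance
level_var = 0.5    # assumed evolution variance of the underlying level

level, level_p = series[0], 1.0   # initial level estimate and its variance
levels = []
for y in series:
    # Predict: the underlying level is assumed to follow a random walk.
    level_p += level_var
    # Update: blend the prediction with the new observation.
    gain = level_p / (level_p + obs_var)
    level += gain * (y - level)
    level_p *= (1 - gain)
    levels.append(level)

levels = np.asarray(levels)

# Flag weeks where the estimated level moves sharply - the kind of shift
# one would want to investigate rather than simply assume an explanation for.
shifts = np.where(np.abs(np.diff(levels)) > 1.5)[0] + 1
print("Weeks with a notable estimated shift in level:", shifts)
```

Even in a toy example like this, the model only tells you that something moved, not why; that question still needs to be asked and answered.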

 

Written by Nigel Hollis, Executive Vice President and Chief Global Analyst at Kantar Millward Brown.
