Marketers Getting Data-Driven Round the Bend: Three actions applicable to all businesses running MMM to ensure your data team gives accurate and actionable insights & recommendations
By: Aidan Mark, Global Director, Performance Strategy
In 2006, Clive Humby, the data scientist behind Tesco’s Clubcard, claimed that ‘data is the new oil’. This thought has hung around marketing circles ever since, accelerated by the successes of technology companies like Amazon, Google and Facebook, all famous for using data to drive huge amounts of revenue growth.
In the modern era of marketing, it would take a brave marketer to claim that they do not use at least some form of data-driven marketing to guide their decision making. The pressure is on most marketers to locate and/or build their own data oil field.
And that’s exactly what most companies are doing… but with varying levels of success.
Whilst many brands are successfully deploying a data-led approach to marketing, others are struggling. CMOs in organisations struggling with data often report that data teams ‘share insights and recommendations that don’t make sense’, and many claim that they ‘followed a data-led decision-making process, but this led to worse results’.
So where are we going wrong? Has data been oversold as a concept? Do we need new data sets? Or do we simply need to hire a better data team?
Here we will explore some of the most common pitfalls in marketing effectiveness, specifically in relation to big data and MMM (Marketing Mix Modelling) analysis, along with recommendations for how data teams and marketers can work together to gain more actionable insights that win buy-in from all stakeholders.
Was the Black Friday promotion a huge success because it generated more sales than any other weekend? Or was that always to be expected because… well, we ran it during Black Friday and slashed our prices by 50%?
A misunderstanding of correlation vs causation
Whilst not every brand runs a Black Friday promotion, almost every brand does run marketing at times when external variables are either helping or hindering the marketing team’s efforts. Most businesses have an element of seasonality, which inevitably influences results. And even the rare businesses with little seasonality are still affected by factors outside their control: every brand is exposed to the macro economy and to the actions of competitors, and these external factors influence how marketing performs, both positively and negatively. Simply put, it is harder for marketing to drive excellent results when competitors are spending heavily on media and/or running their own price promotions. Yet all too often, judgements about whether marketing is performing fail to consider these factors, or the extent to which they influence results.
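To make this concrete, below is a minimal sketch, in Python with entirely invented data, of the core logic MMM uses to untangle correlation from causation: external factors enter the model as control variables, so media is only credited with the sales those controls cannot explain. The variable names, coefficients and data are all illustrative assumptions, not a real model.

```python
import numpy as np

rng = np.random.default_rng(42)
weeks = 104  # two years of weekly data

# Hypothetical inputs: media spend that (realistically) rises in
# peak season, plus the external factors discussed above.
seasonality = 20 * np.sin(2 * np.pi * np.arange(weeks) / 52)
media_spend = 50 + 1.5 * np.clip(seasonality, 0, None) + rng.uniform(0, 20, weeks)
competitor_promo = rng.binomial(1, 0.2, weeks) * 30  # rival price cuts
noise = rng.normal(0, 5, weeks)

# Simulated 'true' world: each unit of media spend adds 0.8 units of sales.
sales = 200 + 0.8 * media_spend + seasonality - competitor_promo + noise

# Naive view: regress sales on media alone. Seasonality is absorbed
# into the media coefficient, overstating what media actually did.
X_naive = np.column_stack([np.ones(weeks), media_spend])
beta_naive, *_ = np.linalg.lstsq(X_naive, sales, rcond=None)

# MMM-style view: include the external factors as controls, so media
# is credited only with the variation the controls cannot explain.
X_mmm = np.column_stack([np.ones(weeks), media_spend,
                         seasonality, competitor_promo])
beta_mmm, *_ = np.linalg.lstsq(X_mmm, sales, rcond=None)

print(f"media effect, naive model:   {beta_naive[1]:.2f}")
print(f"media effect, with controls: {beta_mmm[1]:.2f}  (true value: 0.80)")
```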
Data science teams are not well versed in marketing theory and the consumer decision-making process
The best data scientists in marketing are schooled in at least two disciplines. Firstly, they need to be excellent at maths, statistics, and data modelling. But what is often neglected is ensuring that the same team understands how marketing is supposed to influence the consumer behaviour that drives revenue and profit. This lack of understanding underlies many of the pitfalls below.
If data teams don’t understand how marketing is supposed to work, then they will inevitably find it hard to build accurate models that judge the contribution of marketing.
Accept that data often gives conflicting signals
With complex data sets of varying accuracy, data analysis is prone to more subjectivity than your average marketer may realise. Just because a different data scientist would do things differently does not mean the in-house team is wrong and needs to be replaced. Data modelling involves both art and science, and a background in marketing theory will help data scientists know which data signals to listen to most closely.
Don’t expect different forms of marketing effectiveness measurement to give consistent answers
Often when data teams produce complex marketing mix models, those models are perceived to be wrong because they give very different readings of effectiveness compared to other, more basic measurement techniques that happen to be more readily available. Most marketers have been heavily exposed to ‘easy to produce’ forms of measurement like digital attribution and brand trackers, and this exposure shapes our understanding of the world and of what looks ‘right’.
It is perfectly possible for these measurement methods to report different levels of effectiveness and still all be accurate, because they are measuring different things with different methodologies. MMM seeks to quantify incremental sales, whereas digital attribution does not deal in incrementality at all. Similarly, brand awareness trackers can go up and down despite the best efforts of marketing, because brand awareness is influenced by many factors, of which paid media is just one.
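A toy worked example, with all numbers invented, shows how both readings can be simultaneously accurate:

```python
# Toy illustration (all numbers invented) of why last-click attribution
# and MMM disagree while both being internally correct.

# Last-click attribution: credit every conversion whose final touch
# was a paid search click.
conversions_last_touch_search = 1_000

# MMM / incrementality lens: how many of those conversions would NOT
# have happened without the ad? Many clickers were loyal buyers simply
# navigating to the site via the ad.
would_have_converted_anyway = 600

attribution_credit = conversions_last_touch_search
incremental_credit = conversions_last_touch_search - would_have_converted_anyway

print(f"Last-click credit for paid search: {attribution_credit}")
print(f"Incremental (MMM-style) credit:    {incremental_credit}")
# Same channel, same period: the two methods answer different questions.
```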
The marketing team places artificial restrictions on the sales and revenue they ask the data team to model in MMM
I’ve witnessed CMOs give directions to data modelling teams that mean the models cannot possibly work as intended. For example, one CMO of a major telco told their data team not to include third-party retail sales, on the logic that ‘the retail partner is driving this sale, not marketing’, ignoring that brand fame and reputation influence customer choice within the retail store.
I’ve seen another CMO instruct their team not to ‘attribute sales to offline media when we know the sale came from online media’. They ‘know’ this because the user clicked on some form of digital ad prior to converting. This brings us back to correlation vs causation – the true cause of the conversion can lie elsewhere, with the digital click often serving merely as the navigation step at the end of the journey. A more mature method might apply fractional attribution between the two drivers, but human bias often prevents this from happening.
And I’ve even seen a CMO use MMM to gain political influence within their organisation. Most MMM models will attribute some level of sales and revenue to a ‘baseline’ – sales and revenue that is not directly attributable to marketing. When a CMO asks their data team to remove the baseline and credit all sales and revenue to marketing, that may help the CMO gain influence within the organisation, but it renders the data model far less accurate.
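To see why deleting the baseline distorts the model so badly, consider a hypothetical decomposition with entirely invented figures:

```python
# Hypothetical MMM decomposition of £10m quarterly revenue.
decomposition = {
    "baseline (brand equity, distribution, habit)": 6_000_000,
    "tv": 1_500_000,
    "paid search": 1_200_000,
    "paid social": 800_000,
    "out of home": 500_000,
}
total = sum(decomposition.values())
baseline = decomposition["baseline (brand equity, distribution, habit)"]
marketing_driven = total - baseline

print(f"Marketing-driven share of revenue: {marketing_driven / total:.0%}")

# Removing the baseline and redistributing its revenue across channels
# pro rata inflates every channel's contribution (and ROI) by the same
# factor; the model now flatters marketing but no longer matches reality.
inflation_factor = total / marketing_driven
print(f"Inflation applied to every channel ROI: {inflation_factor:.1f}x")
```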
So, if you’re reading this, what can you do to ensure your data team gives accurate and actionable answers? I’d propose three actions applicable to all businesses running MMM:
- Give data scientists a hypothesis that data-led evidence can be collected to prove or disprove. Data teams tend to work best in this way, and you’ll get far more useful, strategy-guiding answers than by simply sharing lots of data and asking for insights.
- Accept that data scientists take an artisan approach to data. Outliers may be removed, data may be cleaned and cleansed, and erratic data sources may be discounted. Marketers tend to assume that all data is clean and ready to use, but any experienced data scientist will tell you that is not the reality for most business data.
- Use MMM as part of a mixed measurement set-up where each technique is used for its own unique strength. There are essentially three forms of measurement that seek to explain ‘why sales happen’, and each gives very different results, which is often why MMM models are seen as wrong. MMM is best used for budget setting and channel-level effectiveness. Digital attribution is best used for optimisation within channel and sub-channel. And for effectiveness questions not well covered by either, a controlled vs exposed experiment is often the best solution, and those learnings can be fed back to enhance the MMM model – a minimal sketch of this follows below.
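As an illustration of that third action, here is a minimal, hypothetical sketch of using a controlled vs exposed (geo holdout) experiment to sense-check an MMM channel estimate. The figures and the simple rescaling are illustrative assumptions, not a prescribed methodology:

```python
# Calibrating an MMM channel estimate against a controlled vs exposed
# experiment (e.g. a geo holdout test). All numbers are invented.

mmm_estimate_paid_social = 900_000  # quarterly revenue MMM credits to paid social

# Experiment: matched regions, ads running in 'exposed' regions and
# switched off in 'control' regions for the same period.
exposed_revenue_per_capita = 12.40
control_revenue_per_capita = 12.05
population_reached = 2_000_000

experiment_lift = (exposed_revenue_per_capita
                   - control_revenue_per_capita) * population_reached

# Treat the experiment as ground truth for this channel and compare
# it with the MMM's reading.
calibration_factor = experiment_lift / mmm_estimate_paid_social

print(f"Experiment-measured lift:  £{experiment_lift:,.0f}")
print(f"Calibration factor vs MMM: {calibration_factor:.2f}")
```

In practice, a factor like this might inform priors in the next model refresh rather than being applied as a crude post-hoc rescaler, but even the simple comparison tells you whether the MMM and the experiment are telling the same story.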