It’s not too surprising to me that digital targeting is starting to be labelled discriminatory. When applied just to marketing messages and pushing specific brands, targeting by gender, race, or political affiliation has been accepted. However, as digital targeting is used for purposes beyond marketing, it raises questions of egalitarian principles and, in this case, legality. It’s a good reminder for those of us in the field to consider the ramifications of what we do as we start applying accepted marketing methods to other problems.
There are so many types of Attribution, and the terminology can seem interchangeable. I’ve watched three people discuss attribution – each meaning a different type – for a full five minutes before they realized they weren’t talking about the same thing. Imagine how challenging this can be when a company attempts an attribution project but not everybody at the company is trying to implement the same thing. I’ve been there! I was once brought in to complete an “attribution” project on which multiple key stakeholders each thought the word meant something different. Setting clear definitions was critical to getting that project to success.
So what are the different definitions of attribution?
- Digital-only attribution (most common)
- Online to Store
- Multiple screens
- Offline (often TV but can be for other offline channels as well) to Online
- Unified Measurement
- Campaign experimentation tests
- Causal lift
- Marketing Mix Modeling
- Generic data connections that can support some of these methods
In future posts, I’ll dive into each of these in greater detail and link them from this post. Realize that software vendors often offer combinations of these attribution types, which can add to the confusion in the market. I have found that establishing these definitions – even when one tool or client is considering multiple types – gives everyone a shared, clear understanding and lets you focus on the potential goals of each.
Attribution requires focusing on evolving and improving.
Inevitably, when I deliver a first version of attribution to a client, I find that a strong advocate for the project suddenly questions whether it’s worthwhile. It’s happened regularly enough that I’ve nicknamed this person “The Perfectionist”. The Perfectionist suddenly thinks of the data missing from the first version – the lack of Facebook views, or that sales-team phone calls aren’t being considered, or even that direct mail isn’t integrated – and wants all of it included before we could ever consider publishing our findings and making decisions from them. And what we have for our first version isn’t perfect. But you have to remember that your Marketing Attribution methods cannot and will not be perfect. Both data limitations and the ever-changing nature of marketing technology and campaigns make it nigh-impossible to build the “perfect” attribution method. And that’s okay. The lack of perfection should not stand in the way of progress.
Just like when a person tries to improve their health by changing their diet and exercise, you don’t consider it a failure if they don’t have the perfect body right away! And they may never have the perfect body – most of us won’t. But they will see improvement. You want to be in that mindset – improving the present, not fixating on the perfect. If you only think about what attribution is NOT doing, refusing to use it for any decisions until it’s ‘perfect’… then you might never benefit from it at all.
And realize today that you are already discussing attribution within your organization. When you discuss how well a campaign is doing, most likely your organization looks at some numbers which are already based on attribution (often last touch). Even companies that lack reporting on campaign performance have a method of defining success around campaigns and channels in order to make changes to those campaigns and channels. Instead of focusing on how to build the perfect attribution, I encourage clients to first think about where they believe their attribution methods are letting them down. If the data capture in these areas isn’t too difficult or costly, then focus on these! They are often the most actionable and productive ways to improve attribution.
In the end, the most successful analytics projects in my experience focus on improvement, rather than trying to build the best solution ever. You’ll improve attribution through iterations, allowing you to get better one step at a time, while you both make better decisions today AND justify investments for tomorrow’s improvements.
I’ve been spending much of the last few years working in the field, implementing, evaluating, and exploring Marketing Attribution. Recently, I was asked to put together a training session on Attribution, and I’ve decided to take some of that content and make it publicly available on my blog. I’ve discovered that the word ‘Attribution’ is often misused – described as a new feature or a new testing method – confusing marketing professionals who don’t know what to expect from Attribution software or projects, and creating dissatisfaction. So I’ll be starting a series on Attribution, discussing its many elements and attempting to help those interested understand it better.
First and foremost, let’s define attribution: Attribution is a way of assigning credit for a desirable outcome. In the world of marketing, we use attribution to assign credit to the marketing efforts that may have helped create a lead or close a sale. Attribution is a tool — a measuring stick used to evaluate marketing efforts. It is NOT a new project or campaign, but some vendors have special features that may be named Attribution, so it is important to get clarity if discussing with a vendor.
The classic attribution example is that of a basketball game.
In this example, attribution is a way to help explain which basketball players were important to the process of scoring a basket. Obviously, only one player touched the ball last, and he is the one said to have ‘scored’ – but did he do it all alone? Who else assisted him? Was he only able to score because one player stole the ball from the opposing team and passed it to yet another player, who then passed it to our scorer? All three players in this example should be attributed some of the credit – to varying degrees! Attribution then helps you understand which players were the most valuable. In marketing, our points are things like leads and sales, and our players are campaigns and channels.
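The credit-splitting idea above can be sketched in a few lines of code. This is a minimal illustration, not any particular vendor’s method: it compares two of the simplest attribution models, last-touch (all credit to the final touchpoint) and an even linear split. The touchpoint names are hypothetical, echoing the basketball example.

```python
# Illustrative sketch of two simple attribution models.
# Touchpoint names are hypothetical, mirroring the basketball example.
touchpoints = ["steal", "first_pass", "pass_to_scorer"]

def last_touch(tps):
    """All credit goes to the final touchpoint (the 'scorer')."""
    return {tp: (1.0 if i == len(tps) - 1 else 0.0)
            for i, tp in enumerate(tps)}

def linear(tps):
    """Credit is split evenly across every touchpoint."""
    share = 1.0 / len(tps)
    return {tp: share for tp in tps}

print(last_touch(touchpoints))  # only the last touchpoint gets credit
print(linear(touchpoints))      # every touchpoint gets an equal share
```

Real attribution models get far more sophisticated (time decay, position-based, algorithmic), but they all answer the same question these two toy functions do: how should one unit of credit be divided among the players?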
If you’ve looked at the market, you know that currently the marketing attribution world is exploding with software options. In addition, many vendors are adding attribution software options to their existing offerings.
There are so many options that it’s hard to make an apples-to-apples comparison. A large number of companies have attempted attribution projects only to eventually abandon them as failures. While there are often multiple reasons for these failures, I believe the root cause is a lack of knowledge and clearly defined objectives. But don’t worry! With a good overall understanding of Attribution, you’ll be able to define your goals, evaluate whether a given Attribution project satisfies those goals, and set your company up for successful Attribution. This blog series will give you the knowledge you need to succeed.
For those who are successful in implementing Attribution, the results are worth it. It can produce amazing improvements to marketing efficiency, by giving you insight into HOW the marketing dollars that are being spent in various areas are actually translating into sales and leads.
I found this article while planning for my upcoming panel session (https://datascience.salon) tomorrow on Artificial Intelligence in Marketing Analytics. I have struggled so much with the term. So many vendors or data scientists are saying that they are doing AI. Recently, I had a friend post that they wrote their first AI and it was only four lines of code. Umm… that’s just not how it works. (Sorry, friend, if you are reading this…)
I think about some of the analysis I was doing in 2006, and I believe that if we’d had a PR or sales person attached to our team, they’d label it “AI” today. At the time, I was doing pricing with a model whose data was refreshed constantly; I tweaked the model output every few months, and a pricing manager evaluated the suggested price changes. Yet it was an ever-improving, ever-changing, fast model that made decisions that didn’t require human intervention. In 2006, AI wasn’t the trendy term, and we’d never have thought to call it that, but…
Is it just me who thinks we have lowered the standard in what is meant by Artificial Intelligence? (Here’s a dated article that I think helps capture my point.)
I’m excited to announce that I’ll be speaking at a conference this month in Dallas. Please come join me at the upcoming #DSSDallas event, April 27th. Join me and other esteemed speakers as we discuss the latest trends in #AI and #MachineLearning with @DataSciSalon. https://lnkd.in/eJSipsT You can even get 20% off using my special discount code! Code: “speaker20”
I worked with a fantastic marketing EVP who adored analytics and used it to drive her decision-making despite not being particularly math-savvy. She quickly discovered that not all insights were equally strong. Due to differences in assumptions, measurements of error, or dirtiness of data, some insights were much better than others. She asked me to color-code the insights coming from my team with my assessment of how strong each was. I was honored by her trust in me to net out the confidence of each insight, but I also felt the responsibility of taking all the uncertainty of an analysis and communicating it with a single color. It was challenging.
In recent years, I’ve recognized that I continue this kind of communication – maybe not as directly as with color coding, but through the rounding and formatting of my charts and insights. I try not to present data as more exact or precise than it actually is, and this helps set expectations for the viewers.
Take, for example, this picture of a speed limit sign from a downtown street. It’s a bit silly – we all know vehicle speeds aren’t measured that precisely. Yet when you do an analysis and show a number like 5.2435, the recipient thinks you have a very exact answer – and it doesn’t matter if elsewhere on the screen you say ‘results are accurate to +/- 1%’. Rounding is a conscious choice, and the rounding you choose communicates your confidence in the precision of the answer. To an analyst, model outputs are sometimes just numbers… but to a business person, there is meaning in whether you round to the nearest penny, dollar, or thousand dollars. It communicates your certainty in your analysis.
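As a small illustration (the model output here is made up), the same raw value can be presented at very different levels of implied precision, and each formatting choice tells the reader a different story about your confidence:

```python
# Illustrative only: how formatting choices signal precision.
raw_output = 5.2435  # hypothetical model output

print(f"{raw_output:.4f}")       # '5.2435' - implies four-decimal certainty
print(f"{raw_output:.1f}")       # '5.2'    - implies precision to the tenths
print(f"{round(raw_output):d}")  # '5'      - implies a ballpark figure
```

All three lines describe the same underlying number; only the claimed certainty changes.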
Since realizing this, I have tried to always be intentional in my rounding and consider in graphs what I show and what it means. The better your communication, the better your chance to make an impact!