As someone who has used online customer data extensively in multiple roles and at multiple companies, I really believe most companies are respectful in their use of customer data and are genuinely trying to improve the customer’s life. I tend to share my own data with companies I respect and have a relationship with when they request it. Perhaps that is exactly why it is so difficult for me to hear that this extremely personal data was being taken without knowledge, consent, or request, and used for undefined purposes.
I found this to be a very blunt and honest warning of why Data Projects can fail. It’s a good (if pessimistic) read for anyone who is running or planning to run a data science project within their company.
I’ve mentioned before that click-through rate (or CTR) isn’t the only method for measuring a marketing campaign. While it’s a useful metric, focusing on it without considering other ways a campaign is performing might not give you a complete, accurate picture of what’s really going on.
In part one of this video, I explained that the goal of the campaign is critical to understanding the results, because the goal drastically affects how you should measure. Just as a refresher: would you use the same measurement for a campaign whose goal was to increase sales and checkouts as for a campaign whose goal was to introduce a product? For the first example, you’d look at the people who were served the ad and determine whether their sales and checkouts increased. When introducing a product, by contrast, you would see whether they explored the product area of your website, searched for the product, discussed the product with a sales representative, or downloaded material about the product. The goal of the campaign is critical to determining overall success.
But there are still more options for evaluating how well a campaign is doing its job, and the two I touch on in this video are:
- A/B testing
- Marketing Mix Modeling results – Sales impacts in comparison to other campaigns (isolating for other changes to the business)
While multi-billion dollar companies often use marketing mix models — advanced modeling that attempts to isolate the changes driven by a campaign from other impacts to the business, and derive a revenue impact — most companies are content with A/B testing to determine if their marketing is incrementally improving. In A/B testing, you measure two ads (or creatives or other marketing elements) at the same time, sending customers to one or the other randomly, and compare their performance. It is a very useful way to determine whether a proposed ad is better than the current one, and it allows a business to improve its approach analytically.
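To make that comparison concrete, here is a minimal sketch of the kind of two-proportion z-test an analyst might run on A/B results. The function name and all click counts are made up for illustration; real tests would also consider sample-size planning and multiple comparisons.

```python
import math

def ab_test(clicks_a, views_a, clicks_b, views_b, z_crit=1.96):
    """Two-proportion z-test: does variant B's click rate differ from A's?"""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    # Pooled proportion under the null hypothesis of no difference
    p = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(p * (1 - p) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    # True means the gap is significant at roughly the 95% level
    return p_a, p_b, abs(z) > z_crit

# Hypothetical numbers: 10,000 visitors randomly split between two ads
p_a, p_b, significant = ab_test(clicks_a=120, views_a=5000,
                                clicks_b=165, views_b=5000)
print(f"A: {p_a:.2%}  B: {p_b:.2%}  significant: {significant}")
```

The random split is what makes this work: because neither group is self-selected, a significant gap can be attributed to the ad itself rather than to differences in the audiences.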
In the end, CTR is only one measure of a campaign’s success, and as our industry grows in the intelligence and customization of its marketing campaigns, we will need more than just CTR to determine whether a campaign is successful or not.
This content was created at the request of Multiview and included in their blog. Please see the resulting interview and original post here.
Many people look at work done in the past, find flaws, and are quick to throw away the old in favor of the new. I’m all for evolving our thinking, but I find it difficult to stomach when an analyst or analytics consumer says flat-out, “This measurement is pure BS and doesn’t work at all.” In most cases, there’s a reason why a given measurement received enough acceptance for business to depend on it for decision-making. The measurement may be outdated, and there may be ways to evolve the approach — but it’s important to understand the history before you toss the old measurement away.
Here’s the life of an analyst: You have TONS of data. An avalanche of new measurements. Your first approach is to sort it and segment the data into categories so you can start to see performance differences among the segments. As your understanding of the data evolves, your measurements also evolve. This is the approach that Adolphe Quetelet took when he created the BMI (Body Mass Index) metric in the 19th century. He had a fair amount of data on height and weight and created a metric that correlated with diseases like type II diabetes. The BMI proved to be such a useful and predictive indicator that it was factored into life insurance policies post WWII. In the mid-1990s, it became part of the World Health Organization’s approved metrics for obesity, and it even became familiar to the general public as doctors discussed BMI with their patients.
A large part of why BMI was so successful was its ease of measurement. Height and weight were data points easily captured at a doctor’s visit, and they are strictly objective measures.
BMI is still widely utilized for trend-setting, population demographic measurements, and in predictive models to determine disease propensity or progression. It performs very well in all of these and will continue to be a worthy measurement for many purposes.
This same principle holds true for online marketing analytics and click-through rate (CTR). CTR has often been the golden metric for online advertising, but while CTR is a good metric for spotting trends and getting an elementary understanding of the data, our application of the metric must change as our understanding and use of the data does.
The pushback on BMI came when it was considered at an individual level. For statistical samples, BMI can be a great predictor of how many people in the sample will get a disease… but on an individual level, there are so many things that can impact the BMI metric that it might very well be useless for a particular individual.
For example, a person can be an outlier to the model. The general assumption is that BMI is a measurement of obesity, but because it considers only height and weight, it fails in unexpected ways, including for some of the healthiest people, like bodybuilders. Dwayne “The Rock” Johnson has a BMI of 34.3 – ‘obese’ by the BMI model!
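The metric itself is trivial to compute (weight in kilograms divided by the square of height in meters), which is exactly why it spread so widely. A quick sketch using the standard WHO adult cut-offs shows how a muscular build gets misread; the athlete in the example is hypothetical, not The Rock’s actual measurements.

```python
def bmi(weight_kg, height_m):
    """Body Mass Index: weight divided by height squared (kg / m^2)."""
    return weight_kg / height_m ** 2

def who_category(b):
    # WHO adult cut-offs: <18.5 underweight, <25 normal, <30 overweight, else obese
    if b < 18.5:
        return "underweight"
    if b < 25:
        return "normal"
    if b < 30:
        return "overweight"
    return "obese"

# A muscular 1.85 m, 105 kg athlete lands in the "obese" bucket:
# height and weight alone cannot distinguish muscle from fat
print(who_category(bmi(105, 1.85)))
```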
Should The Rock’s doctor tell him he needs to lose weight? In the same way that you wouldn’t want to hold ALL patients accountable for their BMI in the exact same way, you must not blindly treat all CTRs equally, or you might find out too late that your advertising campaign had the CTR equivalent of bodybuilders – great health, just with different goals.
If, for example, you are running an online campaign whose purpose is branding (where impressions matter more than clicks), you may have a stellar campaign with a weak CTR. Should you cut it? “Yes” is the easy answer… but what if I told you sales have increased significantly since the campaign started?
Does this mean CTR is a bad measurement? No, not at all. It is simply one measurement of many, and a very strong one that the industry gravitated to early because of the ease of capture and strong correlation to great campaigns. It is not, however, the only measurement and simply having a low CTR does not necessarily mean that campaigns need to be revamped.
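For clarity, CTR itself is simply clicks divided by impressions. A toy comparison with invented numbers shows how the campaign with the weaker CTR can still be the stronger performer once you look at a downstream metric like revenue:

```python
def ctr(clicks, impressions):
    """Click-through rate: the share of ad impressions that received a click."""
    return clicks / impressions

# Two hypothetical campaigns: B has the weaker CTR but drives more revenue,
# like a branding campaign whose value shows up later in sales
campaigns = {
    "A": {"clicks": 500, "impressions": 10_000, "revenue": 2_000},
    "B": {"clicks": 150, "impressions": 10_000, "revenue": 9_000},
}
for name, c in campaigns.items():
    rpm = c["revenue"] / c["impressions"] * 1000  # revenue per 1,000 impressions
    print(f"{name}: CTR={ctr(c['clicks'], c['impressions']):.1%}, "
          f"revenue per 1,000 impressions=${rpm:,.0f}")
```

Judged on CTR alone, campaign B would be cut; judged on revenue per impression, it is the clear winner. Both views are valid, which is why the campaign’s goal has to pick the metric.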
This content was created at the request of Multiview and included in their blog. Please see the resulting interview and original post here.
Hasn’t everyone these days already heard how analytics has improved a business: making it more nimble, more efficient, more able to customize to consumer needs? So why are there still unsuccessful analytics projects? Analytics has definitely piqued almost everyone’s interest, and the majority of businesses want analytics as part of their decision making. The problem is that the nontechnical business decision maker doesn’t always have the vision — or the background — to dream up a role where analytics can participate… or gives analytics too large a role. The analysts, at first, will not fully understand the business needs or the problems the company is struggling with, or how to translate a model into a recommendation for a business decision. The analysts may also not know how to communicate the limitations of their analysis or the struggles of their role.
As I look across my career in analytics, I see some differences between the projects I THOUGHT would be successful and the ones that actually were successful, and that left me with some thoughts on what makes an analytics project succeed.
Meet a strategic business need: So many projects envisioned that they would forever change the company. The problem is that, as analysts, we are rarely in control of those decisions. The “build it and they will come” strategy has not traditionally worked well for me; I end up with a project that has no funding and a business that is disinterested in my really cool model. I recommend instead finding a buyer BEFORE you build. Find someone who is energetic and interested in using analytics to help make their decisions, and make sure that what you’re building could have an impact on THEIR success and is also in line with the company’s overall goals. This is easy to say, but sometimes so hard to do… especially when you’re the analyst and you see a potential project that could be so cool, but the decision makers just aren’t yet open to allowing the change, or it isn’t in line with where the company is going. Those are the projects you should bypass. Go after the (sometimes less sexy) ones that will get more investment in terms of resources and business attention.
Explain analytics in a nontechnical way: If the analysis cannot be explained, or you don’t have the capability on your team to help the business decision makers understand HOW your analysis can impact their decision, acceptance will be rare and very difficult. This is especially true as you get into more advanced modeling or more strategic (aka bigger $$ impact) decisions. I’ve spent much of my career coming up with creative ways to explain advanced modeling to business people. It can be a great deal of fun, too. For example, I’ve explained elasticity as the difference between adding flour versus pepper to a soup: one has a much bigger impact on the taste. I still believe this is very important, but as analytics plays more of a role in consumers’ everyday lives, it is also getting easier. I can always point to the everyday analytics I’ve used myself – “It’s like when Netflix knows what movies you want…” or “It’s like UPS estimating when your package will arrive.”
Demonstrate ROI: Ultimately, a project must actually impact ROI. I’ve had several projects that, while able to increase the predictability of an outcome, could not do so with enough of an improvement over regular business decision making to warrant an investment in the model. For example, I completed a project to identify customers who were likely to be detractors in terms of Net Promoter Score (NPS), with the idea that the team would reach out to them and attempt to correct their concerns proactively. The model, while successful — able to slightly improve the attitude of likely detractors — did not provide enough of an additional gain versus the three main variables the team was already using. So the project was discontinued. Being realistic about the actual impact your team’s modeling could have will help you select the work with the greatest impact.
So how can you as an analyst become more successful? Here are a few thoughts…
Walk a mile in their shoes: The first way for an analyst to be successful at a company is to understand the decision well. How is it made today? Why has it been decided in certain ways in the past? What does the business decision maker consider when making the decision? The analyst needs to think of the pressures on the business decision maker. What are their goals? How can data help make them successful? What would the business decision maker like to tell his or her boss or board? If analysts can see the world from the business decision maker’s perspective, they will be much more successful at generating analyses that actually get used, and those successes will generate trust and further funding for more advanced analyses.
Run your presentation past a nontechnical supporter: Nothing makes someone without a background in analytics roll their eyes like a discussion of p-values or detailed modeling approaches. You need to tailor your presentation to the skillset your business decision maker has. They want to trust you and believe that you have chosen the right hold-out sample and made the best assumptions. I’ve coached many team members to remove the two pages’ worth of assumptions from their deck. Highlight one or two of the large ones if appropriate, but you aren’t presenting in your stats class now. This is business, and explaining what you’ve done, and why, in an easy-to-comprehend way is vastly more important. How do I do this? I imagine explaining my presentation to my ten-year-old niece. How would she respond to my explanation of what I’m doing? Or perhaps to my mother… (assuming your mom isn’t a stats professor, of course). Then, if possible, I run my presentation past someone outside of analytics, get their feedback, and see what questions they raise. When you are dealing with the details day in and day out, you might not even realize you are using jargon that others don’t understand. Doesn’t everybody know what segmentation is? Actually, no.
Create proof points for your techniques: Why should the business decision maker use your new model? In the world of business, showing the strength of your model alone might not be enough. Are other people in the company using a similar model to impact a different decision? I used this approach to explain how a model around company investment followed a very widely accepted marketing investment approach. Are there other executives in the company already using models that you have built? Your own reputation is a great way to build trust in your model. If one executive says to another, “I’m not sure how her team does it, but they doubled our impact with a model they built,” that can greatly assist your growth. Does the decision maker have trusted technical team members? Finding their trusted analytics person, walking them through your analytics in great detail, and convincing them to accept the approach can make the official business acceptance a mere formality. Even better, you can incorporate their respected analytics person into the design process — they likely understand the business problems better than you do and will make your first version even stronger. If you don’t have any of those, but you are using a modeling approach that has been applied to similar problems in other industries, highlight that! Is your approach still too cutting edge, or are these proof points not enough? Can it be tested on a small subset? If so, what is the business decision maker risking? A dubious decision maker is much more likely to greenlight a limited test than a full-scale rollout.
All in all, analytics is a growing field, and there is no shortage of ideas for where to make improvements. It’s an exciting place to be, and with a few small changes in prioritization and implementation, you can be much more impactful as an analyst.
Big data will change the way that business is done forever. It will revolutionize marketing, product management, shipping, pricing, and even customer service. The sheer amount of data coming in will provide companies with more predictability and customization power than ever before.
Where did big data start? Big data started with the internet and the tracking of people on the internet through web analytics. Web analysts were the trailblazers of big data and the first to explore how to use the power of data to predict people’s behavior, desires, and needs.
I’ve started to notice in my career a similar progression for any business decision as it slowly flows from a non-analytical business owner making ‘gut calls’ to completely data-driven decision making. This transformation is widespread, occurring everywhere from marketing investment decisions to HR recruitment analytics to predicting IT system failures. For the purposes of simplicity, I’m going to talk about marketing campaign decisions as an example — but the process seems to hold true no matter the context.
Think of the data available to marketing analytics back in the 1960s. If you’ve watched Mad Men, you know how much gut instinct went into decision making. In one episode, an analyst gets dismissed by Don Draper as knowing very little and being of almost no help. Those gurus, who may not have even been able to articulate why they were making decisions, were the trusted leaders. But suddenly, relevant data becomes available, and people begin to dream of what this data could provide. The first step is reporting on trends. As the trend reports come out, the business starts to see connections between some trends and business performance. This then leads to Key Performance Indicator reports, where a metric or set of metrics has gained business acceptance. At this stage in the analytics evolution of a business, a report that referrals are down this quarter might cause concern that sales will miss their target. Why the concern? Because there is a belief that the metric is connected with performance. This is an important and necessary step on the way to the next phase.
Business Intelligence or Data Exploration is where you start to really dive into the data and combine it with the know-how of those who have been doing the work for 20 years — the experts of the company who have an instinctive feel for why one campaign will work and another will fail. This stage can also be critical for helping the analyst understand and dissect some of the decision making, and for the current experts to start trusting the analyst team to understand the intricacies of the business. Sometimes this phase is skipped… but in my experience, that can be extremely detrimental to the progression of analytic decision making in a company, resulting in a more adversarial relationship, with analytics fighting traditional business decision makers and the experts they’ve relied on in the past.
The most exciting phase for most analysts is once the business is ready not just to use data to figure out what’s going on now, but also to use it to ask ‘what if?’: predictive modeling. This is the phase where most statistics majors get excited — where advanced modeling can be created to predict which campaign will perform best, which IT systems are likely to fail, which patient is most likely to develop diabetes in the next year. As these models continue to be developed and prove their worth, the next phase comes into play: the data is collected in real time and decisions can be made automatically in response, with analysts only monitoring the system. New data continues to come in, resulting in an iterative approach. As new or more detailed data sources arrive, we still go through the phases (Finding Trends, KPIs, Data Exploration, Modeling), but the integration of the new data goes faster.
At each point in these phases, the acceptance of the analytics from the business is an important part of progression to the next level. This has become easier as other companies and industries have gained public renown for their use of analytics, and provided pressure to follow suit… but still there can be resistance.
Ownership of the business decision making also follows a phased progression, eventually ending with the analytics team taking over ownership of the business decision. Let’s take a brief walk down history lane. Pricing analytics first took off in the airline industry. The idea of pricing based on plane fill rate and timing came up, and some executive was willing to give it a test — the first phase of the movement towards analytics ownership of the decision. Assuming the test is successful — as it most definitely was in the airline business — the business decision maker is willing to pass along more control and suggests deploying across all markets or lines of business, but isn’t ready to hand over the reins and wants to stay in the loop as a decision approver. Eventually, even this approval effort becomes a formality as the analytics increases in sophistication, and the third phase is reached, where the analyst has the approval authority and is expected to review all decisions made by the model. Eventually the speed and quantity of incoming data become great enough, and model sophistication becomes strong enough, that no human intervention is required in the business decision, only monitoring to improve the models.
As an exception that proves the rule, here’s an example of where pricing algorithms sent an out-of-print book soaring to $2 million before the analysts noticed and modified their pricing model. Quite fun! It shows the exciting and somewhat silly decisions we have to look forward to as more of our day-to-day decisions become machine and model driven.
I think it also shows the importance of having analysts define reasonable upper and lower bounds for the decisions that come out of their models — it’s the lack of such bounds that causes things like flash crashes in the stock market.
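A guardrail like that can be as simple as clamping the model’s output to a configured floor and ceiling before it is ever published. This sketch uses hypothetical numbers loosely inspired by the runaway-book-price story; the function name and limits are illustrative, not from any real pricing system:

```python
def bounded_price(model_price, floor, ceiling):
    """Clamp an algorithmic price to sane business limits before publishing."""
    return max(floor, min(model_price, ceiling))

# A runaway feedback loop between two repricing algorithms proposes an
# absurd price for a used book; the guardrail caps it at the ceiling
print(bounded_price(model_price=2_000_000.0, floor=5.0, ceiling=500.0))

# A discount spiral is stopped at the floor the same way
print(bounded_price(model_price=0.25, floor=5.0, ceiling=500.0))
```

The model still makes the decision; the bounds simply define the range within which it is trusted to act without a human in the loop.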
Eventually there will be complete integration of business decision making and analytics. We are already starting to see this in several areas, including pricing and digital marketing. Can you imagine someone in digital marketing who doesn’t understand page views and referrals, and isn’t comfortable looking at the numbers and predicting future impacts? I cannot. That shows how integrated analytics has become with business decision making in the digital marketing space. I believe that in the next five years, this blurring of analytics and business decision making will continue to take over other areas as well. It’s a great time to be in analytics!
Hi everyone – SMX recorded and published my keynote presentation from the conference. Please check it out. THANKS!