Moving to a new site

Hi everyone!  I’ve updated my site.  Please check it out and follow me there in the future.

Rise of Chief Marketing Technologist

Whether as a CMT (Chief Marketing Technologist) or CAO (Chief Analytics Officer), I’m so pleased to see analytics reaching a strategic executive function. There were so many times in my career when I believed that if someone who understood the data had a seat at the executive table, companies would have a competitive advantage.

Prewire results to the under-performers

“I’ve made mistakes in delivering results. I used to take the results into the big meetings, and it would shame the under-performing division, making that division shrink in the chair and then get defensive and question the results. I then learned it was worthwhile to do the work before the meeting. Share the results with the under-performing division BEFORE the big meeting. Ask them to co-present. Let them ask for the funding to change their performance. The CEO then agrees to the funding and the meeting goes great. The work is still done, it’s just done before the meeting instead of after, and everyone feels more collaborative.” (paraphrased from a presentation by Kate Woodcock, VP of Customer Advocacy at VMware)
I so agree with her. That’s a hard-learned lesson that seems so obvious in retrospect. The upfront work is worth it.

Statistics aren’t politically correct

There certainly are some limitations to human decision making, and when models are seeded with human decisions, they don’t have the guile to hide the statistics of those decisions. In this example, the AI made it statistically evident that Amazon’s decisions to hire were based heavily on gender, even going down to scoring women’s-only colleges as undesirable.

In addition, it sounds like the model Amazon built for recruitment wasn’t able to reliably provide qualified candidates for positions. I’m surprised that it was rolled out to recruiters at all. I know of a statistician who would testify in court from the 1970s through the 2000s about whether patterns like these in corporate decisions were statistically significant in discrimination cases. The algorithm here is really no different in terms of how discrimination was measured, so it’s hard for me to believe this wasn’t caught before a recommendation engine was given to recruiters. Obviously, this is not the story that Amazon would like leaked under any circumstances.

Lessons to learn here:

  • Be aware that when building models to mimic current human decisions, human discrimination might become part of the model.
  • Validate that the signals your model uses to make recommendations fit your plans for the future before rolling it out to assist people.
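To make the second lesson concrete, here’s a minimal sketch (mine, not from the Amazon story) of one way to audit a model before rollout: test whether its recommendations are statistically independent of a protected attribute. The helper and every count below are invented for illustration; a chi-square test of independence stands in for whatever validation your own situation calls for.

```python
# Hypothetical pre-rollout audit: are the model's recommendations
# independent of a protected attribute? All counts are made up.

def chi_square(table):
    """Chi-square statistic of independence for a contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# Rows: candidate gender; columns: (advanced, rejected) by the model.
table = [
    [200, 300],  # men
    [80, 420],   # women
]

stat = chi_square(table)
# Critical value for 1 degree of freedom at alpha = 0.01 is about 6.63.
print(f"chi-square = {stat:.1f}")
print("not independent of gender -- investigate before rollout"
      if stat > 6.63 else "no evidence of dependence")
```

With these invented counts the statistic is far above the critical value, which is exactly the kind of red flag you’d want to see before recruiters ever touch the tool, not after.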


Discrimination Issues with Digital Targeting

It’s not too surprising to me that digital targeting is starting to be labelled discriminatory. When applied just to marketing messages and pushing specific brands, targeting by gender, race, or political affiliation has been accepted. However, as digital targeting is used for purposes beyond marketing, it raises questions of egalitarian principles and, in this case, legality. It’s a good reminder for those of us in the field to consider the ramifications of applying accepted marketing methods to other problems.

Really? Is it AI?

I found this article while planning for my upcoming panel session (tomorrow!) on Artificial Intelligence in Marketing Analytics. I have struggled so much with the term. So many vendors and data scientists are saying that they are doing AI. Recently, I had a friend post that they wrote their first AI and it was only four lines of code. Umm… that’s just not how it works. (Sorry, friend, if you are reading this…)

I think about some of the analysis I was doing in 2006, and I believe if we’d had a PR or sales person attached to our team, they’d label it “AI” today. At the time, I was doing pricing with a model whose data was refreshed constantly; the model was tweaked every few months by yours truly, and the suggested price changes were evaluated by a pricing manager. Yet it was an ever-improving, ever-changing, fast model that made decisions that didn’t require human intervention. In 2006, AI wasn’t the trendy term and we’d never have thought to call it that, but…

Is it just me who thinks we have lowered the standard in what is meant by Artificial Intelligence?  (Here’s a dated article that I think helps capture my point.)


Does it really matter if you round your model results?

I worked with a fantastic marketing EVP who adored analytics and used it to drive her decision-making despite not being so math-savvy. She quickly discovered that not all insights were equally strong. Due to differences in assumptions, measurement error, or dirtiness of data, some insights were much better than others. She asked me to color-code the insights coming from my team with my assessment of how strong each insight was. I was honored by her trust in me to net out the confidence of each insight, but I also felt the responsibility of taking all the uncertainty of an analysis and communicating it with a single color. It was challenging.

In recent years, I’ve recognized that I continue to do this communication – maybe not as directly with color coding, but with the rounding and format of my charts and insights. I try not to present data as more exact or precise than it actually is, and this helps set expectations for the viewers.

[Photo: an oddly precise speed limit sign on a downtown street]

Take, for example, this picture of a speed limit sign from a downtown street. It’s a bit silly — we all know that vehicle speeds aren’t measured that precisely. And yet, when you do an analysis and show a number like 5.2435, the recipient thinks you have a very exact answer — and it doesn’t matter if elsewhere on the screen you say ‘results are accurate to +/- 1%’. Rounding is a conscious choice, and the rounding you choose communicates your confidence in the preciseness of the answer. To an analyst, model outputs are sometimes just a number… but for a business person, there is value in whether you round to the nearest penny, dollar, or thousands of dollars. It communicates your certainty in your analysis.
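To illustrate the idea (this helper is mine, not from the original post, and the numbers are invented): you can round a result down to the precision its error bar actually supports, so the digits you display never overstate your certainty.

```python
import math

def round_to_uncertainty(value, relative_error):
    """Round `value` so trailing digits don't overstate its precision.

    Keeps digits only down to the order of magnitude of the
    absolute error implied by `relative_error`.
    """
    absolute_error = abs(value) * relative_error
    digits = -int(math.floor(math.log10(absolute_error)))
    return round(value, digits)

# A result "accurate to +/- 1%" shouldn't be shown as 5.2435:
print(round_to_uncertainty(5.2435, 0.01))    # -> 5.24
# And a revenue figure with 1% error shouldn't show every dollar:
print(round_to_uncertainty(152437.9, 0.01))  # -> 152000.0
```

The same rule extends to charts: pick axis labels and tick formats at the rounding level you would defend, not at the precision the tool happens to print.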

Since realizing this, I have tried to always be intentional in my rounding and consider in graphs what I show and what it means. The better your communication, the better your chance to make an impact!