Rise of the Chief Marketing Technologist

Whether as a CMT (Chief Marketing Technologist) or a CAO (Chief Analytics Officer), I’m so pleased to see analytics reaching a strategic executive function. There were so many times in my career when I believed that if someone who understood the data could represent it at the executive level, companies would have a competitive advantage.  http://ow.ly/MdBa30mtgEw

MIT DFW Data Science & Machine Learning Seminar

We had such a great turnout at the MIT DFW Data Science & Machine Learning Seminar last Tuesday.


Speaking at the MIT Club on Oct 30th

I’m honored to be representing Dextro Analytics at the MIT Club’s Machine Learning and Data Science seminar on Oct 30th.

Please come out and join me and some fantastic colleagues of mine.

http://dallasftworth.alumclub.mit.edu/s/1314/2015/club-class-main.aspx?sid=1314&gid=194&pgid=45135&cid=68968&ecid=68968&crid=0&calpgid=61&calcid=7729

Prewire results to the under-performers

“I’ve made mistakes in delivering results. I used to take the results into the big meetings, and it would shame the under-performing division, making that division shrink in the chair and then get defensive and question the results. I then learned it was worthwhile to do the work before the meeting: share the results with the under-performing division BEFORE the big meeting. Ask them to co-present. Let them ask for the funding to change their performance. The CEO then agrees to the funding and the meeting goes great. The work is still done; it’s just done before the meeting instead of after, and everyone feels more collaborative.” (paraphrased from a presentation by Kate Woodcock, VP of Customer Advocacy at VMware)
I so agree with her. That’s a hard-learned lesson that seems so obvious in retrospect. The upfront work is worth it.

The technology world

It’s just a little too true.

https://marketoonist.com/2018/10/transformation.html

Statistics aren’t politically correct

There certainly are limitations to human decision making, and when a model is seeded with human decisions, it doesn’t have the guile to hide the statistics of those decisions. In this example, the AI made it plain that Amazon’s hiring decisions were based heavily on gender to a statistically significant degree, even going so far as to score women’s-only colleges as undesirable.

In addition, it sounds like the model Amazon built for recruitment wasn’t able to reliably surface qualified candidates for positions. I’m surprised it was rolled out to recruiters at all. I know of a statistician who, from the 1970s through the 2000s, would testify in court about whether patterns like these in corporate decisions were statistically significant in discrimination cases. The algorithm produced here is really no different in terms of how discrimination was measured, so it’s hard for me to believe this wasn’t caught before a recommendation engine was handed to recruiters. Obviously, this is not the story Amazon would like leaked under any circumstances.
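
That court-style significance test is worth making concrete. Below is a minimal sketch in Python of a chi-square test of independence between gender and hiring outcome; the counts are invented purely for illustration and are not Amazon’s data.

```python
# A minimal sketch of the kind of significance test described above:
# a chi-square test of independence between a protected attribute
# (gender) and a hiring outcome. The counts are made up for
# illustration only; they are not Amazon's data.
from scipy.stats import chi2_contingency

# Rows: men, women. Columns: advanced, rejected (hypothetical counts).
contingency = [
    [480, 520],   # men:   advanced, rejected
    [310, 690],   # women: advanced, rejected
]

chi2, p_value, dof, expected = chi2_contingency(contingency)
print(f"chi-square = {chi2:.1f}, p-value = {p_value:.2e}")

# A very small p-value means the outcome is unlikely to be independent
# of gender -- the same style of evidence an expert witness would
# present in a discrimination case.
if p_value < 0.01:
    print("Outcome is strongly associated with gender; investigate.")
```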

Lessons to learn here:

  • When building models to mimic current human decisions, it’s important to be aware that human discrimination may become part of the model.
  • Before rolling a model out to assist people with recommendations, validate that what it uses to make those recommendations fits within your plans for the future (see the sketch below).
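
As a concrete illustration of that second lesson, here is a minimal sketch of a pre-rollout audit: train (or load) the model, rank its feature importances, and eyeball the top features for protected attributes or obvious proxies. The feature names and data below are entirely made up; in practice you would use your own model and résumé features.

```python
# A minimal sketch of auditing which inputs a recommendation model
# actually relies on before rolling it out. The data is random and
# the feature names are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["years_experience", "num_patents", "womens_college",
                 "referral", "gpa"]
X = rng.random((500, len(feature_names)))
# Synthetic labels that (deliberately) leak the protected proxy,
# mimicking a model trained on biased historical decisions.
y = (X[:, 2] < 0.5).astype(int)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Rank features by importance and eyeball the top ones for protected
# attributes or proxies, such as a women's-college indicator.
for idx in np.argsort(model.feature_importances_)[::-1]:
    print(f"{feature_names[idx]:>18s}  {model.feature_importances_[idx]:.3f}")
```

If a proxy like the women’s-college indicator dominates the ranking, that is exactly the kind of finding that should stop a rollout before recruiters ever see a recommendation.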

https://www.businessinsider.com/amazon-built-ai-to-hire-people-discriminated-against-women-2018-10?utm_source=reddit.com