Speaking at the MIT Club on Oct 30th

I’m honored to be representing Dextro Analytics at the MIT Club’s Machine Learning and Data Science event on Oct 30th.

Please come out and join me and some other fantastic colleagues of mine.


Prewire results to the under-performers

“I’ve made mistakes in delivering results. I used to take the results into the big meetings, and it would shame the under-performing division — that division would shrink in their chairs, then get defensive and question the results. I then learned it was worthwhile to do the work before the meeting: share the results with the under-performing division BEFORE the big meeting. Ask them to co-present. Let them ask for the funding to change their performance. The CEO then agrees to the funding, and the meeting goes great. The work is still done; it’s just done before the meeting instead of after, and everyone feels more collaborative.” (paraphrased from a presentation by Kate Woodcock, VP of Customer Advocacy at VMware)
I so agree with her. That’s a hard-learned lesson that seems so obvious in retrospect. The upfront work is worth it.

Statistics aren’t politically correct

There certainly are limitations to human decision making, and when a model is seeded with human decisions, it doesn’t have the guile to hide the statistics of those decisions. In this example, the AI revealed that Amazon’s hiring decisions leaned heavily, and statistically significantly, on gender — even down to scoring women’s-only colleges as undesirable.

In addition, it sounds like this recruitment model Amazon built wasn’t able to reliably surface qualified candidates for positions. I’m surprised it was rolled out to recruiters at all. I know of a statistician who, from the 1970s through the 2000s, would testify in court about whether corporate decisions in discrimination cases were statistically significant. The algorithm’s output is really no different in terms of how discrimination was measured, so it’s hard for me to believe this wasn’t caught before a recommendation engine was handed to recruiters. Obviously, this is not the story that Amazon would like leaked under any circumstances.
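The kind of significance test used in those court cases can be illustrated in a few lines. The sketch below is a standard two-proportion z-test on hypothetical hiring outcomes — the numbers are entirely made up for illustration and have nothing to do with Amazon’s actual data; the function name is my own:

```python
import math

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two proportions."""
    p_a = success_a / n_a
    p_b = success_b / n_b
    # Pooled proportion under the null hypothesis of no difference
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: 90 of 500 male applicants advanced,
# versus 45 of 500 female applicants.
z, p = two_proportion_ztest(90, 500, 45, 500)
print(f"z = {z:.2f}, p = {p:.5f}")
```

A gap like this one yields a p-value far below 0.05 — exactly the kind of result an expert witness would point to as statistically significant evidence of disparate treatment.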

Lessons to learn here:

  • When building models to mimic current human decisions, be aware that human discrimination may become part of the model.
  • Before rolling a model out to people as a recommendation aid, validate that the features it uses to make recommendations fit your plans for the future.
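One minimal form of the validation in the second lesson is to compare model scores across a protected group before rollout. This is a sketch with a hypothetical helper and made-up resume scores, not any real audit procedure:

```python
from statistics import mean

def score_gap_by_group(scores, groups, protected):
    """Mean model score for everyone else minus mean score for the protected group."""
    in_group = [s for s, g in zip(scores, groups) if g == protected]
    out_group = [s for s, g in zip(scores, groups) if g != protected]
    return mean(out_group) - mean(in_group)

# Made-up candidate scores grouped by gender; a persistent gap like this
# should block rollout until the cause is understood.
scores = [0.82, 0.78, 0.90, 0.55, 0.48, 0.60]
groups = ["M", "M", "M", "F", "F", "F"]
print(round(score_gap_by_group(scores, groups, "F"), 2))
```

A near-zero gap doesn’t prove the model is fair, but a large one is a clear signal that the model has absorbed something you didn’t intend.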