Whether as a CMT (Chief Marketing Technologist) or CAO (Chief Analytics Officer), I’m so pleased to see analytics reaching a strategic executive function. There were so many times in my career when I believed that if someone who understood the data could represent it at the executive level, companies would have a competitive advantage. http://ow.ly/MdBa30mtgEw
I’m honored to be representing Dextro Analytics at the MIT Machine Learning and Data Science event on Oct 30th.
Please come out and join me and some other fantastic colleagues of mine.
It’s just a little too true.
There certainly are limitations to human decision making, and when a model is seeded with human decisions, it doesn’t have the guile to hide the statistics of those decisions. In this example, the AI revealed a statistically significant gender bias in Amazon’s hiring decisions, even scoring women’s-only colleges as undesirable.
In addition, it sounds like the model Amazon built for recruitment wasn’t able to reliably surface qualified candidates for positions. I’m surprised that it was rolled out to recruiters at all. I know of a statistician who, from the 1970s through the 2000s, would testify in court about whether patterns like these in corporate decisions were statistically significant in discrimination cases. The algorithm Amazon produced is really no different in terms of how discrimination was measured, so it’s hard for me to believe this wasn’t caught before the recommendation engine was handed to recruiters. Obviously, this is not the story that Amazon would like leaked under any circumstances.
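For illustration only, here is a minimal sketch of the kind of test a statistician might use to show that a recommendation rate differs significantly by gender. The counts are entirely made up, and the two-proportion z-test is just one common choice for this kind of question:

```python
import math

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two proportions.

    x1/n1: successes and trials in group 1; x2/n2: same for group 2.
    Returns the z statistic and the two-sided p-value.
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                      # pooled proportion under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: 270 of 900 male candidates recommended (30%)
# vs. 120 of 600 female candidates recommended (20%).
z, p = two_proportion_z_test(270, 900, 120, 600)
print(f"z = {z:.2f}, p = {p:.2e}")
```

With these made-up counts the gap is far outside what chance would explain (p well below 0.05), which is exactly the kind of disparity a recommendation engine trained on biased decisions will faithfully reproduce.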
Lessons to learn here:
- Be aware that when you build models to mimic current human decisions, human discrimination may become part of the model.
- Validate that what your model uses to make recommendations fits your plans for the future before rolling it out to the people it is meant to assist.