Hi everyone! I’ve updated my site. Please check it out and follow me there in the future. http://www.nataliekortum.com
Whether as a CMT (Chief Marketing Technologist) or CAO (Chief Analytics Officer), I’m so pleased to see analytics reaching a strategic executive function. There are so many times in my career when I believed that if someone who understood the data could represent it at the executive level, companies would have a competitive advantage. http://ow.ly/MdBa30mtgEw
There certainly are some limitations to human decision making, and when models are seeded with human decisions, they don’t have the guile to hide the statistics behind those decisions. In this example, the AI revealed that Amazon’s hiring decisions were, to a statistically significant degree, based heavily on gender – even going so far as to score women’s-only colleges as undesirable.
In addition, it sounds like the model that Amazon built for recruitment wasn’t able to reliably surface qualified candidates for positions. I’m surprised that it was rolled out to recruiters at all. I know of a statistician who, from the 1970s through the 2000s, would testify in court about whether corporate decisions showed statistically significant patterns of discrimination. The algorithm Amazon produced is really no different in terms of how discrimination is measured, so it’s hard for me to understand how this wasn’t caught before a recommendation engine was handed to recruiters. Obviously, this is not the story that Amazon would like leaked under any circumstances.
Lessons to learn here:
- When building models to mimic current human decisions, be aware that human discrimination might become part of the model.
- Before rolling a model out to the people who will act on its recommendations, validate that what it uses to make those recommendations fits your plans for the future.
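To make the second lesson concrete, here is a minimal sketch of the kind of pre-rollout audit that could have caught this. All of the data, group names, and function names below are hypothetical (my own, not Amazon’s or any library’s); it checks model recommendation rates across groups against the common “four-fifths” rule of thumb used in disparate-impact analysis.

```python
# Minimal sketch of a pre-rollout bias audit for a screening model.
# All data here is hypothetical; in practice you would use your model's
# actual recommendations alongside candidates' attributes.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> list of 0/1 model recommendations."""
    return {g: sum(v) / len(v) for g, v in outcomes.items()}

def disparate_impact_ratio(outcomes, reference_group):
    """Ratio of each group's selection rate to the reference group's.
    The 'four-fifths' rule of thumb flags ratios below 0.8."""
    rates = selection_rates(outcomes)
    ref = rates[reference_group]
    return {g: r / ref for g, r in rates.items()}

# Hypothetical audit data: 1 = model recommended the candidate.
recs = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1, 1, 0],  # 70% selected
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1, 0, 0],  # 30% selected
}
ratios = disparate_impact_ratio(recs, "group_a")
flagged = [g for g, r in ratios.items() if r < 0.8]
print(flagged)  # group_b's ratio (~0.43) falls well below 0.8
```

This is only a screening heuristic, not a legal test; a real audit would add significance testing on much larger samples. But even something this simple, run before handing the tool to recruiters, surfaces the pattern.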
It’s not too surprising to me that digital targeting is starting to be labelled discriminatory. When applied just to marketing messages and pushing specific brands, targeting by gender, race, or political affiliation has been accepted. However, as digital targeting has been used for purposes beyond marketing, it raises questions of egalitarian principles and, in this case, legality. It’s a good reminder for those of us in the field to consider the ramifications of what we do as we apply accepted marketing methods to other problems.
I found this article while planning for my upcoming panel session (https://datascience.salon) tomorrow on Artificial Intelligence in Marketing Analytics. I have struggled so much with the term. So many vendors or data scientists are saying that they are doing AI. Recently, I had a friend post that they wrote their first AI and it was only four lines of code. Umm… that’s just not how it works. (Sorry, friend, if you are reading this…)
I think about some of the analysis I was doing in 2006, and I believe that if we’d had a PR or sales person attached to our team, they’d label it “AI” today. At the time, I was doing pricing with a model whose data was refreshed continuously; the model itself was tweaked every few months by yours truly, and a pricing manager evaluated the suggested price changes. Yet it was an ever-improving, ever-changing, fast model whose day-to-day pricing decisions didn’t require human intervention. In 2006, AI wasn’t the trendy term and we’d never have thought to call it that, but…
Is it just me who thinks we have lowered the standard in what is meant by Artificial Intelligence? (Here’s a dated article that I think helps capture my point.)
I worked with a fantastic marketing EVP who adored analytics and would use it to drive her decision-making despite not being especially math-savvy. She quickly discovered that not all insights were equally strong. Due to differences in assumptions, measurement error, or dirtiness of data, some insights were much better than others. She asked me to color-code the insights coming from my team with my assessment of how strong they were. I was honored by her trust in me to net out the confidence of each insight, but I also felt the responsibility of taking all the uncertainty of an analysis and communicating it with a single color. It was challenging.
In recent years, I’ve recognized that I continue to do this kind of communication – maybe not as directly as with color coding, but through the rounding and formatting of my charts and insights. I try not to present data as more exact or precise than it actually is, and this helps set expectations for the viewers.
Take, for example, this picture of a speed limit sign from a downtown street. It’s a bit silly – we all know that vehicle speeds aren’t measured that precisely. And yet, when you do an analysis and show a number like 5.2435, the recipient assumes you have a very exact answer – and it doesn’t matter if elsewhere on the screen you say ‘results are accurate to +/- 1%’. Rounding is a conscious choice, and the rounding you choose communicates your confidence in the precision of the answer. To an analyst, model outputs are sometimes just numbers, but for a business person, there is meaning in whether you round to the nearest penny, dollar, or thousand dollars. It communicates your certainty in your analysis.
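The rounding choice can even be made systematic rather than ad hoc. Here is a small sketch (the `round_sig` helper is my own name for it, not a standard-library function) that rounds a result to a number of significant figures matching the analysis’s actual precision:

```python
import math

def round_sig(x, sig=2):
    """Round x to `sig` significant figures."""
    if x == 0:
        return 0.0
    # Shift the rounding position based on the number's magnitude.
    return round(x, sig - 1 - math.floor(math.log10(abs(x))))

# A model output of 5.2435 with roughly +/- 1% accuracy only
# supports two or three significant figures:
print(round_sig(5.2435, 2))   # 5.2
print(round_sig(5.2435, 3))   # 5.24
# The same helper rounds a large dollar figure to the nearest thousand:
print(round_sig(12345.6, 2))  # 12000.0
```

Presenting 5.2 instead of 5.2435 is exactly the kind of deliberate signal about certainty described above.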
Since realizing this, I have tried to always be intentional in my rounding and to consider what I show in my graphs and what it communicates. The better your communication, the better your chance to make an impact!