Statistics aren’t politically correct

There certainly are limitations to human decision making, and when a model is seeded with human decisions, it doesn't have the guile to hide the statistics behind those decisions.  In this example, the AI made it plain that Amazon's hiring recommendations depended heavily on gender to a statistically significant degree, even going so far as to score graduates of women's-only colleges as less desirable.

In addition, it sounds like the model Amazon built for recruitment wasn't able to reliably surface qualified candidates for positions.  I'm surprised it was rolled out to recruiters at all.  I know of a statistician who testified in court from the 1970s through the 2000s about whether corporate decisions showed statistically significant discrimination.  The algorithm Amazon produced is really no different in terms of how discrimination can be measured, so it's hard for me to believe this wasn't caught before a recommendations engine was handed to recruiters.  Obviously, this is not the story Amazon would like leaked under any circumstances.
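To make the measurement concrete: the short sketch below (in Python, with entirely made-up counts, not Amazon's data) runs the kind of test an expert witness might use, asking whether a model's recommendation rates differ by gender more than chance alone would explain.

# A minimal sketch of a discrimination significance test.
# The counts are hypothetical, for illustration only.
from scipy.stats import chi2_contingency

# Hypothetical model outcomes, split by applicant gender.
#            recommended  rejected
outcomes = [
    [480, 520],   # men
    [310, 690],   # women
]

chi2, p_value, dof, expected = chi2_contingency(outcomes)

print(f"chi-squared = {chi2:.2f}, p = {p_value:.4g}")
if p_value < 0.05:
    print("Recommendation rates differ by gender more than chance would explain.")
else:
    print("No statistically significant difference at the 5% level.")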

Lessons to learn here:

  • It’s important to be aware that when you build models to mimic current human decisions, human discrimination may become part of the model.
  • Before rolling a model out to people to assist with recommendations, validate that the features it uses to make those recommendations fit your plans for the future (see the sketch after this list).
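As a concrete example of that second lesson, here is a minimal, hypothetical sketch: a toy resume-scoring model trained on past decisions, followed by a look at which terms it penalizes most heavily.  None of this is Amazon's actual system; it just shows the kind of inspection that could have caught the problem before rollout.

# Hypothetical resume-scoring model; data and labels are stand-ins.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of the women's chess club, python developer",
    "led engineering team, java and python experience",
    "women's college graduate, data analysis internship",
    "built distributed systems, ten years experience",
]
labels = [0, 1, 0, 1]  # past hiring decisions used as training targets

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, labels)

# Rank features by coefficient: strongly negative terms act as penalties.
# If gender-correlated terms (e.g. "women") show up here, the model has
# learned the bias baked into the historical decisions it was trained on.
terms = np.array(vectorizer.get_feature_names_out())
order = np.argsort(model.coef_[0])
for term, weight in zip(terms[order][:5], model.coef_[0][order][:5]):
    print(f"{term:15s} {weight:+.3f}")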

https://www.businessinsider.com/amazon-built-ai-to-hire-people-discriminated-against-women-2018-10?utm_source=reddit.com

 
