Most, if not all, Artificial Intelligence systems learn through training: they are shown examples so that they can carry out their core functions when used in the real world. In the case of facial recognition technology, an algorithm trained on, say, 10,000 faces drawn overwhelmingly from one skin color will be biased against everyone else.
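To make the point concrete, here is a toy sketch (not any real system, and the numbers are invented): a naive "face detector" fits a single brightness threshold from its training sample. Because the sample is heavily skewed toward one group, the learned threshold works for that group and fails entirely for the other.

```python
# Toy illustration of dataset skew, not a real face detector.
# Pretend each face reduces to one brightness-like feature:
# group A faces sit near 0.8, group B faces near 0.3 (invented values).

def fit_threshold(samples):
    """Naive detector: threshold halfway between the mean training
    face value and the background value (assumed 0.0)."""
    mean = sum(samples) / len(samples)
    return mean / 2

# Skewed training set: 9,900 group-A faces, only 100 group-B faces.
train = [0.8] * 9900 + [0.3] * 100
threshold = fit_threshold(train)  # lands near 0.4, calibrated to group A

def detects(face_value):
    return face_value > threshold

print(detects(0.8))  # group A face: True
print(detects(0.3))  # group B face: False -- never detected at all
```

Nothing about the algorithm is malicious; the bias comes entirely from what it was shown during training, which is the crux of the problem described below.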
I recently came across a tweet describing a problem with Zoom’s facial recognition software: it erased a Black user’s head whenever he used what is called a Virtual Background in the popular video conferencing app.
Credit: Colin Madland
According to Colin Madland, Zoom apparently has trouble recognizing darker skin tones; it would rather pick up a pale globe in the background and decide it is a more likely face than the obvious one in front of the camera.
This problem is nothing new; it has existed for as long as Artificial Intelligence has been deployed, and both government and private reports suggest facial recognition systems have contributed to wrongful convictions in a number of American cases.
Besides the fact that this causes issues with commonplace activities, how do we guarantee equal opportunity across race in a world run by Artificial Intelligence systems?
AI is already ubiquitous and powerful, and it becomes more so every year. Every time we use an app or a computer, we have our decisions and experiences shaped by AI. What we see on Facebook is organized by AI. Our email spam filters are run by AI. Our credit card companies use AI to detect fraud. When we talk to Siri or Alexa, it is AI that translates our words into computer commands. AI makes our lives more comfortable, more efficient, and safer.
Except when it doesn’t.
It is disturbing to read computer scientist Janelle Shane’s statement, “Multiple companies already offer AI-powered résumé-screening or video-interview-screening services, and few offer information about what they’ve done to address bias.”
Are these screening systems increasing or decreasing racial discrimination in hiring? Apparently, no one knows.