
Tech Bias: Self-driving Cars More Likely to Hit Black People

“Tech companies have a responsibility to ensure that their products are used to strengthen communities, not deepen racial inequities,” Joy Buolamwini said. | Photo: Reuters

Published 9 March 2019
Opinion

A study by researchers at the Georgia Institute of Technology found racial bias in automated cars' object-detection systems.

Self-driving cars are more likely to hit anyone who isn’t white because of algorithmic bias, according to a recent study.


Researchers at the Georgia Institute of Technology in the United States found that the detection systems of automated cars, which rely on sensors and cameras, recognize people with lighter skin tones more easily than those with darker skin.

While the report, “Predictive Inequity in Object Detection,” has its limitations, those limits expose a greater problem: companies don’t make their data available for research like this, which is particularly concerning in matters of public interest.

“In an ideal world, academics would be testing the actual models and training sets used by autonomous car manufacturers,” tweeted Kate Crawford, co-director of the AI Now Institute. “But given those are never made available (a problem in itself), papers like these offer strong insights into very real risks.”

Technological bias is an increasingly common theme as artificial intelligence advances. Infamously, Google was slammed in 2015 when the company’s image-recognition system mislabeled African Americans as “gorillas.”

The authors of the report said the object-detection models they examined had mostly been trained on examples of light-skinned pedestrians, and the models didn’t place enough weight on learning from the few examples of dark-skinned people that were included. Including more examples of Black and Brown people, they noted, could also help resolve the issue.

Algorithmic systems “learn” from the information they’re given. If, for instance, they don’t receive a sufficient number of examples of Black women during the learning stage, they’ll have a harder time recognizing them when deployed.
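The reweighting the authors point to is a standard technique in modern training pipelines. The sketch below is a minimal illustration of that general idea in PyTorch, not the study’s actual code; the class counts, labels, and variable names are hypothetical. It shows how an inverse-frequency weight on the loss makes errors on an underrepresented group count for proportionally more during training.

```python
# Minimal sketch (hypothetical data, not the study's code): when one group
# dominates the training set, an unweighted loss lets errors on the majority
# group dominate the gradient. Inverse-frequency weighting rebalances this.

import torch
import torch.nn as nn

# Hypothetical pedestrian-detection training set: 9,000 labeled pedestrians
# with lighter skin tones (class 0), 1,000 with darker skin tones (class 1).
class_counts = torch.tensor([9000.0, 1000.0])

# Unweighted loss: every example contributes equally, so the minority group's
# few examples barely influence what the model learns.
unweighted_loss = nn.CrossEntropyLoss()

# Inverse-frequency weights: mistakes on the underrepresented group cost
# more, roughly equalizing each group's total influence on training.
weights = class_counts.sum() / (len(class_counts) * class_counts)
weighted_loss = nn.CrossEntropyLoss(weight=weights)

# Toy batch of model scores (logits) and true group labels, mirroring the
# 90/10 imbalance above.
logits = torch.randn(8, 2)
labels = torch.tensor([0, 0, 0, 0, 0, 0, 1, 1])

print("unweighted:", unweighted_loss(logits, labels).item())
print("weighted:  ", weighted_loss(logits, labels).item())
```

Collecting more examples of underrepresented groups, the paper’s other suggestion, attacks the same imbalance from the data side rather than the loss side.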

Joy Buolamwini, a Ghanaian-American computer scientist and digital activist at the Massachusetts Institute of Technology (MIT), calls it the “coded gaze.” She founded the Algorithmic Justice League, an organization challenging bias in decision-making software.

“As AI technology continues to evolve, tech companies have a responsibility to ensure that their products are used to strengthen communities, not deepen racial inequities,” Buolamwini told the Ford Foundation.
