An AI trained on racist data will mirror the racism of its input dataset.

Imagine that you create an AI to determine from a video whether someone is lying. If its training dataset is human-curated and labeled with racist tendencies (for example, people who look a certain way are labeled as lying more often even when that isn't true), then the AI will learn that bias.
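A toy sketch of that label bias, with made-up numbers: suppose both groups actually lie at the same 20% rate, but annotators mislabel some truthful clips from one group as lying. A model that simply learns the labeled lying rate per group inherits the annotators' bias, not reality. (The groups, rates, and `annotate` function here are all hypothetical.)

```python
import random

random.seed(0)

# Hypothetical ground truth: both groups lie at the same 20% rate.
def true_label():
    return random.random() < 0.20

# Biased annotator: for group "A", 30% of truthful clips get mislabeled
# as lying; group "B" clips are labeled faithfully.
def annotate(group, lying):
    if group == "A" and not lying and random.random() < 0.30:
        return True
    return lying

data = [(g, annotate(g, true_label()))
        for g in ("A", "B") for _ in range(10_000)]

# "Training" = learning the labeled lying rate per group.
for g in ("A", "B"):
    labels = [lying for group, lying in data if group == g]
    print(g, round(sum(labels) / len(labels), 2))
```

Even though the underlying behavior is identical, the model ends up believing group A lies at roughly double group B's rate.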

But even a perfectly accurate dataset can train a racist AI. Imagine that the previous dataset contains lying examples almost exclusively for people who look a certain way (or the vast majority of that group's examples are lying), while another group lies in only 10% of its examples. The AI will probably extrapolate that everyone in the first group is lying, because it has seen no (or few) counterexamples.
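The sampling-bias case can be sketched the same way, with hypothetical numbers: every individual label below is accurate, but group A's examples were collected almost entirely from lying clips. A minimal classifier that predicts the majority label it saw for each group then flags everyone in group A.

```python
from collections import Counter, defaultdict

# Every label is "true", but the sampling is skewed: group A's clips
# were gathered almost entirely from lying examples, group B's at a
# realistic 10% lying rate. (Illustrative numbers, not real data.)
train = ([("A", True)] * 95 + [("A", False)] * 5
         + [("B", True)] * 10 + [("B", False)] * 90)

# Minimal classifier: predict the majority label seen per group.
counts = defaultdict(Counter)
for group, lying in train:
    counts[group][lying] += 1

def predict(group):
    return counts[group].most_common(1)[0][0]

print(predict("A"))  # everyone in group A gets flagged as lying
print(predict("B"))
```

No single label was wrong, yet the model's behavior is exactly the extrapolation described above: with no counterexamples for group A, "lying" becomes its default answer for that group.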
