Complexity Simplified #7: Why is AI Biased? (And Can We Fix It?)

  • Writer: Amir Bder
  • 2 days ago
  • 2 min read

Welcome back to Complexity Simplified at TheComplexWorldOfAI. We’ve talked about how AI thinks, codes, and even makes art. But today, we’re tackling the elephant in the room: Why is AI sometimes "mean" or "unfair"?

If you’ve ever noticed an AI giving a stereotypical answer or favoring one group of people over another, you’ve seen AI Bias in action. Here is why it happens.


The Tech Talk

Machine Learning models are trained on massive datasets (like the internet). If the training data contains historical, social, or statistical imbalances, the model will codify these biases into its weights. This results in algorithmic bias, where the output reflects the prejudices of the source material rather than objective reality.
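To make this concrete, here is a minimal sketch of the idea. The "dataset" and the doctor/pronoun example below are invented purely for illustration; real models learn far more complex patterns, but the core mechanism, frequencies in the data becoming the model's learned behavior, is the same.

```python
from collections import Counter

# A made-up "training dataset" with a built-in imbalance:
# 90 sentences pair "doctor" with "he", only 10 with "she".
training_data = (
    ["the doctor said he"] * 90 +
    ["the doctor said she"] * 10
)

# A naive "model": its learned weights are just the counted frequencies.
counts = Counter(training_data)
total = sum(counts.values())

for sentence, count in counts.items():
    print(f"{sentence!r}: learned probability {count / total:.0%}")

# Asked to complete the sentence, the model picks the most frequent
# continuation -- faithfully reproducing the imbalance in its data.
most_likely = counts.most_common(1)[0][0]
print("Most likely completion:", most_likely)
```

Nothing in this toy model is "prejudiced". It simply turned a skewed dataset into skewed output, which is exactly what algorithmic bias means.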

In other words...


The Mirror and the Library

Imagine a library where 90% of the books say that "All Dragons are Green." If you ask a robot who lived in that library to draw a dragon, it will always draw it green. It doesn't "know" what a dragon is; it just knows what the books said.

The Internet is the Library. AI is trained on everything we’ve ever written—the good, the bad, and the ugly.

  • The Mirror Effect: AI is essentially a mirror. If the mirror is reflecting a world where certain jobs are usually held by men, or certain cultures are ignored, the AI will reflect that same image back at you.

  • The "Average" Problem: AI tries to find the most likely answer. If it sees a pattern 1,000 times and a different pattern 10 times, it might ignore the 10 and assume the 1,000 is the only "right" way.
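You can see the "Average" Problem in a few lines of code. The numbers here are just the dragon analogy made literal (1,000 green books, 10 red), not real data:

```python
from collections import Counter

# The "library": 1,000 books say dragons are green, 10 say red.
books = ["green"] * 1000 + ["red"] * 10
counts = Counter(books)

# If the robot only ever gives the single most likely answer,
# the minority pattern disappears entirely:
print("The robot's dragon is always:", counts.most_common(1)[0][0])

# The red books haven't vanished from the library -- they're just
# rare enough that the "most likely answer" never mentions them.
red_share = counts["red"] / sum(counts.values())
print(f"Red dragons appear in about {red_share:.1%} of the books")
```

Notice the difference between the two numbers: red dragons exist in roughly 1% of the data, but in 0% of the robot's answers. Picking only the "most likely" answer doesn't just shrink minority perspectives, it can erase them.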

Why This Matters for You (students)

As a student using AI for research or projects, bias can sneak into your work without you realizing it.

  • The Old Way (Human Research): You might read five different books and notice they have different perspectives.

  • The New Way (AI): You ask an AI a question, and it gives you one "perfect" answer. But that answer might be missing the perspective of an entire culture or group because the AI didn't "read" enough about them.

The Bottom Line

  • AI isn't "thinking" when it's being biased; it's just repeating what it learned from us.

  • Data is the Diet: If we feed AI "junk food" (biased data), it will output "junk" results.

The future of AI isn't just about making it smarter; it's about making it fairer. We have to teach the AI to look at the whole library, not just the loudest books.
