Ameliorating Algorithmic Bias, or Why Explainable AI Needs Feminist Philosophy
About the Article
Artificial Intelligence (AI) is making big decisions these days—who gets a job interview, which videos pop up in your feed, even how policing works. It sounds futuristic, but there’s a catch: AI isn’t neutral. It’s created by humans using human data, which means all the old biases—racism, sexism, classism—can sneak right in.
In this 2022 article, a group of researchers from Taiwan challenge the idea that we can fix AI bias with just technical tools. Their bold claim? If we really want AI to be fair, we need to bring in feminist philosophy. Yep—philosophy in the lab. And not just any philosophy—feminist epistemology, which explores how our social positions shape what and how we know.
The article takes on Explainable AI (XAI), a popular approach meant to make AI decisions more transparent. But here's the twist: most XAI strategies still rely on a narrow group of experts, usually engineers and computer scientists. That's a problem, because when bias shows up, it might not be obvious to people who've never experienced discrimination. The authors argue that unless you include voices from diverse, often marginalized communities, your "explanation" is probably missing the point.
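To make "transparent" concrete, here is a minimal sketch of the kind of feature-attribution explanation XAI tools produce. Everything in it is invented for illustration: the model, its weights, and the applicant are hypothetical. Real tools such as LIME or SHAP estimate contributions for far more complex models, but the output takes the same basic shape.

```python
# Hypothetical linear hiring model: score = sum(weight * feature) + intercept.
# All weights and feature names are invented for illustration.
weights = {
    "years_experience": 0.40,
    "gap_in_employment": -0.90,  # penalizes career gaps
    "elite_university": 0.70,
}
intercept = 0.10

applicant = {"years_experience": 6, "gap_in_employment": 1, "elite_university": 0}

# The "explanation": each feature's contribution to this one decision.
contributions = {name: weights[name] * applicant[name] for name in weights}
score = sum(contributions.values()) + intercept

for name, value in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"{name:>20}: {value:+.2f}")
print(f"{'total score':>20}: {score:+.2f}")
```

Notice what the sketch shows: the printout is perfectly "transparent," yet only a reader who knows that employment gaps often track caregiving, which falls disproportionately on women, will flag gap_in_employment as a biased proxy rather than a neutral signal. That is exactly the authors' worry about leaving interpretation to a narrow circle of experts.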
They call for a new model: Integrated XAI, which invites programmers, philosophers, community leaders, ethicists, and more into the same room. The goal? Build smarter systems by blending tech knowledge with real-world lived experience. This isn’t about blaming tech—it’s about doing better, together.
Before You Read
When you think of bias, you might picture a person making a bad or unfair judgment. But what if the bias is built into the system? What if a hiring algorithm is trained on data that favors men for leadership roles—just because that’s how it’s always been?
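The mechanism is easy to demonstrate. Here is a deliberately tiny sketch; the "historical" records and the model are invented, and real systems usually learn the same pattern from subtler proxies (school names, employment gaps, zip codes) rather than from gender directly.

```python
# Toy sketch: how biased history becomes a biased "model".
# Every record below is invented for illustration.

# Past promotions to leadership: (candidate_gender, promoted)
history = [
    ("m", 1), ("m", 1), ("m", 1), ("m", 0),
    ("f", 0), ("f", 1), ("f", 0), ("f", 0),
]

# The simplest possible "trained" model: score each group by its
# historical promotion rate.
def promotion_rate(gender):
    outcomes = [promoted for g, promoted in history if g == gender]
    return sum(outcomes) / len(outcomes)

model = {gender: promotion_rate(gender) for gender in ("m", "f")}

# Two equally qualified new candidates now get different scores, purely
# because the training data encodes past discrimination.
for gender, score in model.items():
    print(f"candidate ({gender}): predicted fit {score:.2f}")
# candidate (m): predicted fit 0.75
# candidate (f): predicted fit 0.25
```

No one wrote a sexist rule here; the model simply learned the regularity in its data. That is what it means for bias to be built into the system.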
This article digs into that kind of bias—algorithmic bias—and asks a big question: how do we make AI systems that are actually fair? The authors argue that we can’t just throw more code at the problem. We need to think about power, perspective, and whose voices are (or aren’t) shaping technology.
Before diving in, consider this: when we talk about “fixing” AI, who gets to decide what counts as a fix? And who is most affected when AI gets it wrong?
Guiding Questions
- What is algorithmic bias, and why is it more than just a technical problem?
- What are the limitations of current Explainable AI (XAI) approaches?
- How does feminist philosophy—especially the idea of situated knowledge—help us understand bias in AI?
- What does Integrated XAI propose, and why is collaboration across disciplines important for building better AI?