Over the last couple of years we have witnessed a growing disconnect between where technology is heading and where we want our culture to be, and this has been particularly true for AI. We have begun to understand that AI is biased and that many of the processes we use to build AI are fundamentally flawed. And although there are ways to begin integrating more equity into AI, there are still some tough questions we need to answer.
But the answers to these questions cannot be found solely within data and computer science themselves. Rather, cultural studies and philosophy provide important perspectives on how we must position our automated technologies. While our technological tools are often described as agents that guarantee the objectivity of the data they generate, it is the cultural context in which a technology is developed that shapes it. No technology exists in a cultural vacuum or without bias. As such, we need an interdisciplinary approach to ensure that automated technologies are fair, equitable, and just. This talk offers cultural analyses and perspectives to empower anyone engaging in the discourse around mitigating algorithmic bias, and we will discuss the different viewpoints that support that discourse on confronting algorithmic oppression.