Originally posted to Facebook on November 14, 2015 and written in response to this post on Blog Baladi. I am working on a series of pieces about implicit bias and technology. If you have tips, leads, or other resources to share, feel free to email me here.
Technology shapes our reality. We inherit its presumptions, its biases about the world, and we change in response. You might say, “but that’s true of everything, right? What we pay attention to, who we talk to, how and where and when we were raised (our skin color, our genitals, our eye color, which hand we write with, how we present our gender, our education, our parents’ education…) — they all shape what we experience and how we experience it.”
Yes. But. Though there are times we forget that our experiences are not universal but singular, to a certain extent we know that we experience the world differently. (In 2015, this is more apparent than ever…to the white and wealthy people who are finally joining this party.)
But experiential presumption is insidious in tech. It’s insidious because most uses of technology start from the notion that tech is objective.
Tech is not objective. Tech is made by people. People are not objective.
People infect what we create with the way we think. We neutralize that infection (or learn what IS good and universal and should grow) by working with people who are not exactly like us.
When Facebook lets you mark yourself safe in Paris but not in Beirut, Facebook shows us how THEY think about the world and reveals the bias in the experience they bring to us. They also show us how they are changing our thinking, how they encourage us to favor Western (European) countries above others. They reveal what they think about tragedy and — I think it’s fair to say — they reveal their biases about whose lives count.
Remember, this isn’t about conniving EXPLICIT manipulation. (You’ve read about that stuff in the news already.) This is about IMPLICIT bias. This is about experiential presumption. Of course EXPLICITLY, Facebookers don’t think people in Beirut matter less. But our explicit thoughts are poor predictors of our actual emotions, attachments, and actions. (If you’re white and you still think you’re not racist, I’m looking at you.)
It doesn’t matter how “inclusive” a tech platform tries to be. What matters is what its creators do to not be exclusive — what *actions* they take to neutralize biases, whether or not they see them.
Step back and look at what this situation is. It’s not just about a stupid button on a stupid social media site. We are literally talking about how we account for human lives here. This is not a toy and this is not a small mistake.
To read the post that inspired the tweets and Facebook post this blog is based on, head here — and please do. I think it’s important to note that I didn’t and wouldn’t have noticed the lack of a safety button for Beirut because of limits in my social network as well as my own biases.