I got curious about computer science when I was nine years old. I was watching PBS and they were interviewing someone from MIT who had created a social robot named Kismet. It had big ears that moved. It could smile. I didn’t know you could do that with machines. So, from when I was little, I had in my mind that I wanted to be a robotics engineer and I was going to MIT.
Eventually, I did reach MIT, but I went to Georgia Tech for my undergraduate degree. There, I was working to get a robot to play peekaboo, because social interactions show some forms of perceived intelligence. It was then that I learned about coded bias: Peekaboo doesn’t work when your robot doesn’t see you.
[A few years later] at MIT, when I was creating [a robot] that would say, “Hello, beautiful,” I was struggling to have it detect my face. I tried drawing a face on my hand, and it detected that. I happened to have a white [Halloween] mask in my office. I put it on [and it detected that]. I was literally coding in whiteface. I did another project where you could “paint walls” with your smile. Same issue: Either my face wasn’t detected or, when it was, I was labeled male.
Cathy O’Neil’s book Weapons of Math Destruction talks about the ways technology can work differently on different groups of people or make certain social conditions worse; reading it made me feel less alone. [At the time], I was a resident tutor at Harvard and the dining hall workers were on strike. I heard people protesting and wondered: Do I just follow this comfortable path that I’m on, or might I take the risk and fight for algorithmic justice?
I changed my research focus and started testing different systems that analyze faces. That became my MIT master’s work, [a project] called Gender Shades. I collected a data set of members of parliament from three African countries and three European countries, and found that AI systems worked better overall on lighter-skinned faces. Across the board, they worked worst on people most like me: darker-skinned women.
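The methodological move behind that finding was disaggregated evaluation: reporting accuracy broken out by intersecting subgroups (skin type crossed with gender) rather than as a single aggregate score, which can hide exactly the disparities described above. The sketch below illustrates that kind of per-subgroup tally on made-up data; the records, field names, and group labels are hypothetical placeholders, not the actual benchmark data or any vendor’s API.

```python
# A minimal sketch of disaggregated accuracy evaluation, using hypothetical data.
from collections import defaultdict

# Hypothetical records: each face has a true gender label, the classifier's
# prediction, and a skin-type group (e.g., binned from the Fitzpatrick scale).
records = [
    {"skin": "lighter", "gender": "male",   "predicted": "male"},
    {"skin": "lighter", "gender": "female", "predicted": "female"},
    {"skin": "darker",  "gender": "male",   "predicted": "male"},
    {"skin": "darker",  "gender": "female", "predicted": "male"},  # misclassified
]

# Tally correct predictions per (skin type, gender) subgroup instead of one
# overall number, so gaps between subgroups become visible.
correct = defaultdict(int)
total = defaultdict(int)
for r in records:
    key = (r["skin"], r["gender"])
    total[key] += 1
    correct[key] += r["predicted"] == r["gender"]

for key in sorted(total):
    print(f"{key}: accuracy = {correct[key] / total[key]:.0%}")
```

On real data, this same per-subgroup breakdown is what surfaces a gap that an aggregate accuracy figure would average away.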