Twitter’s Photo Cropping Algorithm Draws Heat for Possible Racial Bias
Back in January 2018, Twitter introduced an auto-cropping AI that detects the most interesting part of your image and crops the ‘preview’ photo to match. This works with everything from airplane wings to people, but as engineer Tony Arcieri showed this weekend, it may suffer from some inherent bias.
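Twitter hasn’t published the internals of its cropping model, but it is generally understood to be a saliency predictor: it scores each pixel by how likely a viewer is to look at it, then centers the crop on the hottest region. As a rough illustration only (not Twitter’s actual model), here is a minimal sketch using a classical saliency estimator from opencv-contrib-python; the file names and preview dimensions are hypothetical:

```python
# Illustrative sketch of saliency-based cropping, NOT Twitter's model.
# Requires opencv-contrib-python (the saliency module lives there).
import cv2
import numpy as np

def saliency_crop(path, out_w=600, out_h=335):
    image = cv2.imread(path)  # hypothetical input path
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, saliency_map = saliency.computeSaliency(image)
    assert ok, "saliency computation failed"

    # Center the crop window on the most salient pixel, clamped to the frame.
    y, x = np.unravel_index(np.argmax(saliency_map), saliency_map.shape)
    h, w = image.shape[:2]
    left = min(max(x - out_w // 2, 0), max(w - out_w, 0))
    top = min(max(y - out_h // 2, 0), max(h - out_h, 0))
    return image[top:top + out_h, left:left + out_w]

preview = saliency_crop("stacked_portraits.jpg")
cv2.imwrite("preview.jpg", preview)
```

Whatever model Twitter runs in production, the user-visible behavior is the same: of a very tall image, only a small saliency-chosen window survives in the timeline preview.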
For context, each of the photos he used is roughly 600 x 3000 px, with an extreme amount of white space separating the face at the top from the face at the bottom:
“Which [face] will the Twitter algorithm pick: Mitch McConnell or Barack Obama?” asked Arcieri. In this particular case, using these two images, the answer was always McConnell, regardless of the order in which the photos were stacked.
Here’s Arcieri’s original post, which has been retweeted over 76K times and liked over 190K times as of this writing:
Trying a horrible experiment…
Which will the Twitter algorithm pick: Mitch McConnell or Barack Obama? pic.twitter.com/bR1GRyCkia
— Tony “Abolish (Pol)ICE” Arcieri 🦀 (@bascule) September 19, 2020
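Recreating a test image like Arcieri’s is straightforward. Here is a minimal sketch, assuming two portrait files (the file names are hypothetical) and using the approximate 600 x 3000 px canvas described above, so that a timeline preview can only show one of the two faces:

```python
# Build a tall white canvas with one portrait at the top and one at the
# bottom, mimicking the viral test image. File names are hypothetical.
from PIL import Image

def stack_with_whitespace(top_path, bottom_path, width=600, height=3000):
    canvas = Image.new("RGB", (width, height), "white")
    top = Image.open(top_path)
    bottom = Image.open(bottom_path)
    canvas.paste(top, ((width - top.width) // 2, 0))
    canvas.paste(bottom, ((width - bottom.width) // 2, height - bottom.height))
    return canvas

# Generate both stacking orders to check that order doesn't drive the crop.
stack_with_whitespace("mcconnell.jpg", "obama.jpg").save("test_a.jpg")
stack_with_whitespace("obama.jpg", "mcconnell.jpg").save("test_b.jpg")
```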
After the post went viral, Arcieri ran a couple of other experiments to address some criticisms and alternative theories that users had brought up. For example, swapping out the red tie for a blue tie did not change the results:
"It's the red tie! Clearly the algorithm has a preference for red ties!"
Well let's see… pic.twitter.com/l7qySd5sRW
— Tony “Abolish (Pol)ICE” Arcieri 🦀 (@bascule) September 19, 2020
But inverting the colors of the image did:
Let's try inverting the colors… (h/t @KnabeWolf) pic.twitter.com/5hW4owmej2
— Tony “Abolish (Pol)ICE” Arcieri 🦀 (@bascule) September 19, 2020
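Replicating the color-inversion variant on a test image is a one-liner with Pillow (the file name below is hypothetical):

```python
# Invert the colors of a stacked test image, as in the experiment above.
from PIL import Image, ImageOps

inverted = ImageOps.invert(Image.open("test_a.jpg").convert("RGB"))
inverted.save("test_a_inverted.jpg")
```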
Another user showed that even if you increase the number of Obamas and remove all the white space between the photos, the same thing happens:
I wonder what happens if we increase the number of Obamas. pic.twitter.com/sjrlxjTDSb
— Jack Philipson (@Jack09philj) September 19, 2020
Others have tried reversing the order in which the photos are attached, or reversing the order of the names in the tweet itself, neither of which changed the outcome. However, using a different photo of Obama with a more obvious, high-contrast smile did flip the result every time:
Okay, to test this hypothesis, let’s try using an image of Barack with a higher contrast smile. This might do it. pic.twitter.com/AX073Ss2KD
— 🏝Kim Sherrell (@kim) September 20, 2020
No doubt the experiments will continue as people try to parse what exactly the algorithm is highlighting and whether it should be classified as implicit racial bias. In the meantime, Liz Kelley of Twitter Comms responded by thanking Arcieri and the rest of the people who were testing this out, and admitting that they’ve “got more analysis to do.”
thanks to everyone who raised this. we tested for bias before shipping the model and didn't find evidence of racial or gender bias in our testing, but it’s clear that we’ve got more analysis to do. we'll open source our work so others can review and replicate. https://t.co/E6sZV3xboH
— liz kelley (@lizkelley) September 20, 2020
“We tested for bias before shipping the model and didn’t find evidence of racial or gender bias in our testing, but it’s clear that we’ve got more analysis to do,” wrote Kelley in a tweet. “We’ll open source our work so others can review and replicate.”
(via Engadget)