Twitter’s Photo Cropping Algorithm Draws Heat for Possible Racial Bias

Back in January of 2018, Twitter introduced an auto-cropping AI that detects the most interesting part of your image and crops the ‘preview’ photo to match. This works with everything from airplane wings to people, but as one engineer showed this weekend, it may suffer from some inherent bias.
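Twitter has described the feature as a saliency-based model: a neural network predicts which parts of an image people are most likely to look at, and the preview is cropped around the most salient region. As a rough illustration of that idea only (not Twitter's actual code), here is a minimal Python sketch in which a simple gradient-magnitude heuristic stands in for the learned saliency model; the 600 x 335 px preview size and all function names are assumptions for the example:

```python
# Illustrative sketch of saliency-based preview cropping -- NOT Twitter's code.
# A real system would use a trained saliency-prediction network; here a simple
# gradient-magnitude map stands in for the model's output.
import numpy as np
from PIL import Image

def fake_saliency_map(gray: np.ndarray) -> np.ndarray:
    """Stand-in for a learned saliency model: local gradient magnitude."""
    gy, gx = np.gradient(gray.astype(float))
    return np.hypot(gx, gy)

def crop_to_preview(img: Image.Image, target_w: int = 600, target_h: int = 335) -> Image.Image:
    """Center a target-sized crop window on the most 'salient' pixel."""
    gray = np.array(img.convert("L"))
    sal = fake_saliency_map(gray)
    y, x = np.unravel_index(np.argmax(sal), sal.shape)  # most salient point
    # Clamp the crop window so it stays inside the image bounds.
    left = min(max(x - target_w // 2, 0), max(img.width - target_w, 0))
    top = min(max(y - target_h // 2, 0), max(img.height - target_h, 0))
    return img.crop((left, top, left + target_w, top + target_h))

# Usage (hypothetical filename):
# preview = crop_to_preview(Image.open("stacked_portraits.jpg"))
```

Whatever the model actually scores, the key point is that only one small window survives into the timeline preview, which is why a tall image with two faces forces the algorithm to "choose" one of them.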

Over the weekend, cryptographic engineer Tony Arcieri went viral on Twitter by pointing out an awkward problem with the social network’s auto-cropping algorithm. In what he classified as a “horrible experiment,” he posted two different photos, each of which combined a portrait of Senate Majority Leader Mitch McConnell with a portrait of former President Barack Obama.

For context, these are the photos he used, each of which is roughly 600 x 3000 pixels. Notice the extreme amount of white space between the photo on top and the one on the bottom:
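For anyone curious to recreate the setup, a composite like this can be assembled in a few lines of Pillow. This is a sketch under stated assumptions: the filenames are hypothetical, and the ~600 x 3000 px white canvas simply follows the description above:

```python
# Sketch of how a stacked test image like Arcieri's could be built.
# Filenames are hypothetical; canvas size follows the description above.
from PIL import Image

def stack_with_gap(top_path: str, bottom_path: str,
                   canvas_size=(600, 3000)) -> Image.Image:
    """Place one portrait at the top and one at the bottom of a tall white canvas."""
    canvas = Image.new("RGB", canvas_size, "white")
    top = Image.open(top_path)
    bottom = Image.open(bottom_path)
    canvas.paste(top, (0, 0))                                   # portrait at the top
    canvas.paste(bottom, (0, canvas_size[1] - bottom.height))   # portrait at the bottom
    return canvas

# Usage (hypothetical filenames):
# stack_with_gap("mcconnell.jpg", "obama.jpg").save("test_crop.jpg")
```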

“Which [face] will the Twitter algorithm pick: Mitch McConnell or Barack Obama?” asked Arcieri. In this particular case, using these two images, the answer was always McConnell, no matter which order the photos were stacked in.

Here’s Arcieri’s original post, which has been retweeted over 76K times and liked over 190K times as of this writing:

After the post went viral, Arcieri ran a couple of other experiments to try to address some criticisms and alternative theories that users had brought up. For example, swapping out the red tie for a blue tie did not change the results:

But inverting the colors of the image did:

Finally, another user showed that even if you increase the number of Obamas and remove all the white space between the photos, the same thing happens:

Others have tried reversing the order in which the photos are attached, or reversing the order of the names in the tweet itself, neither of which changed the outcome. However, using a different photo of Obama with a more obvious, high-contrast smile did flip the result every time:

No doubt the experiments will continue as people try to parse what exactly the algorithm is highlighting and whether or not it should be classified as implicit racial bias. In the meantime, Liz Kelley of Twitter Comms responded by thanking Arcieri and the rest of the people who were testing this out, and admitting that they’ve “got more analysis to do.”

“We tested for bias before shipping the model and didn’t find evidence of racial or gender bias in our testing, but it’s clear that we’ve got more analysis to do,” wrote Kelley in a tweet. “We’ll open source our work so others can review and replicate.”

(via Engadget)
