Twitter has just announced that auto-cropped photo previews on the social networking service will produce much better results, thanks to a new neural network trained for the task.
Twitter has been a platform for photo sharing since 2011, but cropping shared photos into neat previews has been a challenge for developers. One strategy the service previously used was to employ face detection and crop around the most prominent face in each photo.
The problem is that many shared photos don’t contain faces, and those photos could often end up as “awkwardly cropped preview images.”
For its latest attempt at improving the photo cropping system, Twitter is teaching its AI new tricks. Instead of looking for faces, the AI will hunt for “salient” regions of photos — the areas people are most likely to look at when freely gazing at the picture.
“Academics have studied and measured saliency by using eye trackers, which record the pixels people fixated with their eyes,” Twitter says. “In general, people tend to pay more attention to faces, text, animals, but also other objects and regions of high contrast.
“This data can be used to train neural networks and other algorithms to predict what people might want to look at. The basic idea is to use these predictions to center a crop around the most interesting region.”
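The basic idea Twitter describes — centering a crop on the most interesting region — can be sketched in a few lines of NumPy. This is an illustrative assumption of how such a crop might work, not Twitter’s actual implementation: it assumes the network has already produced a per-pixel saliency map and simply centers a fixed-size window on the peak, clamped to the image bounds.

```python
import numpy as np

def crop_around_salient_point(image, saliency, crop_h, crop_w):
    """Center a (crop_h, crop_w) crop on the most salient pixel.

    `saliency` is assumed to be a per-pixel score map the same
    height/width as `image` (a hypothetical model output).
    """
    # Find the coordinates of the saliency peak.
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    h, w = image.shape[:2]
    # Clamp the top-left corner so the crop window stays inside the image.
    top = min(max(y - crop_h // 2, 0), h - crop_h)
    left = min(max(x - crop_w // 2, 0), w - crop_w)
    return image[top:top + crop_h, left:left + crop_w]
```

A real system would likely crop around the most salient *region* rather than a single peak pixel, but the clamping logic — keeping the window centered on the interesting area while staying inside the frame — is the same.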
Here are some before-and-after examples comparing Twitter’s old, badly done crops with the results from the new neural network:
Neural networks that deal with saliency are usually too slow to crop massive volumes of photos in real time, but Twitter developed optimizations that let its system run 10 times faster than standard methods. The result is that the AI can intelligently crop all photos as soon as they’re uploaded.
This new and improved cropping is now being rolled out to Twitter users across the Web and on Twitter’s iOS and Android smartphone apps.