Snapchat’s “lenses,” more colloquially known as selfie filters or just “filters,” may seem like a totally inane feature. But it turns out the facial recognition technology behind them is advanced, impressive… and a tad scary.
Snapchat’s filters are the brainchild of a Ukrainian startup called Looksery, which Snapchat acquired for a record-setting $150M (well… record-setting in Ukraine). Unfortunately, Snapchat won’t let anybody talk to those engineers directly, but Vox recently went digging through their patents to figure out how the tech works. The video above reveals what Vox found and how these ‘silly’ filters actually work.
At the most basic level, the app uses computer vision to spot you based on contrast patterns typically seen in and around a human face; however, that’s not specific enough to identify, for example, the border of your lips or where to put that dog nose.
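That contrast-based detection step is classically done with Haar-like features computed over an integral image (a summed-area table), which is the approach described in the Viola-Jones face detector; the patents don't spell out Snapchat's exact method, so this is a hedged sketch of the general idea: an "eyes darker than cheeks" contrast check on a synthetic patch.

```python
import numpy as np

def integral_image(img):
    """Summed-area table: lets us sum any rectangle in O(1)."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, h, w):
    """Sum of pixels in a rectangle, read off the integral image."""
    total = ii[top + h - 1, left + w - 1]
    if top > 0:
        total -= ii[top - 1, left + w - 1]
    if left > 0:
        total -= ii[top + h - 1, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

# Synthetic 8x8 grayscale "face patch": a dark eye band (rows 2-3)
# above brighter cheeks -- the kind of contrast pattern a detector
# looks for in and around a human face.
patch = np.full((8, 8), 200.0)
patch[2:4, :] = 50.0

ii = integral_image(patch)

# Haar-like feature: sum of the cheek band minus sum of the eye band.
eye_band = rect_sum(ii, 2, 0, 2, 8)
cheek_band = rect_sum(ii, 4, 0, 2, 8)
contrast = cheek_band - eye_band
print(contrast > 0)  # positive contrast -> face-like pattern
```

A real detector slides thousands of such features over the frame at multiple scales, which is why it can find *a* face quickly but can't tell you where your lip line is.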
To get to that level of specificity, Snapchat trained the system using hundreds (quite possibly thousands) of faces that were manually marked with points to show where the borders of lips, eyes, nose, and face are. The trained application can then take that point-mask and shift it to match your individual face based on the data it’s getting from your camera at 24 frames per second.
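The "shift it to match your face" step can be pictured as placing an average point-mask into the box the coarse detector found, then refining from there. Here's a minimal sketch of that initial placement; the landmark positions and the four-point mask are hypothetical, purely for illustration.

```python
import numpy as np

# Hypothetical mean "point-mask": average landmark positions learned
# from many hand-annotated faces, in a normalized 0..1 coordinate frame.
mean_mask = np.array([
    [0.30, 0.35],  # left eye
    [0.70, 0.35],  # right eye
    [0.50, 0.55],  # nose tip
    [0.50, 0.75],  # mouth center
])

def fit_mask(mask, face_box):
    """Scale and shift the normalized mask into a detected face box
    (x, y, width, height) reported by the coarse face detector."""
    x, y, w, h = face_box
    return mask * np.array([w, h]) + np.array([x, y])

# A face detected at (100, 80) with size 200x240 in the camera frame.
landmarks = fit_mask(mean_mask, (100, 80, 200, 240))
print(landmarks[2])  # nose tip lands at [200. 212.]
```

In a real pipeline a trained regression model then nudges each point toward the true borders using local image features, repeating every frame so the mask tracks your face as you move.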
The final step is to create a mesh from that point-mask: a mesh that can move with you or trigger an animation when you open your mouth or raise your eyebrows.
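The mesh itself is just triangles indexed into the landmark points, so moving the points moves the graphics pinned to them; a trigger like "mouth open" reduces to geometry on those same points. A hedged sketch (the landmark names and threshold are made up, not from Snapchat's patents):

```python
def mouth_open(landmarks, threshold=0.05):
    """Fire an animation trigger when the gap between the upper- and
    lower-lip landmarks exceeds a fraction of the face height.
    Landmark names and threshold are hypothetical."""
    gap = landmarks["lower_lip"][1] - landmarks["upper_lip"][1]
    face_height = landmarks["chin"][1] - landmarks["brow"][1]
    return gap / face_height > threshold

# Per-frame landmark positions in pixels (y grows downward);
# in a real app these update ~24 times per second.
closed = {"brow": (0, 100), "upper_lip": (0, 230),
          "lower_lip": (0, 235), "chin": (0, 300)}
opened = {"brow": (0, 100), "upper_lip": (0, 220),
          "lower_lip": (0, 260), "chin": (0, 300)}

print(mouth_open(closed), mouth_open(opened))  # False True
```

Normalizing by face height keeps the trigger stable whether you're close to the camera or far away.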
Of course, all of this facial recognition has a slightly scary Minority Report-like component, which is mentioned at the very end of the video. But whether you’re terrified of facial recognition or excited by the potential for tech like this to improve things like portrait autofocus or automatic selection/masking, it’s fascinating to get a peek at what and how exactly your smartphone’s camera “sees” you.