This AI Camera Protects Your Privacy by Only Recording Specific Targets
Scientists from UCLA are attempting to address issues of privacy by developing a new artificial intelligence (AI) camera that only records specific targets and actively erases everything else.
As digital cameras have become nearly ubiquitous, privacy concerns have risen in kind. As explained by Science Blog, some have tried to address these concerns with blurring or data encryption, but those approaches do not solve the underlying problem of data exposure: the raw footage is still captured before any processing takes place.
But UCLA professor Aydogan Ozcan and a group of fellow scientists have developed a smart camera system that sidesteps this issue entirely: it records only desired objects or subjects and instantaneously erases everything else at the point of capture, with no additional post-processing required.
In the research paper, the scientists describe a camera design that performs class-specific imaging of target objects with instantaneous all-optical erasure of other classes of objects — in short, a camera that only sees and records what it is told to look for.
The new camera design uses what the scientists call “diffractive computing,” which images a target or class of objects in high fidelity while erasing all other objects that do not match that target.
The camera is made up of a series of diffractive layers that are each optimized by deep learning. After they have been trained, each layer is assembled together into a 3D system that forms what the scientists call a “computational imager” between the input field of view and the output plane.
“This camera design is not based on a standard point-spread function, and instead the 3D-assembled diffractive layers collectively act as an optical mode filter that is statistically optimized to pass through the major modes of the target classes of objects, while filtering and scattering out the major representative modes of the other classes of objects (learned through the data-driven training process),” the scientists explain.
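The paper's actual forward model is not reproduced in the article, but conceptually each diffractive layer is a phase mask whose pattern is learned in training, and light propagates freely between the layers before reaching the detector. Below is a minimal, untrained sketch in Python/NumPy of that cascade, assuming an angular-spectrum propagation model; the wavelength, layer spacing, pixel pitch, and mask sizes are illustrative placeholders, not values from the paper.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dz, dx):
    """Propagate a 2-D complex field a distance dz via the angular-spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    # Transfer function in the spatial-frequency domain; evanescent waves are dropped.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(1j * 2 * np.pi / wavelength * dz * np.sqrt(np.maximum(arg, 0.0)))
    H[arg < 0] = 0.0
    return np.fft.ifft2(np.fft.fft2(field) * H)

def diffractive_camera(input_field, phase_masks, wavelength=750e-9, dz=3e-3, dx=200e-6):
    """Cascade of phase-only diffractive layers acting as a learned optical mode filter.

    In the real system the masks would be optimized by deep learning so that
    target-class modes pass through while other classes scatter into noise;
    here the masks are simply whatever arrays the caller supplies.
    """
    field = input_field.astype(complex)
    for mask in phase_masks:
        field = angular_spectrum_propagate(field, wavelength, dz, dx)
        field = field * np.exp(1j * mask)  # each layer modulates only the phase
    field = angular_spectrum_propagate(field, wavelength, dz, dx)
    return np.abs(field) ** 2  # the detector records intensity only

# Hypothetical five-layer stack with random (untrained) phase masks.
rng = np.random.default_rng(0)
masks = [rng.uniform(0, 2 * np.pi, (64, 64)) for _ in range(5)]
scene = np.zeros((64, 64))
scene[24:40, 24:40] = 1.0  # toy input object
image = diffractive_camera(scene, masks)
```

Training would replace the random masks with ones optimized so that the output intensity stays high for target-class inputs and collapses to low-level noise for everything else; that optimization step is what the researchers perform with deep learning.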
If an object that does not match the desired target is placed in front of the camera, it is optically erased and reduced to what is described as non-informative, low-intensity patterns: essentially what photographers would call noise.
To test the camera, the scientists presented it with a set of handwritten digits and instructed it to look only for the number two. The final five-layer output captured only that desired digit while reducing the other numbers in the frame to meaningless visual noise.
The scientists say this design not only protects privacy far better than a conventional camera, but also uses less power than current methods, since no post-processing is necessary.
The camera presented in the research requires each layer to be meticulously trained and then stacked, making it impractical for large-scale deployment for now. The scientists acknowledge this, but say their work can inform the design of camera systems to come.
“The teachings of this diffractive camera design can inspire future imaging systems that consume orders of magnitude less computing and transmission power as well as less data storage, helping with our global need for task-specific, data-efficient and privacy-aware modern imaging systems,” the researchers say.
The hope is that this technology could replace workflows that currently require someone to manually blur out areas of footage for the sake of privacy, by simply never recording that content in the first place.
The full research paper, titled “To image, or not to image: class-specific diffractive cameras with all-optical erasure of undesired objects,” can be read on Springer Open.
Image credits: Bijie Bai, Yi Luo, Tianyi Gan, Jingtian Hu, Yuhang Li, Yifan Zhao, Deniz Mengu, Mona Jarrahi, and Aydogan Ozcan