Visionary Claims it ‘Surpasses Human Vision’ in Mobile Imaging


Low-light photography and videography have improved considerably over the last decade, but Visionary says its collaboration with Qualcomm could deliver a generational leap in quality.

The Israeli startup announced its collaboration with Qualcomm in October 2023 at the Snapdragon Summit in Hawaii, saying that the Snapdragon 8 Gen 3 processor would support Visionary’s night vision technology. The results should mean visibility beyond what the human eye can see in the same conditions, illuminating a scene as dark as 0.2 lux.

It’s all the more interesting because smartphones are somewhat hindered by their smaller image sensors and pixels compared to interchangeable lens cameras, despite pixel binning trying to mitigate that. It’s a constant battle to reduce noise and produce a good image, which manufacturers have been working on through a mix of bracketing and AI-driven processing.
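The noise benefit of pixel binning mentioned above can be sketched in a few lines. This is a simplified illustration, not any manufacturer's actual pipeline: averaging each 2x2 block of pixels trades resolution for signal-to-noise ratio, since averaging four uncorrelated noise samples roughly halves the noise.

```python
import random
import statistics

def bin_2x2(frame):
    """Average each 2x2 block of pixels into one larger 'virtual' pixel.

    This is the basic idea behind pixel binning: trading resolution for
    a better signal-to-noise ratio, because averaging four readings of
    uncorrelated noise cuts its standard deviation roughly in half.
    """
    h, w = len(frame), len(frame[0])
    return [
        [(frame[y][x] + frame[y][x + 1] +
          frame[y + 1][x] + frame[y + 1][x + 1]) / 4.0
         for x in range(0, w, 2)]
        for y in range(0, h, 2)
    ]

# Simulate a flat grey scene (true value 100) with additive sensor noise.
random.seed(0)
noisy = [[100 + random.gauss(0, 10) for _ in range(64)] for _ in range(64)]

flat = [p for row in noisy for p in row]
binned_flat = [p for row in bin_2x2(noisy) for p in row]

print(round(statistics.stdev(flat), 1))         # noise before binning (~10)
print(round(statistics.stdev(binned_flat), 1))  # roughly halved after binning
```

Real sensors bin same-colored pixels under a quad-Bayer filter rather than raw neighbors, but the statistical effect is the same.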

Video is not as simple, since it requires increasing brightness and reducing noise simultaneously for each frame in quick succession. Google is tackling that with its Video Boost feature, introduced on the Pixel 8 Pro, which captures a second “RAW-ish” file of the recording, uploads it to the cloud via Google Photos, and then delivers an edited version with better dynamic range. Google developed it specifically to improve low-light and night video, but it currently has a 10-minute limit and must be manually enabled for every recording. It’s also not clear whether other Pixel devices in the current lineup will gain access to the feature. Visionary claims to produce better results in real time, negating the need for the cloud or any recording limits. It does so by prioritizing noise reduction above all else, applying advanced AI to its de-noising algorithms.

“Other than Apple, companies are all doing this with algorithms, but not with advanced AI,” says Yoav Taieb, co-founder and CTO at Visionary, in an interview with PetaPixel. “Apple is also using AI, but our recent results show that we’ve succeeded in surpassing their results. What’s also cool to see is that we’ve surpassed Apple’s newest phone — the iPhone 15 Pro Max, even with a much older, lower-cost sensor. That means we surpassed the newest iPhone even though it has a far superior sensor and optics.”

Except Taieb and his team weren’t able to do an “apples-to-Apple comparison” with the latest sensor and optics, so it’s unclear how much better it would be. The startup used a Sony IMX766, a Type 1/1.56-inch sensor largely relegated to mid-range phones or secondary cameras, to capture the test videos it shared publicly. For comparison, the iPhone 15 Pro Max’s main camera has a Type 1/1.28-inch sensor.

Despite not testing it with newer gear and components, he’s confident that the strong AI accelerator in the Snapdragon 8 Gen 3, coupled with the higher-end sensors and optics in upcoming flagship phones, should produce truly impressive results. Better AI de-noising can both reduce noise and shorten exposure time while recording video, even at 4K at 60 frames per second, he says. Visionary’s video de-noising doesn’t distinguish between different light sources, be they natural or artificial, though there may be some improved color accuracy. Results largely depend on the optics, image sensor, and lighting conditions, but are also contingent on the camera hardware and software computation involved. The company’s focus is on establishing how algorithms and AI can overcome limitations from all of the above. Could it theoretically enable something akin to astro videography from a phone?

“We’ve gone deep into the desert, far away from light pollution, and tested our software with nothing but moonlight,” says Taieb. “We’ve captured some incredible results in as low as 0.1 lux, which is almost as dark as it gets. Our night vision for mobile enables a smartphone to capture results that surpass human vision.”

There is a catch, though. It’s unclear just how much light and de-noising would bring out the stars in the darkest environments. Still photos are easier because the phone takes a long exposure with an assist from the hardware and software, which is then processed upon capture. It’s a much trickier proposition for video in real time, and without seeing Visionary’s results in the desert, it’s hard to tell how far it can go.

Taieb is also quick to point out that the technology is not adding brightness, since de-noising forms the basis for its image signal processor. Reducing noise to that degree should help color, sharpening, and tone mapping come out looking better, too.

“The denoiser works by integrating information from the spatial and temporal domains into each frame,” he says. “In the spatial domain, you think about one pixel in a frame, where you’ll often find the pixel next to it is similar or the exact same. In the temporal domain, we use information from multiple previous frames and pixel similarity to make inferences.”
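Taieb's description can be illustrated with a toy spatio-temporal denoiser. To be clear, this is a hand-written sketch of the general technique he describes, not Visionary's AI model: each output pixel blends its 3x3 spatial neighborhood with the same pixel in previous frames, with temporal contributions weighted by pixel similarity so that moving content is not smeared while static areas accumulate detail.

```python
import math
import random
import statistics

def denoise(frames, sigma=10.0):
    """Toy spatio-temporal denoiser (illustrative only).

    Spatial domain: each pixel is averaged with its 3x3 neighbourhood,
    exploiting the fact that adjacent pixels are usually similar.
    Temporal domain: earlier frames contribute to the current frame,
    weighted by how similar each previous pixel is to the current one.
    """
    h, w = len(frames[0]), len(frames[0][0])
    current = frames[-1]
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Spatial: average the valid 3x3 neighbourhood.
            total, count = 0.0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += current[ny][nx]
                        count += 1
            spatial = total / count
            # Temporal: blend in previous frames by pixel similarity,
            # so large differences (likely motion) get near-zero weight.
            acc, wsum = spatial, 1.0
            for prev in frames[:-1]:
                diff = prev[y][x] - current[y][x]
                weight = math.exp(-(diff * diff) / (2 * sigma * sigma))
                acc += weight * prev[y][x]
                wsum += weight
            out[y][x] = acc / wsum
    return out

# Demo: five noisy frames of a flat grey scene (true value 100).
random.seed(1)
frames = [[[100 + random.gauss(0, 8) for _ in range(16)] for _ in range(16)]
          for _ in range(5)]
cleaned = denoise(frames)
```

A production denoiser replaces these fixed averaging rules with learned weights, which is where the "advanced AI" the company describes comes in, but the two information sources it draws on are the same.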

Doesn’t doing all this in real time take a toll on power efficiency? Not necessarily, says Taieb, as it’s a “low-power solution” with “sub-one-watt” delivery at 4K/30p, so “within the ballpark of other functional mobile applications.” Some phones generally run hotter than others when recording video, especially for longer stretches, and it’s hard to tell how efficient the hardware and software will be on phones running Qualcomm’s latest chipset. While the tech is apparently not a battery-drainer unto itself, it’s also hard to tell how it might affect flagship devices with already demanding camera apps.
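For context on what that power budget means, here is a back-of-envelope calculation of the pixel throughput implied by the "sub-one-watt at 4K/30p" figure. The 1 W ceiling is taken from Taieb's quote; everything else is standard 4K UHD arithmetic.

```python
# Back-of-envelope check on the "sub-one-watt at 4K/30p" claim (illustrative).
width, height, fps = 3840, 2160, 30      # 4K UHD at 30 frames per second
power_watts = 1.0                        # upper bound quoted by Taieb

pixels_per_second = width * height * fps
pixels_per_joule = pixels_per_second / power_watts

print(f"{pixels_per_second / 1e6:.0f} Mpixels/s denoised")   # ~249 Mpixels/s
print(f"{pixels_per_joule / 1e6:.0f} Mpixels per joule at the 1 W ceiling")
```

That is roughly 249 million pixels denoised every second within a single watt, which gives a sense of why a dedicated AI accelerator like the one in the Snapdragon 8 Gen 3 matters more here than raw CPU speed.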

Content creators using their phones may find the ability to see more at night a boon to how they capture footage in low-light conditions. So might private investigators who would like clearer visuals from a distance. Taieb says it’s up to phone manufacturers to decide whether the technology will apply only to the primary camera or to the entire array, which could include telephoto lenses and hybrid zooms. Visionary’s tech isn’t only applicable to mobile devices; it could also work in security, medical, automotive, drone, and smart city applications. The company could also attempt to adapt its technology to improve image quality for still photos, but has found that a “near impossible task” thus far.

“Over the past couple of years, people have been reaching out to us regarding a whole range of low-light use cases we had never thought of, from smart ovens to farming, as well as a few different medical imaging applications,” he says. “I think that’s one of the beautiful things about AI: we create something with two or three use cases in mind, and we don’t always know at the start where other people might take it.”
