Samsung Teases High-Res, Better Low Light Cameras in Galaxy S23


In two videos published to its Weibo account in China, Samsung has teased both high resolution and improved low-light sensitivity in its upcoming Galaxy S23 smartphone.

Samsung is expected to announce its new Galaxy S23 smartphone during an in-person event on February 1, and the two videos, spotted by The Verge, seem to indicate the company expects its camera system to be a focal point of that announcement.

Usually the promise of high resolution doesn't come in tandem with boasts about low-light performance, but in this case the two teasers make sense given the sensor Samsung is likely using.


Last summer, Samsung unveiled the ISOCELL HP3, a 200-megapixel smartphone sensor with the industry's smallest pixels at 0.56μm, which it claimed were 12% smaller than those in its previous sensors, allowing a 20% reduction in camera module area. The sensor can also bin pixels together: combining four pixels turns the 0.56μm, 200-megapixel array into a 1.12μm, 50-megapixel one, and combining 16 pixels yields a 12.5-megapixel sensor with 2.24μm pixels. Trading resolution for larger effective pixels in this way allows the sensor to capture better low-light images.
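The binning arithmetic above is straightforward to verify. The sketch below is a simplified model (averaging blocks of pixels, not Samsung's actual readout circuitry) that reproduces the resolution and pixel-size figures from the HP3 spec:

```python
import numpy as np

def bin_pixels(sensor, factor):
    """Average factor x factor blocks of pixel values into one output pixel
    (a simple software model of on-sensor binning)."""
    h, w = sensor.shape
    return sensor.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Figures from the HP3 announcement: 200 MP at 0.56 micron pixels.
full_res_mp, pixel_um = 200, 0.56
for factor in (2, 4):
    # 2x2 binning combines 4 pixels; 4x4 binning combines 16.
    print(f"{factor}x{factor} binning -> {full_res_mp / factor**2} MP "
          f"at {pixel_um * factor:.2f} um pixels")
```

Running it prints 50.0 MP at 1.12 μm for 2×2 binning and 12.5 MP at 2.24 μm for 4×4, matching the figures Samsung quotes.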

Those specifications are certainly in line with what Samsung's teasers are promising.

In September, the Galaxy S23's design supposedly leaked, showing that the smartphone would ditch the camera bump of its two predecessors in favor of a simple vertical array of three individual lenses. Earlier this week, that design was again shown via a set of purported S23 dummy phones on Twitter.

The Galaxy S23 is very likely going to be powered by Qualcomm's latest Snapdragon 8 Gen 2 chip, which the company announced last November. In that announcement, Qualcomm said it specifically designed the chip to support Samsung's HP3 sensor, giving the phone the power to capture both high-resolution photos and videos.

Additionally, the Snapdragon 8 Gen 2 can automatically enhance photos and videos in real time through what Qualcomm calls semantic segmentation: an AI neural network makes the camera contextually aware of faces, facial features, hair, clothes, skies, and other elements, and optimizes each of them individually so every detail receives customized, professional-grade image tuning.
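The idea behind segmentation-driven tuning can be sketched in a few lines. This is a hypothetical illustration, not Qualcomm's pipeline: the class labels, gain values, and `tune` function are all made up for the example, and real ISPs apply far richer per-region processing than a simple brightness gain:

```python
import numpy as np

# Hypothetical segmentation classes and per-class exposure gains
# (illustrative values, not from any real camera pipeline).
SKY, FACE, HAIR = 0, 1, 2
GAIN = {SKY: 1.1, FACE: 1.0, HAIR: 0.9}

def tune(image, mask):
    """Apply a class-specific gain to each pixel, selected by a
    per-pixel segmentation mask of the same shape as the image."""
    out = image.astype(float)
    for cls, gain in GAIN.items():
        out[mask == cls] *= gain          # adjust only this region's pixels
    return np.clip(out, 0, 255)           # keep values in 8-bit range
```

The point is the mechanism: a segmentation network assigns every pixel a class, and the tuning step then treats each region differently instead of applying one global correction to the whole frame.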

While all the pieces appear to be falling into place, nothing will be confirmed until Samsung's February 1 event.