AI Image Generator Made by Stable Diffusion Inventors on Par With Midjourney and DALL-E

FLUX.1 examples. Left: An elderly man with glasses, a mustache, and a hat looks directly at the camera with a serious expression. Right: Three young women, smiling and standing close together, raise their hands in a stop gesture against an urban street scene.

A new AI image generator called FLUX.1, from the same researchers who developed Stable Diffusion and invented the latent diffusion technique, promises high fidelity and claims it even gets hands right.

FLUX.1 is made by the startup Black Forest Labs, which recently closed $31 million in funding, but perhaps most interesting is the researchers’ connection to Stable Diffusion, which has seemingly floundered since their departure from Stability AI.

Stable Diffusion’s last release was widely ridiculed for its poor ability to generate images of human anatomy. Perhaps that’s why FLUX.1 is apparently excellent at producing hands with the right number of fingers in the correct positions.

Ars Technica has tried out FLUX.1, an open-weight model like Stable Diffusion, and says its output is “generally comparable” to OpenAI’s DALL-E 3 in prompt fidelity and matches Midjourney 6’s photorealism.

FLUX.1 example: An elderly couple walks hand in hand along a sunny beach, happy and relaxed, with gentle waves and a clear blue sky in the background. The man wears light-colored pants and a shirt; the woman is dressed in a white top and shorts.

FLUX.1 example: A bearded man in a black long-sleeve shirt and blue jeans lies asleep on a brown couch, his head resting on two gray pillows. A window behind him looks out on parked cars.

“Our mission is to develop and advance state-of-the-art generative deep learning models for media such as images and videos, and to push the boundaries of creativity, efficiency, and diversity,” Black Forest Labs says.

“We believe that generative AI will be a fundamental building block of all future technologies. By making our models available to a wide audience, we want to bring its benefits to everyone, educate the public, and enhance trust in the safety of these models. We are determined to build the industry standard for generative media.”

News broke in March that the three researchers, Robin Rombach, Andreas Blattmann, and Dominik Lorenz, had left Stability AI. Shortly after, CEO Emad Mostaque resigned, and then came the troubled release of Stable Diffusion 3 Medium.

FLUX.1 example: A model walks down the runway in a dramatic chocolate-and-red dress designed to look like a dessert, its layers of ruffles resembling whipped cream and cherries.

FLUX.1 example: A modern red and white tram travels along a tree-lined street in a bustling urban area, with pedestrians on the sidewalk, cyclists riding nearby, historic buildings and shops lining the street, and route information shown on the tram’s electronic sign.

The three German researchers created Stable Diffusion while at university, and it was only after Stable Diffusion was published that Stability AI became involved.

“Stability, as far as I know, did not even know about this thing when we created it,” Björn Ommer, the professor who supervised the researchers, has said on the record. “They jumped on this wagon only later on.”

What About FLUX.1’s Training Data?

Black Forest Labs has not said what training data was used to make FLUX.1. Ars Technica notes it “likely used a huge unauthorized image scrape of the internet.”

At this point it is safe to assume that, unless a generative AI company says otherwise, its training data was scraped from the open web without permission from copyright holders.

The practice remains controversial and may ultimately be decided in the courts. The researchers’ old company, Stability AI, is facing a huge lawsuit brought by Getty Images.

Image credits: FLUX.1 via Black Forest Labs.