Millions of people have signed up for early access, and OpenAI, the company that makes DALL-E, will offer its latest version to one million people from that waitlist in the coming weeks.
The users contacted will receive 50 free credits to use within the first month and 15 more every month thereafter. Each credit represents four pictures generated from one original prompt, or three if the user submits an edit or variation prompt. If the free credits aren’t enough to satisfy a user’s AI demands, a bundle of 115 credits is available to purchase for $15. OpenAI says that artists in need of financial assistance will be able to apply for subsidized access.
The beta version also allows people to use the images they generate for commercial purposes. For example, printing the images on shirts or selling merchandise featuring them will be permitted. However, OpenAI will reject image uploads that include realistic faces or explicit content. The company is concerned that bad actors could use its technology to create misinformation and deepfakes, or put it to other harmful uses.
DALL-E 2, the successor to DALL-E, was announced in April and has already garnered 100,000 users. OpenAI says the broader access was made possible by new approaches to mitigating bias and toxicity in DALL-E 2’s generations, as well as evolutions in its policies governing images created by the system.
DALL-E 2 was trained on a dataset filtered to remove images containing obviously violent, sexual, or hateful content. However, such filtering isn’t foolproof. Google recently said it wouldn’t release an image-generating model it developed, Imagen, due to risks of misuse. Meanwhile, Meta has limited access to Make-A-Scene, its art-focused image-generating system, to “prominent AI artists.”
OpenAI emphasizes that the hosted version of DALL-E 2 incorporates additional safeguards, including “automated and human monitoring systems,” to prevent the model from memorizing faces that often appear on the internet. Still, the company admits there’s more work to do.
“Expanding access is an important part of our deploying AI systems responsibly because it allows us to learn more about real-world use and continue to iterate on our safety systems,” OpenAI writes in its blog post. “We are continuing to research how AI systems, like DALL-E, might reflect biases in its training data and different ways we can address them.”