Google CEO Admits Gemini AI Image Failures: ‘We Got It Wrong’

Google CEO Sundar Pichai on stage.

Google took its AI image generation tool Gemini offline last week following a number of errors. Now, Google CEO Sundar Pichai admits to employees, “we got it wrong.”

Google’s AI image generation model, recently rebranded from Bard to Gemini, seemingly failed to produce any images of white people when given various prompts. Prompts shared on social media included requests for people from countries like the United States, Australia, the United Kingdom, and Germany. While Australia and the United States were first inhabited by indigenous people, and all four countries are home to people of various racial and ethnic backgrounds, many pointed out that they presently have largely white populations. Prompts for images of popes and knights, for example, also seemed to return few white people. This was also observed by PetaPixel until Gemini stopped taking such prompts altogether.

Pichai, in a note to staff that was picked up by news outlet Semafor, called the results “problematic” and admitted they offended users and showed bias.

“Our teams have been working around the clock to address these issues,” Pichai said in the note from Tuesday. “We’re already seeing a substantial improvement on a wide range of prompts. No AI is perfect, especially at this emerging stage of the industry’s development, but we know the bar is high for us and we will keep at it for however long it takes. And we’ll review what happened and make sure we fix it at scale.”

Pichai additionally called the incident “completely unacceptable.” Google further issued a press release last week in which it admitted fault in its image generation.

However, it’s vital to note the context of bias in AI, an issue that is not at all new. Many AI models have been accused of bias in favor of white people and of rarely showing people of color, and the problem has persisted even as image generation models have exploded in popularity. As PetaPixel noted in its initial coverage of the incident, Gemini’s images may have been the result of Google attempting to over-correct for that issue.

Further, there may be no prior example of a company, especially one of Google’s visibility, so swiftly and publicly admitting to mistakes and taking a model offline when an AI showed bias in favor of white people.


Image credits: Google
