Google’s Bard AI Chatbot Shares False Photo ‘Fact’ in its Debut

Google Bard

Earlier this week, Google introduced Bard, its experimental conversational artificial intelligence (AI) service that is basically the company’s answer to ChatGPT. Unfortunately, it immediately got a photo fact wrong in its debut example.

As PetaPixel has shown, conversational AI is capable of producing dramatically better answers to common photography questions than Google’s current Search system can, to the degree that it looks poised to easily replace it. It should come as no surprise, then, that Google was working on its own version of the software so that it wouldn’t be left in the dust by the current belle of the ball, ChatGPT.

Unfortunately, the company isn’t off to a strong start. In an advertisement for Bard on Twitter, the company shows the AI answering what should be a pretty simple question: “What new discoveries from the James Webb Space Telescope can I tell my 9 year old about?”

Bard provides three answers, but it’s the third one that caught the eyes of astrophotographers and astronomers: “JWST took the very first pictures of a planet outside our own solar system.”

Google Bard wrong answer

That answer is incorrect, as Reuters noted. The first image of an exoplanet was captured by the Very Large Telescope (VLT) in 2004, almost two decades before the James Webb Space Telescope was even launched into space.

Users on Twitter were also quick to point out the error.

Why Bard failed to answer this question correctly becomes apparent when the question “who took the first photo of an exoplanet?” is entered into Google Search, which returns the same incorrect answer that Bard provided. It appears that Google sees “first” and “exoplanet” and defaults to the most recent result for that combination of terms: the James Webb Space Telescope did take a photo of an exoplanet back in September of 2022, which was its first such photo.

Bard is apparently not smart enough to realize that just because those two words appear together doesn’t mean the result correctly answers the broader question. It appears that Bard still relies heavily on whatever intelligence drives the responses to Google Search.

Bizarrely, even after the publicity surrounding Bard’s incorrect answer in the debut example of its capabilities, Google has left the example up for nearly two days.

Image credits: Header image contains assets licensed via Depositphotos.