Google Search Now Supports a Combo of Text and Images


Google has expanded the capabilities of Search. The company acknowledges that it is sometimes hard to find the right words to describe what you are looking for, so Search can now interpret images and text together to make that easier.

The company says it has been looking for ways to make information easier to find even when what is being sought is hard to explain. The result of that research is called multisearch. Using the power of Google Lens, users can go beyond the search box and ask questions about what they see. The company teased the idea last September and has now released the first public version in beta.

The feature is not being integrated into the browser-based version of Search; instead, it lives inside the Google app, which is available on both Android and iOS. Because multisearch is built on Google Lens, it can work from either a photo taken with the smartphone’s camera or an image saved to the camera roll.

Tap the Lens camera icon and select the image to search with. Then swipe up and tap the “+ Add to your search” button to add text. Google says multisearch lets users ask a question about the photo or refine the search by color, brand, or visual attribute.

For example, a user searching for a dress could start with a photo of an orange dress and add the text “green,” and multisearch would understand to look for a similar dress in green. Likewise, a user could take a photo of a dining set and add the term “coffee table,” and multisearch would look for a coffee table that matches the set. Google also says a photo of a plant can be paired with the query “care instructions”: multisearch recognizes the plant variety and searches for how to care for it.
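Conceptually, each multisearch query pairs a visual subject with a textual refinement. The short Python sketch below is purely illustrative: Google has not published a public API for multisearch, so the MultisearchQuery type and build_query helper here are hypothetical, meant only to make the image-plus-text pairing explicit.

```python
from dataclasses import dataclass


@dataclass
class MultisearchQuery:
    """A combined image-plus-text query, as multisearch conceptually treats it.

    Hypothetical type for illustration only; not a Google API.
    """
    image_path: str  # photo from the camera or the camera roll
    refinement: str  # text added via the "+ Add to your search" button


def build_query(image_path: str, refinement: str) -> MultisearchQuery:
    # Pair the visual subject (e.g., an orange dress) with a textual
    # attribute or question (e.g., "green") so the search can look for
    # similar items that match the refinement.
    return MultisearchQuery(image_path=image_path, refinement=refinement)


# Queries mirroring the article's examples:
dress = build_query("orange_dress.jpg", "green")                # similar dress, in green
table = build_query("dining_set.jpg", "coffee table")           # coffee table matching the set
plant = build_query("unknown_plant.jpg", "care instructions")   # identify the plant, find care tips
```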

Google says the advances in Lens and multisearch were made possible by the company’s work in artificial intelligence (AI). Google is using AI to make it easier for users to learn about their surroundings and to interact with them digitally in intuitive ways.

The company says it is not done developing multisearch and is currently working on ways to integrate its Multitask Unified Model (MUM) going forward. MUM can not only combine different kinds of queries, such as photos and text, but also layer multiple queries on top of one another and intelligently relate them to each other.

Multisearch is currently in beta, in English, in the United States, and Google says it delivers its best results for shopping-related searches right now. The company has not said how long it expects the feature to remain in beta, nor when it will become available in other languages or regions.
