What will be the Future of Google’s Visual Phone Search?

Google has not limited itself to PC-only search functions. Although the internet giant made its name with its standard web search engine, and the search engine optimization industry that grew up around it, Google search has since moved on to Android smartphones. The initial application was called Google Goggles.

This application allows phone users to identify and get information about almost anything that can be searched for, as long as it is within sight and a phone can be pointed at it. With the Google Goggles app installed, the phone works as a search engine on demand: the user can look up a film poster or a particular type of car spotted while strolling around town, for example. The idea is that when an image is captured on the phone through the Google Goggles app, the search engine returns results relating to that image, based on the details it is able to recognize.
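
To make that capture-and-search flow concrete, here is a minimal sketch in Python. The endpoint URL and the response fields are hypothetical placeholders for illustration, not Google's actual API.

```python
import requests

# Hypothetical visual-search endpoint; Goggles' real backend is not public.
SEARCH_URL = "https://example.com/visual-search"

def visual_search(image_path: str) -> list[dict]:
    """Upload a snapshot and return the search results for it."""
    with open(image_path, "rb") as f:
        response = requests.post(SEARCH_URL, files={"image": f})
    response.raise_for_status()
    # Assume the service answers with JSON of the form {"results": [...]}.
    return response.json().get("results", [])

# e.g. identify a film poster spotted in town
for result in visual_search("film_poster.jpg"):
    print(result.get("title"), "-", result.get("url"))
```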

While Google Goggles had limited functionality on its first release, subsequent updates have added further features to the app. It can now use a continuous scanning mode to identify objects without the user needing to take a picture, although a snapshot is still required for text-based queries. Users can also use the app to find text on the web related to a piece of writing in front of them, in what is called the advanced text search.
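
Continuous scanning essentially means running recognition over a live camera feed rather than over a single snapshot. The sketch below illustrates the idea using OpenCV; recognize_frame is a hypothetical stand-in for whatever recognition step the service actually performs.

```python
import cv2  # OpenCV, for camera access

def recognize_frame(frame) -> str | None:
    """Hypothetical recognizer: return a label once something is identified."""
    ...  # the real image-recognition step would run here
    return None

camera = cv2.VideoCapture(0)  # default camera
try:
    while True:
        ok, frame = camera.read()
        if not ok:
            break
        # Every frame is analyzed; no explicit snapshot is taken.
        label = recognize_frame(frame)
        if label is not None:
            print("Identified:", label)
finally:
    camera.release()
```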

Technology progresses all the time, though, and Google seems to have no plans to let its visual search application stagnate. It is still intent on making the app as functional and as useful as possible, and it is developing additional features for Google Goggles to achieve this.

One of the latest ideas is a visual search app that divides the image someone takes with a smartphone into many parts. The search engine could then analyze each part in turn to provide a set of results covering every aspect of the scene the user captured. So if the image features a car, a bar and several passers-by, the app would return information on each of these, broken down into sections.
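
A crude way to picture that segmentation step is sketched below: one photo is cut into regions, and each region would then be submitted as its own query. The fixed grid here is a deliberate simplification; a real system would detect object boundaries instead.

```python
from PIL import Image

def segment(image: Image.Image, rows: int = 2, cols: int = 2):
    """Yield sub-images cut from a simple grid over the photo."""
    w, h = image.size
    for r in range(rows):
        for c in range(cols):
            box = (c * w // cols, r * h // rows,
                   (c + 1) * w // cols, (r + 1) * h // rows)
            yield image.crop(box)

photo = Image.open("street_scene.jpg")  # hypothetical snapshot
for i, region in enumerate(segment(photo)):
    # Each region becomes one visual query and one section of the results.
    print(f"Region {i}: {region.size} pixels, searched separately")
```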

The number of search types used in a single visual query could therefore be quite large. Google could use facial-recognition technology to suggest identities for the people photographed, while object recognition would pick out any items in the shot. Products, from books to DVDs, would be identified, and landmarks would also appear among the search results.
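
One way to imagine that fan-out is a pipeline that runs every search type over each region and pools the typed results. All four recognizers below are empty placeholders standing in for the face, object, product and landmark recognition described above.

```python
from dataclasses import dataclass

@dataclass
class Result:
    kind: str   # "face", "object", "product" or "landmark"
    label: str  # e.g. a suggested identity or a product name

# Placeholder recognizers; each would call out to a specialized service.
def recognize_faces(region): return []
def recognize_objects(region): return []
def recognize_products(region): return []
def recognize_landmarks(region): return []

RECOGNIZERS = [recognize_faces, recognize_objects,
               recognize_products, recognize_landmarks]

def search_region(region) -> list[Result]:
    """Run every search type over one region and merge the findings."""
    results: list[Result] = []
    for recognizer in RECOGNIZERS:
        results.extend(recognizer(region))
    return results
```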

As the variety of searches the app performs increases, so too would the forms of results a query could return. A single visual query could tell a user what a product is and where to buy it, what a particular advertisement is selling, where a text document such as a digital version of a magazine might be found, and more. Images could even be broken down further: a search on an advert featuring a celebrity, for example, would surface information not only on the product itself but also on the celebrity featured.
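
Grouping the mixed findings of one query back into labelled sections might look something like this. The sample data is invented purely to illustrate the advert example above.

```python
from collections import defaultdict

# Invented results for a photo of an advert featuring a celebrity.
results = [
    ("product", "SodaCo cola - stockists and prices"),
    ("face", "Famous Celebrity - biography and filmography"),
    ("text", "Slogan matched to the original magazine advert"),
]

sections: dict[str, list[str]] = defaultdict(list)
for kind, info in results:
    sections[kind].append(info)

for kind, items in sorted(sections.items()):
    print(kind.upper())
    for item in items:
        print("  -", item)
```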

If all of these features are implemented, the end result could be a visual search application that makes search engine optimization easier and puts all kinds of information at users' fingertips.
