Google revolutionizes search by applying more artificial intelligence
At the international Search On event, Google showed how advances in artificial intelligence are helping to transform its information products, creating search experiences more in tune with the way the human mind works and as multidimensional as people themselves. At the event, the company presented three main novelties for finding exactly what we are looking for, combining images, sounds, text and voice, as human beings do naturally.
A more natural visual search, with multisearch, a new way of searching using images and text simultaneously.
The translation of the environment. Using advances in artificial intelligence, Google has gone from translating text to translating images. This capability is already used more than a billion times a month to translate text in images into more than a hundred languages.
Using immersive view to explore the world. Through advances in computer vision and predictive models, Google has reinvented what we understand by a “map”. Classic two-dimensional maps will evolve into a multidimensional view of the real world that will let you experience a place as if you were there.
“We have been working on our mission for more than two decades: to organize the world's information and make it accessible and useful to everyone. At first it was text search, but over time we have been creating more natural and intuitive ways to find information. For example, now you can search what you see with the camera or ask questions out loud,” the company explained during the event.
From Google's perspective, a world is glimpsed in which you can find exactly what you are looking for by combining images, sounds, text and voice, as human beings do naturally.
Google uses the camera as a tool, the keyboard of the future, with which to access information and better understand the environment. In 2017 Lens was born, which lets us search what we see using the camera or an image. Today, Lens is used to answer eight billion questions every month.
Visual search becomes more natural with multisearch, a new way to search using images and text simultaneously. A few months ago, the beta version of multisearch was rolled out in the United States and, at Search On, it was announced that it will be available in more than seventy languages in the coming months. Multisearch near me goes one step further: it lets you take a photo of something unknown, such as a plate of food or a plant, and find it somewhere nearby, such as a restaurant or a garden center. This tool will launch this fall, in English, in the United States.
Translating the world around us
One of the greatest potentials of visual perception is its ability to break down language barriers. Through artificial intelligence, Google has gone from translating text to translating images, and this capability is already used more than a billion times a month to translate text in images into more than a hundred languages. But often it is the combination of the words and their context (the images in which the text appears) that conveys the meaning. Today, translated text is already being combined with those contextual images, thanks to a machine learning technique called Generative Adversarial Networks (GANs). If, for example, you point the camera at a magazine in another language, you will see the translated text superimposed on the accompanying images on screen.
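The article names Generative Adversarial Networks as the underlying technique but does not describe Google's model. Purely to illustrate the adversarial idea (a generator trying to fool a discriminator, each trained against the other), here is a minimal toy sketch in NumPy on a 1-D problem; every number, shape and learning rate is invented for illustration and has nothing to do with image translation at Google's scale.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy "real data": samples from a 1-D Gaussian centred at 4.
def real_batch(n):
    return rng.normal(4.0, 0.5, size=(n, 1))

# Generator: a single affine map g(z) = z * g_w + g_b over noise z.
g_w, g_b = 1.0, 0.0
# Discriminator: logistic regression d(x) = sigmoid(x * d_v + d_c).
d_v, d_c = 0.0, 0.0

lr, n = 0.01, 64
for step in range(3000):
    z = rng.normal(size=(n, 1))
    fake = z * g_w + g_b
    real = real_batch(n)

    # Discriminator step: push d(real) -> 1 and d(fake) -> 0
    # (gradients of the binary cross-entropy loss).
    p_real = sigmoid(real * d_v + d_c)
    p_fake = sigmoid(fake * d_v + d_c)
    grad_v = np.mean((p_real - 1) * real) + np.mean(p_fake * fake)
    grad_c = np.mean(p_real - 1) + np.mean(p_fake)
    d_v -= lr * grad_v
    d_c -= lr * grad_c

    # Generator step: push d(fake) -> 1, i.e. fool the discriminator,
    # back-propagating through the (frozen) discriminator.
    z = rng.normal(size=(n, 1))
    fake = z * g_w + g_b
    p_fake = sigmoid(fake * d_v + d_c)
    upstream = (p_fake - 1) * d_v
    g_w -= lr * np.mean(upstream * z)
    g_b -= lr * np.mean(upstream)

print(f"generator offset g_b = {g_b:.2f} (real data is centred at 4.0)")
```

After training, the generator's output drifts toward the real distribution: the same adversarial pressure, at vastly larger scale, is what lets a model render translated text so that it blends into the surrounding image.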
Thanks to advances in computer vision and predictive models, Google is reinventing maps. Classic two-dimensional maps will evolve into a multidimensional view that will let you experience a place in a personalized way.
Just as real-time traffic in navigation mode changed Google Maps and made it more useful, another significant advance has been achieved with the immersive view of Google Maps, which adds information such as weather conditions or how crowded a place is. With this experience it is possible to get an idea of what a place is like before even setting foot in it, and so decide where to go and when.
By merging an advanced representation of the world with predictive models, it gives an idea of what a place will be like tomorrow, next week, or even a month from now. Today, the first version of this feature has been expanded with aerial images of two hundred and fifty emblematic sites. In the coming months, the immersive view will arrive in five major cities.
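The source says only that predictive models forecast conditions such as how crowded a place will be at a future time, without describing how. As a loose illustration of the forecasting idea alone (nothing here reflects Google's actual models; the data and the hour-of-week profile are invented), one of the simplest possible predictors averages historical observations per hour-of-week slot:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented historical data: visits per hour over 8 weeks (168 hours/week),
# following a smooth weekly pattern plus noise.
hours = np.arange(8 * 168)
weekly_pattern = 50 + 40 * np.sin(2 * np.pi * (hours % 168) / 168)
history = weekly_pattern + rng.normal(0, 5, size=hours.size)

# A minimal "predictive model": the mean observed busyness for each of the
# 168 hour-of-week slots, averaged across the 8 recorded weeks.
profile = history.reshape(8, 168).mean(axis=0)

def predict_busyness(hour_of_week: int) -> float:
    """Forecast busyness for any future hour from the historical profile."""
    return float(profile[hour_of_week % 168])
```

A production system would use far richer features (weather, events, trends), but the core idea is the same: learned regularities in past observations are projected forward so the map can say what a place will probably be like tomorrow or next week.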