HomeTechnology"Google" changes its meaning:...

“Google” changes its meaning: these are the new features Google is adding to its search engine

Google adds new search engine tools | Source: Difusión


Google enjoys the rare privilege of having its name used as a verb. Like “pasteurization” or “platonic love,” terms coined from a proper name, “to google” has become synonymous with searching the Internet. However, “Google” no longer means just search: it has become a way of understanding the world through all kinds of data. At its “Search On 2022” event, Google introduced its new search features.

“For more than two decades, we have dedicated ourselves to our mission: to organize the world’s information and make it accessible and useful. We started with text search, but over time we have continued to create more natural and intuitive ways to find information: now you can search for what you see with the camera, or ask a question out loud with your voice,” Google shared in its presentation.

More intuitive search with images

By incorporating artificial intelligence into every corner of the search engine, Google has been able to improve the experience with more organic and relevant results for the user. Now it is going even further, letting photos and screenshots serve as queries just as if we were typing text into the search box. Google calls it “MultiSearch,” and it will be available in English today and in 70 other languages in the coming months.

The idea is simple. Today we almost always search from our phones, which means we always have a camera at hand. Pointing it at a place or an object and handing Google the photo is faster and easier than transcribing what we see. From there, a powerful visual recognition tool turns images into searches and combines them with words to refine the result.

“We envision a world where you can find exactly what you are looking for by combining images, sounds, text and speech, the way people do naturally. You will be able to ask questions with fewer words, or none at all, and we will still understand exactly what you mean, so you can explore information organized in a way that makes sense to you,” adds Google.
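Google has not detailed how MultiSearch works internally, but the core idea of treating an image plus a text refinement as a single query can be illustrated with a joint image-text embedding model such as CLIP. The sketch below is not Google’s implementation: the tiny caption catalog, the file name dress.jpg and the simple embedding average are illustrative assumptions.

# Minimal sketch of combined image + text retrieval, illustrating the
# idea behind MultiSearch; NOT Google's implementation.
from PIL import Image
from sentence_transformers import SentenceTransformer, util

# CLIP maps images and text into the same embedding space.
model = SentenceTransformer("clip-ViT-B-32")

# Hypothetical product catalog (in practice, an indexed database).
catalog = [
    "red floral summer dress",
    "green floral summer dress",
    "red leather handbag",
    "green wool sweater",
]
catalog_emb = model.encode(catalog, convert_to_tensor=True)

# The query: a photo (say, a screenshot of a dress) plus a text refinement.
image_emb = model.encode(Image.open("dress.jpg"), convert_to_tensor=True)
text_emb = model.encode("green", convert_to_tensor=True)

# Naive combination: average the two vectors into one query embedding.
query_emb = (image_emb + text_emb) / 2

# Rank catalog entries by cosine similarity to the combined query.
scores = util.cos_sim(query_emb, catalog_emb)[0]
best = int(scores.argmax())
print(f"Best match: {catalog[best]} (score {scores[best]:.3f})")

Averaging the two embeddings is the simplest possible fusion; a production system would presumably use a model trained specifically to combine the two modalities.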

Note that “MultiSearch” also extends to businesses and local searches near your location. If a friend shares a photo of a dish in a message, you can paste that image into the search engine and find places serving it near you in real time. This will benefit businesses that digitize their menus and keep opening hours and availability up to date.

With the update, the Google search bar will include shortcuts to the gallery, to taking a photo, and to pasting a screenshot. Keyword suggestions will also be added to typed queries to speed up searches.

Search to buy

Another addition is the ability to buy products from the search engine itself. A purchase is rarely as simple as opening an online store and entering a card number: it is a long journey of research and comparison, of time spent hunting for the perfect product, or at least the one we have in front of our eyes and want.

To do this, Google relies on its “Shopping Graph” to improve the options the search engine shows for a direct purchase. For now only in the United States, you can access this feature by searching for “shop” followed by what you are looking for.

The algorithm can also unify the search for a complete outfit whose pieces are not sold together or by the same store, so the engine returns purchase options for the products that match each element of the image.

This is complemented by categorized trends and deals, as well as 3D catalogs whose items, whether furniture or sneakers, can be projected into our real space. You will also be able to read other users’ shopping experiences directly in the search engine to see whether a product meets your expectations.

Visual translation without erasing anything

One common barrier to information is the language a page was published in. With Translate, Google has been closing that gap with support for more than 100 languages, and this update narrows it further by using generative adversarial networks (GANs) for image processing, making real-time visual translation far more seamless.

“Now you can blend translated text into the background image thanks to a machine learning technology called Generative Adversarial Networks (GANs),” explains Google. “So if you point your camera at, for example, a magazine in another language, you will now see the translated text realistically overlaid onto the pictures underneath.”
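Google has not released this model, but the pipeline the quote describes, detect the text, erase it while reconstructing the background, then re-render the translation on top, can be sketched with off-the-shelf tools. In the sketch below, pytesseract stands in for text detection, classical OpenCV inpainting stands in for the GAN that reconstructs the background, and translate_text() is a hypothetical placeholder for a real translation call.

# Sketch of the visual-translation pipeline Google describes:
# 1) find the text, 2) erase it and reconstruct the background,
# 3) draw the translated text back on top. Google uses a GAN for
# step 2; classical OpenCV inpainting stands in for it here.
import cv2
import numpy as np
import pytesseract

def translate_text(text: str) -> str:
    """Hypothetical helper; in practice, call a translation API."""
    return text  # placeholder

img = cv2.imread("magazine_page.jpg")
mask = np.zeros(img.shape[:2], dtype=np.uint8)

# 1) Detect words and their bounding boxes.
data = pytesseract.image_to_data(img, output_type=pytesseract.Output.DICT)
boxes = []
for i, word in enumerate(data["text"]):
    if word.strip():
        x, y, w, h = (data[k][i] for k in ("left", "top", "width", "height"))
        mask[y:y + h, x:x + w] = 255  # mark these pixels for reconstruction
        boxes.append((x, y, h, word))

# 2) Reconstruct the background under the text (the GAN's job in Google's version).
clean = cv2.inpaint(img, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)

# 3) Render the translated words over the reconstructed background.
for x, y, h, word in boxes:
    cv2.putText(clean, translate_text(word), (x, y + h),
                cv2.FONT_HERSHEY_SIMPLEX, h / 30, (0, 0, 0), 1, cv2.LINE_AA)

cv2.imwrite("translated_page.jpg", clean)

The difference is quality: classical inpainting smears surrounding pixels into the erased region, while the GAN Google describes can plausibly reconstruct complex textures, which is what lets the translated text sit on the image without visible erasure.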

A more realistic world on the phone

In the case of Maps, a tool called “neighborhood vibe” has been released that combines data from millions of users of the platform with Google’s recommendation algorithms to build a kind of travel guide based on users’ habits.

Among the novelties, photorealistic aerial views of the 250 most visited landmarks around the world, such as the Tokyo Tower, the Empire State Building, and the Acropolis of Athens, are being added. These images are available starting today, September 28th.

We will also be able to use this type of content to check data about a particular city, such as its typical traffic or the busiest hours in certain areas. For now, the first cities to receive this treatment are Los Angeles, London, New York, San Francisco, and Tokyo.

NIUSGEEK had the opportunity to ask Miriam Daniel, Vice President of Google Maps for Consumers, about the data Google uses to build these hyper-realistic models:

“To answer your question, Google’s data comes from aerial photographs, satellite images and, you know, the Street View imagery we already have, plus the photos and videos of attractions in each neighborhood that users upload, which we can then stitch together with AI and play back for everyone,” Daniel pointed out.

In addition, a new layer has been announced for Street View that will let us find ATMs, shops, and other points of interest as we walk down the street, using augmented reality navigation through the smartphone’s camera.

Source: RPP
