Google has added a new feature that lets you figure out which song is stuck in your head by humming, whistling or singing it, a much more useful take on the song-matching audio features that it and competitors like Apple's Shazam have offered previously.

As of today, users can open the latest version of the mobile Google app or the Google Search widget, tap the microphone icon, and either verbally ask to search a song or hit the "Search a song" button and start humming. The feature is available to anyone using Google in English on iOS, and in more than 20 languages on Android, and the company says it will bring it to more languages on both platforms in the future.

This is an evolution of Google's existing music recognition tech, which powers the passive "Now Playing" feature on its Pixel smartphones. That feature listens in the background for music and provides a match when it finds one in its offline database (all done locally). The new algorithm boils a song down to a numerical pattern that represents its essence, or what Google calls its "fingerprint." Google explains in a blog post announcing the feature that it can do this because it ignores details like the quality of your voice, any accompanying instruments, and tone.

Google says its matching tech won't require you to be a Broadway star or even a choir member; it is built to accommodate various degrees of musical ability, and will provide a confidence score as a percentage alongside a number of possible matches. Unsurprisingly, it's powered behind the scenes by machine learning algorithms developed by the company. Clicking on any match returns more info about the artist and track, as well as music videos and links to listen to the full song in the music app of your choice.
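To make the "fingerprint" idea concrete, here is a deliberately tiny sketch of melody matching. This is not Google's algorithm; the interval-based contour fingerprint, the toy database, and the percentage score below are all illustrative assumptions, but they show the same principle the article describes: discard absolute pitch (so an off-key hum still matches) and compare the reduced pattern against references, returning a confidence score.

```python
# Toy melody-fingerprint sketch (illustrative only, not Google's method).

def fingerprint(pitches):
    """Reduce a melody (MIDI-like note numbers) to its contour: the
    intervals between consecutive notes. Transposing the whole melody
    up or down leaves this fingerprint unchanged."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

def match_score(query, reference):
    """Fraction of intervals that agree, as a confidence percentage."""
    fq, fr = fingerprint(query), fingerprint(reference)
    n = min(len(fq), len(fr))
    if n == 0:
        return 0.0
    agree = sum(1 for a, b in zip(fq, fr) if a == b)
    return 100.0 * agree / n

# A tiny hypothetical "offline database" of reference melodies.
database = {
    "Twinkle Twinkle": [60, 60, 67, 67, 69, 69, 67],
    "Ode to Joy":      [64, 64, 65, 67, 67, 65, 64],
}

# A hummed query transposed up a whole tone: the contour still matches.
hummed = [62, 62, 69, 69, 71, 71, 69]

scores = {name: match_score(hummed, ref) for name, ref in database.items()}
best = max(scores, key=scores.get)
print(best, scores[best])  # "Twinkle Twinkle" scores 100.0
```

A real system works on noisy audio rather than clean note numbers, so it would first estimate a pitch track from the recording and tolerate small interval errors, but the core trick of matching a key-invariant reduced pattern is the same.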