I would guess he means that they use AI algorithms to perform some of the tasks that are performed manually at most companies. If so, most of Google's customers wouldn't really consider this to be a win.
ML-driven account suspensions and the like are widely hated and seem to have damaged Google's image, at least in the tech community.
Part of my job is developing and testing a Google Calendar sync tool. I've created a secondary throwaway email address so I don't fill my own calendar with test data.
During testing I often have to link/unlink the calendar with different users in my application. Some AI bot has evidently flagged this as 'suspicious', because I now get 'An error has occurred, please try again' when linking my application with Google Calendar. No further detail about what the error is or what I might do to resolve it.
To whom may I write to explain this situation and have my account unlocked? There is no one. I really despise this behaviour. Why should I put the time in to test and develop an application that benefits users of Google's calendar when they won't even lift a finger for me?
> To whom may I write to explain this situation and have my account unlocked? There is no one. I really despise this behaviour
They will probably never hire humans for support tickets. But they could "hire" their star model PaLM to answer support questions. It could be a good demo and a product useful to many companies. Even if it were only 70% or 80% effective, that would still be better than nothing.
Modern users expect everything to integrate with Google Calendar, unfortunately. You can't avoid them when it comes to developing scheduling applications.
You are right, but those sentiments seem to be limited to the tech community (and even there, opinion seems split). I will admit that not once have I heard it mentioned as an issue in my non-tech-oriented circle.
I'm not sure if you'd call it a pivot, but Google Brain was initially an X project, started well before the deep learning craze took off; they hired Geoff Hinton in 2013. Investment in AI has been going on for a long time at Google, probably before Facebook/FAIR got seriously involved, but probably not before Microsoft (MSR has been going for ~30 years, Chris Bishop joined the Cambridge lab when it opened in '97, and they've long been known as a good ML group).
It's not like Google wasn't using ML before that, either: search, recommender systems for ads, anti-spam, Translate. Deep learning just turned out to be a lot better than the older tools.
A friend of mine in "serious" AI research keeps telling me not to get too excited about Google's AI output: they do release a lot of papers, but these are essentially worthless and non-reproducible because the raw data is withheld. He suspects half of them or more are completely made up, as they do not align well with what else is happening in the field.
I can't verify that, but whenever the "AI" behind my Google Home gets a bit more stupid (which happens about once a week nowadays, with it 'forgetting' even the few things that worked before), I am inclined to agree.
Agreed that they are non-reproducible, but they are probably not worthless, in that I'm sure they have very extensive review processes, and they've said they actually put models like T5 into production in Search. There's no bigger skin in the game than that.
> they do not align well with what else happens in the field
I'm interested in this comment, though. Haven't transformers changed the field? What is this referring to? Mind getting your friend to elaborate?
> they actually put models like T5 into production on search
Google search does not show signs of using a decent language model. If they did, then questions like this should work:
> "What is the world record for crossing the English Channel entirely on foot?"
This is the quote it gives on top of all results:
> And it was a Towson University graduate to do it the fastest. Two weeks ago, Nik Haynes '00 became a world-record holder by backstroking the English Channel in an astounding 12 hours, 52 minutes. Haynes bested the mark set by Tina Neill in 2005 of 13 hours, 22 minutes.Aug 27, 2020
And the rest of the page is about anything but crossing on foot.
Ok, so it missed the core semantics of the question completely. In reality there have been crossings on foot through the Channel Tunnel. And at some point in the distant past the sea level was low enough that people could cross on foot.
This kind of problem happens in many searches: if you search for topic X, which is not very popular, and there is a very popular topic Y close to X, Google will answer for Y instead of X. Topic blindness.
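A toy sketch of that failure mode (pure speculation on my part, nothing to do with Google's actual ranker): if a ranking score mixes topical relevance with a popularity term, a very popular neighbouring topic Y can outrank the niche topic X you actually asked about. All names and weights here are made up for illustration.

```python
# Toy model of "topic blindness": ranking that blends relevance with
# popularity lets a popular near-miss beat the exact-but-niche topic.

def score(relevance, popularity, pop_weight=0.5):
    """Hypothetical ranking score: relevance plus weighted popularity."""
    return relevance + pop_weight * popularity

# Query is about niche topic X; Y is only somewhat relevant but hugely popular.
results = {
    "X (what you asked for)": score(relevance=1.0, popularity=0.1),
    "Y (popular neighbour)":  score(relevance=0.6, popularity=1.0),
}
winner = max(results, key=results.get)
# X scores 1.0 + 0.05 = 1.05, Y scores 0.6 + 0.50 = 1.10, so Y wins.
```

With `pop_weight=0` the exact topic X would win; the failure appears as soon as popularity weighs in heavily enough.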
Google Ads, their most significant revenue stream, is entirely AI-based, and speaking as an advertiser, it's only made it more crap. Its "AI" algorithms are designed to maximise value extraction from advertisers, rather than value creation for advertisers.
There has been a systematic trend over the last 10 years to remove the controls advertisers have over their placements and spend, and to move towards a black box we are supposed to trust.
Hopefully someone can inform us as to whether they did announce such a pivot (I have no memory of that), but it wouldn't surprise me. ChatGPT feels a lot like search-without-citations to me, and the closest existing thing in production to it is the blurb at the top of Google results, which I suppose will soon be replaced by this kind of thing (if they can ever get it to stop lying).
Since 2015, Google has rolled out RankBrain, BERT, and MUM, machine-learning enhancements to its search algorithm that let it understand the meaning of the terms you search for and provide matches based on that.
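For illustration only, here is the general idea behind "matching on meaning" rather than on literal terms: represent each term as a vector and match by direction, so two terms can match even when the strings share no characters. The three-dimensional vectors below are hand-made toys, nothing like real RankBrain/BERT embeddings.

```python
import math

# Hypothetical toy "embeddings": similar meanings get similar vectors.
EMBEDDINGS = {
    "car":        [0.90, 0.10, 0.0],
    "automobile": [0.85, 0.15, 0.0],
    "banana":     [0.00, 0.10, 0.9],
}

def cosine(a, b):
    """Cosine similarity: 1.0 for identical directions, ~0 for unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def best_match(query, candidates):
    """Return the candidate term whose embedding is closest to the query's."""
    q = EMBEDDINGS[query]
    return max(candidates, key=lambda c: cosine(q, EMBEDDINGS[c]))

# "car" matches "automobile" despite zero string overlap; a pure keyword
# match would miss it entirely.
```

The same mechanism also explains the parent thread's complaint: a vector-space neighbour can be "close enough" to win even when it isn't what you asked for.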
So that explains how Google knows that when I'm searching for X, it can give me results for slightly related Y instead. Sometimes it works, often it feels almost hostile.
I've noticed the same thing with YouTube notifications.
It feels like Google wants to recruit/queue my attention as a resource for their client's ads instead of just notifying me when a new video from someone I follow is available.
That at least makes some sense, but I don't understand who benefits when I search for some detail of CLion that differs from the other IDEs from the same company and get results for Rider and IDEA instead.
“Computing is evolving again. We spoke last year about this important shift in computing from a mobile-first to an AI-first approach. … In an AI-first world, we are rethinking all our products and applying machine learning and AI to solve user problems.” —Sundar Pichai, 2017
That seems like a weird thing to throw in there. Does anyone else read it as a hasty "we were doing AI before ChatGPT was cool"?