AI Case Study
Google's Gmail automated composition feature has led to over 10% of English-language replies being machine-written and human-approved, speeding up email composition
Google is trialling a new feature for Gmail which predicts and suggests phrases as users compose emails, speeding up the process. According to Google, this now drives a significant portion of Gmail content - although the extent of editing before responses are "human-approved" remains unquantified.
Internet Services Consumer
The Google blog describes the following: "Smart Compose, a new feature in Gmail that uses machine learning to interactively offer sentence completion suggestions as you type, allowing you to draft emails faster. Smart Compose helps save you time by cutting back on repetitive writing, while reducing the chance of spelling and grammatical errors. It can even suggest relevant contextual phrases. For example, if it's Friday it may suggest "Have a great weekend!" as a closing phrase."
From Google's AI blog: "Smart Compose is trained on billions of phrases and sentences, similar to the way spam machine learning models are trained. We have done extensive testing to make sure that only common phrases used by multiple users are memorized by our model. In developing Smart Compose, we needed to address sources of potential bias in the training process, and had to adhere to the same rigorous user privacy standards as Smart Reply, making sure that our models never expose user's private information. Furthermore, researchers had no access to emails, which meant they had to develop and train a machine learning system to work on a dataset that they themselves cannot read."
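The safeguard of memorizing only phrases shared by multiple users can be illustrated with a toy filter. This is a hypothetical sketch, not Google's implementation: the data, field names, and the two-user threshold are all illustrative.

```python
from collections import defaultdict

# Hypothetical (author, phrase) observations; data is illustrative only.
observations = [
    ("alice", "have a great weekend"),
    ("bob",   "have a great weekend"),
    ("carol", "have a great weekend"),
    ("alice", "meet me at the usual place"),  # phrase unique to one user
]

# Track the set of distinct users who wrote each phrase.
phrase_users = defaultdict(set)
for user, phrase in observations:
    phrase_users[phrase].add(user)

MIN_USERS = 2  # illustrative threshold for "used by multiple users"
common = {p for p, users in phrase_users.items() if len(users) >= MIN_USERS}
print(common)  # only the phrase shared across users survives the filter
```

Phrases seen from a single user are dropped, so a model trained only on `common` cannot reproduce one individual's private wording.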
According to Prabhakar Raghavan of Google, quoted by Bloomberg: "Today, over 10% of English Gmail replies are machine-written but human-accepted."
According to Google's AI blog, its achievements in technical development include the following: "Even after training our faster hybrid model, our initial version of Smart Compose running on a standard CPU had an average serving latency of hundreds of milliseconds, which is still unacceptable for a feature that is trying to save users' time. Fortunately, TPUs can also be used at inference time to greatly speed up the user experience. By offloading the bulk of the computation onto TPUs, we improved the average latency to tens of milliseconds while also greatly increasing the number of requests that can be served by a single machine."
Google AI blog: "In order to incorporate more context about what the user wants to say, our model is also conditioned on the email subject and the previous email body (if the user is replying to an incoming email). To improve on this, we combined a BoW model with an RNN-LM, which is faster than the seq2seq models with only a slight sacrifice to model prediction quality. In this hybrid approach, we encode the subject and previous email by averaging the word embeddings in each field. We then join those averaged embeddings, and feed them to the target sequence RNN-LM at every decoding step, as the model diagram below shows. Once we decided on this modeling approach we still had to tune various model hyperparameters and train the models over billions of examples, all of which can be very time-intensive. To speed things up, we used a full TPUv2 Pod to perform experiments. In doing so, we’re able to train a model to convergence in less than a day."
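The hybrid approach described above can be sketched in NumPy: each context field (subject and previous email) is encoded by averaging its word embeddings, the averaged vectors are joined, and the result is fed to the RNN-LM at every decoding step. All dimensions, the tiny vocabulary, and the random weights are illustrative assumptions, not Google's model.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = {"meeting": 0, "friday": 1, "thanks": 2, "see": 3, "you": 4, "<s>": 5}
EMB_DIM, HID_DIM = 8, 16

# Toy embedding table and RNN weights (random here; a real model learns them).
emb = rng.normal(size=(len(VOCAB), EMB_DIM))
W_x = rng.normal(size=(HID_DIM, EMB_DIM))       # current token input
W_c = rng.normal(size=(HID_DIM, 2 * EMB_DIM))   # joined BoW context
W_h = rng.normal(size=(HID_DIM, HID_DIM))       # recurrent connection
W_o = rng.normal(size=(len(VOCAB), HID_DIM))    # output projection

def bow_encode(tokens):
    """Encode a field by averaging its word embeddings (the BoW part)."""
    vecs = [emb[VOCAB[t]] for t in tokens if t in VOCAB]
    return np.mean(vecs, axis=0) if vecs else np.zeros(EMB_DIM)

def next_word_probs(subject, prev_email, prefix):
    # Join the averaged subject and previous-email embeddings...
    context = np.concatenate([bow_encode(subject), bow_encode(prev_email)])
    h = np.zeros(HID_DIM)
    # ...and feed them to the RNN-LM at every decoding step.
    for tok in ["<s>"] + prefix:
        h = np.tanh(W_x @ emb[VOCAB[tok]] + W_c @ context + W_h @ h)
    logits = W_o @ h
    p = np.exp(logits - logits.max())
    return p / p.sum()  # softmax distribution over the vocabulary

probs = next_word_probs(["meeting"], ["see", "you", "friday"], ["thanks"])
print(probs.shape)  # (6,) - one probability per vocabulary word
```

Because the context is encoded once and merely re-injected at each step, this is cheaper than a full seq2seq encoder-decoder, matching the speed/quality trade-off the blog describes.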
R&D
Gmail is Google's email product. According to the Google AI blog: "Typical language generation models, such as ngram, neural bag-of-words (BoW) and RNN language (RNN-LM) models, learn to predict the next word conditioned on the prefix word sequence. In an email, however, the words a user has typed in the current email composing session is only one “signal” a model can use to predict the next word."
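The next-word prediction that these typical models perform can be shown with a minimal bigram (ngram with n=2) counter. The corpus below is an illustrative toy, not Gmail data.

```python
from collections import Counter, defaultdict

# Tiny toy corpus; a production model trains on billions of sentences.
corpus = [
    "have a great weekend",
    "have a great day",
    "have a nice day",
]

# Count bigrams: P(next | prev) is proportional to count(prev, next).
bigrams = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1

def predict_next(prev_word):
    """Most likely next word conditioned on the last typed word."""
    counts = bigrams.get(prev_word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("a"))  # -> "great" (2 of the 3 continuations)
```

As the blog notes, such a prefix-only signal ignores the subject and the email being replied to, which is exactly the gap the hybrid model above is designed to close.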
Billions of phrases and sentences from emails