At the keynote of its I/O developer conference on Tuesday, Google revealed a suite of ways the company is moving forward with artificial intelligence. These advancements show Google increasingly trying to build AI-powered tools that seem more sentient and that are better at perceiving how humans actually communicate and think. They seem powerful, too.
Two of the biggest AI announcements from Google involve natural language processing and search. One is called LaMDA, short for Language Model for Dialogue Applications, which makes it easier for artificial intelligence systems to carry on natural, open-ended conversations. The other is the Multitask Unified Model (MUM), an AI model that boosts understanding of complex human questions and improves search. Google also revealed a number of AI-powered improvements to its Maps platform that are designed to yield more helpful results and directions.
Overall, these steps indicate that Google aims to take on more and more of the work humans normally do when interacting with technology, mainly by making AI smarter. Instead of needing multiple queries to answer a series of questions, you could get the same information with one more sophisticated query. Or, rather than having to think through which routes might be most dangerous, you could let Google make those calculations and automatically suggest safer routes. These advances show that Google not only aims for its AI tech to become more powerful but also that it will take on more responsibility in our day-to-day interactions with our phones and computers.
While LaMDA and MUM are still in development, Google offered brief demos at the event. The idea behind LaMDA is to make communicating with an artificial intelligence more natural, since chatbots often get confused when a conversation moves from one topic to another. To demonstrate, Google aired a somewhat odd exchange in which the model impersonated the dwarf planet Pluto, and another in which it pretended to be a paper airplane:
[Embedded video: Google's LaMDA demo]
“Sometimes, it can give nonsensical responses, imagining Pluto doing flips or playing fetch with its favorite ball, the moon,” Google CEO Sundar Pichai said. “Other times, it just doesn’t keep the conversation going.”
Pichai explained that the company is exploring how LaMDA could be integrated into Google's search engine, voice assistant, and Workspace products. The company is also looking not only at how specific and sensible the AI's answers are, but also at how interesting they are, like whether its "responses are insightful, unexpected, or witty."
MUM is an AI-powered tool that's meant to simplify how people search online. The system is designed to understand implicit comparisons in a search query (the example Google gave at the keynote was preparing to hike two different mountains) and provide the most appropriate answer.
The answer to the hiking question wouldn't necessarily come in the form of a list of links to potentially helpful websites; instead, it would be a response synthesized from different pieces of information gathered across the web. In the future, Google wants to use the power of MUM to reduce the number of searches someone needs to do and deliver a more coherent, simplified response.
"It could then point you to a blog with a list of recommended gear," Google explained in a blog post published on Tuesday.
The system can also find answers to your questions in other languages and bring that information back to make your search results more accurate. At the same time, the company says it's examining the ways bias can be built into MUM and working to reduce the model's carbon footprint.
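Google hasn't published how that multilingual step works under the hood, but the general idea, running a query against sources in several languages and translating the findings back into the user's language, can be pictured with a toy sketch. Everything below (the tiny translation tables, the per-language index, the function names) is a hypothetical stand-in, not Google's implementation:

```python
# Toy sketch of cross-lingual search: translate the query out, search each
# language's sources, translate the hits back. The dictionaries stand in for
# real translation models and search indexes (hypothetical data throughout).

QUERY_TRANSLATIONS = {
    ("en", "ja", "hiking mt. fuji in november"): "11月の富士山ハイキング",
}
ANSWER_TRANSLATIONS = {
    ("ja", "en", "秋は富士山の雨季です"): "Autumn is the rainy season on Mt. Fuji",
}
SEARCH_INDEX = {
    "en": {"hiking mt. fuji in november": ["Bring layers for cold summits"]},
    "ja": {"11月の富士山ハイキング": ["秋は富士山の雨季です"]},
}

def translate(text, src, dst, table):
    # Identity fallback keeps the toy runnable for untranslated pairs.
    return table.get((src, dst, text), text)

def cross_lingual_search(query, user_lang, languages):
    """Search every language's index, then bring hits back to the user's language."""
    results = []
    for lang in languages:
        local_query = translate(query, user_lang, lang, QUERY_TRANSLATIONS)
        for hit in SEARCH_INDEX.get(lang, {}).get(local_query, []):
            results.append(translate(hit, lang, user_lang, ANSWER_TRANSLATIONS))
    return results

print(cross_lingual_search("hiking mt. fuji in november", "en", ["en", "ja"]))
# -> ['Bring layers for cold summits', 'Autumn is the rainy season on Mt. Fuji']
```

The payoff in the sketch is the second result: information that exists only in Japanese-language sources still reaches an English-speaking searcher.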
Beyond search, Google announced new ways that AI is being used to boost the detail and routing options available in Google Maps. The company said it's planning to make more than 100 AI-driven improvements to Maps in 2021.
One big change: street maps are becoming a lot more sophisticated, incorporating sidewalks, crosswalks, and pedestrian islands into the landscape. At the conference, Google said AI is allowing the company to add these details in more than 50 cities this year. The company is also tailoring the results that show up in Google Maps, so that different results appear at different times and in different places, depending on what you might be interested in. And when choosing which routes to recommend, Google Maps will now factor in how safe those routes are.
[Embedded video: Google Maps demo]
“We’ll take the fastest routes and identify which one is likely to reduce your chances of encountering a hard-braking moment,” said Oren Naim, a Google Maps product director, in a Tuesday blog post. “We’ll automatically recommend that route if the ETA is the same or the difference is minimal.”
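Naim's description amounts to a simple decision rule: among the fastest candidate routes, pick the one least likely to involve hard braking, as long as the ETA penalty is negligible. Here is a minimal sketch of that rule, with the one-minute threshold, the field names, and the risk numbers all assumed for illustration rather than taken from Google:

```python
# Hypothetical sketch of the routing rule Naim describes: prefer the
# lower-risk route whenever it costs little or no extra travel time.

from dataclasses import dataclass

@dataclass
class Route:
    name: str
    eta_minutes: float
    hard_braking_risk: float  # estimated chance of a hard-braking moment

def pick_route(candidates: list[Route], max_eta_penalty: float = 1.0) -> Route:
    """Return the safest route whose ETA is within max_eta_penalty minutes
    of the fastest option; if none qualifies, the fastest route itself wins."""
    fastest = min(candidates, key=lambda r: r.eta_minutes)
    near_fastest = [r for r in candidates
                    if r.eta_minutes - fastest.eta_minutes <= max_eta_penalty]
    return min(near_fastest, key=lambda r: r.hard_braking_risk)

routes = [
    Route("highway", eta_minutes=22.0, hard_braking_risk=0.30),
    Route("side streets", eta_minutes=22.5, hard_braking_risk=0.10),
]
print(pick_route(routes).name)  # -> side streets
```

The interesting design choice is the fallback: safety only breaks ties among near-fastest routes, so a much slower route never wins purely for being safer.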
All these updates showcase how Google’s most advanced technologies are getting even more advanced. For instance, MUM uses a much larger neural network than BERT, the natural language model Google announced back in 2018. Google says that MUM is actually 1,000 times more powerful than BERT.
As this very sophisticated AI is deployed into the real world, concern has grown that the tech could actively harm people, whether through built-in biases or by exacerbating climate change. While Google has emphasized that it wants to be responsible in deploying this tech, the company has also faced ongoing criticism over the treatment of its in-house AI ethics team, as Recode reporter Shirin Ghaffary explained last year.
Now, as these systems make their way into products we use daily, they will serve as a reminder of the immense power Google's AI has to make decisions that affect our everyday lives.