Apple and Google are reportedly in cahoots to integrate features from Google’s Gemini generative AI service into iOS. Bloomberg broke the news, which was later corroborated by The New York Times. If the deal pans out, it will be a huge collaboration between two tech giants who have long duked it out in the hardware and software space.
It also raises lots of questions about how Gemini would function on Apple’s devices—and which company would remain in control. Neither Apple nor Google has publicly addressed the news, and neither company responded to requests for comment before this article was published.
There’s also the possibility that the deal could fall through, seeing as how the hype around such a collaboration is drumming up some unwanted attention. “In the past, this leak would have killed the deal,” says Michael Gartenberg, a technology analyst and former director of marketing at Apple. “The first rule of doing a deal with Apple is don’t talk about Apple.”
But in this case, Gartenberg says, it’s highly likely the deal will in fact pan out. For one, Apple needs it to happen. With nearly all of the most breathless tech innovations of the past year and a half related to AI, Apple needs to prove that it’s in the game, too. Not to mention that Google has announced it is bringing its on-device AI service, Gemini Nano, to the Pixel 8 very soon, a signal that the mobile AI explosion is set to take off.
Apple has trailed the other big gen-AI players like OpenAI, Microsoft, and Google. The company has big plans for its own internal large language models, but whatever tools it’s cooking up are not yet ready to be released into the world. That slowness, Gartenberg says, makes Apple look like it has been caught off guard by the broader generative AI movement.
“The competition is fierce,” says Patrick Moorhead, founder and principal analyst of Moor Insights & Strategy. “You’ve got all of Silicon Valley competing for this hardcore talent, and Apple missed this one.”
There’s a ticking clock putting pressure on the company, too. WWDC, Apple’s big developer conference and product announcement showcase, which usually takes place in June, is looming. As it approaches, simmering expectations about the company’s generative AI strategy will reach a boil.
“An Apple response of just focusing on face computers or adding more widgets is going to feel fairly hollow,” Gartenberg says, because when it comes to AI, “Apple really needs to have something it can show by June 2024. There is a deadline here for people looking at Apple and saying, what is your story?”
Apple clearly feels that pressure. It recently scuttled its self-driving car plans to refocus those resources on its internal generative AI efforts. And now it’s partnering with Google to bring new AI capabilities to its most popular device.
So, assuming the deal does go through, what might Gemini look like on the iPhone?
First off, Gartenberg says it will likely manifest with a distinctly un-Apple label.
“It would probably be something Apple couldn’t hide under its own brand,” he says. “Perhaps it would be a setting where you could select your assistant, where it could be Siri classic or Siri the sequel. And if I’m Google, I’m going to hold out for some kind of branding on this.”
He points out that the default search engine on iOS now is Google Search, and it isn’t rebranded as an Apple service there. Any AI features powered by Gemini would probably warrant the same flashing neon lights, especially at a time when Google is very motivated to show off its AI chops.
Apple will also likely keep its focus on its own ambitions. Siri, the occasionally helpful and much-maligned voice assistant, has long lagged behind other digital assistants. Don’t call it a glow-up, but Apple will likely be looking to Gemini-infused AI advancements to breathe new life into its floundering digital helper.
“I think that they will double down on Siri and be like, ‘This is the Siri we had envisioned when we introduced it 10 years ago,’” Moorhead says. “Essentially, it’s going to do the same thing, with a higher degree of value. It’ll be something that actually works.”
This juiced-up Super Siri could become a fully fledged chatbot, with integrated conversational AI that can stare deep into your life. It’s likely to power real-time language translations, however fraught that may prove. Apple could also use Gemini to power advanced photo and video editing, such as swapping out backgrounds, combining multiple photos to get everyone’s face just right, or using AI-powered tools to manipulate images more extensively.
Image-generation capabilities will probably be on the table too, along the lines of what DALL-E or Midjourney produce. Moorhead suggests Apple could even build this kind of feature into Siri, letting users ask the assistant by voice to “make that background blue” or “make this picture a sunny day” and then see the results right there in the camera roll.
One big feature that Moorhead says is expected on AI-powered phones across the board—not just iPhones, but Android phones too—is enhanced AI snapshots of your life. The idea here is that on-device AI could make a record of everything happening on your phone throughout the day, then compile all that information and keep it at the ready to be recalled later.
“The runaway hit is going to be snapshots,” Moorhead says. “For people like me who don’t remember anything and have to write everything down, this is going to be great.”
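Neither company has described how such a snapshot feature would actually work. But the basic shape Moorhead sketches—log events on the device as they happen, then answer recall queries against that log later—can be illustrated in a few lines of Python. Every name below is hypothetical; this is an illustration of the concept, not anyone’s real implementation:

```python
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class Snapshot:
    """One logged moment of on-device activity (hypothetical schema)."""
    timestamp: datetime
    app: str
    summary: str


@dataclass
class SnapshotLog:
    """A day's worth of snapshots, queryable after the fact."""
    entries: list = field(default_factory=list)

    def record(self, app: str, summary: str, when: datetime = None) -> None:
        # In a real phone this would be fed automatically by the OS;
        # here the caller supplies the event.
        self.entries.append(Snapshot(when or datetime.now(), app, summary))

    def recall(self, keyword: str) -> list:
        # Naive keyword match stands in for whatever semantic search
        # an on-device model would actually do.
        kw = keyword.lower()
        return [
            s for s in self.entries
            if kw in s.summary.lower() or kw in s.app.lower()
        ]


# Example: "What was that flight thing from this morning?"
log = SnapshotLog()
log.record("Mail", "Booked a flight to Tokyo", datetime(2024, 3, 18, 9, 0))
log.record("Messages", "Texted Sam about dinner", datetime(2024, 3, 18, 12, 30))
morning_flight = log.recall("flight")
```

The point of the sketch is the data flow, not the matching: the value of the feature comes from the logging happening passively and the recall being conversational, with an on-device model replacing the keyword search.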
These are, of course, all features that companies like Google and Samsung have touted before, or are at least already working on. But Apple is Apple, and while it is often not the first company to bring new innovations to market, it has a way of making its execution of an idea more enticing or easier to use—even when it’s forced to incorporate another company’s technology.
“There’s an opportunity here for Apple to talk about how the new generation of artificial intelligence meets Apple and Siri, and produces something better,” Gartenberg says. “It’s not going to be enough for them to just deliver the basic generative AI stuff. They’ve got to be able to say they’ve taken the Google stuff and are actually going beyond that.”