At Google's I/O 2024 developer conference, the company spent basically the entire time talking about AI (as expected). During the keynote, Google detailed how artificial intelligence would integrate with Android moving forward, with Gemini stepping into the same virtual assistant role as Google Assistant, but in a more integrated, contextual way.
After releasing the Gemini app back in February, Google has begun fleshing out Gemini for Android with a suite of new features that can infuse AI into more aspects of your everyday life.
Google tried pitching these changes as a ground-up rethinking of Android, but at least for now, they're more complementary to the larger Android experience.
Gemini itself is a case in point. Google is tweaking Gemini's design so that it floats on top of whatever you're doing, rather than taking over the entire screen as it does now. This is the same way Google Assistant presents itself. Unlike Assistant, though, Gemini gives you a large text field for typing prompts, lessening the focus on voice.
The new overlay is meant to represent deeper integration with whatever app you're using and give you contextualized controls. Google gave an example: while watching a YouTube video, pull up Gemini and you'll see an "Ask this video" button that lets you ask questions about the video or summarize its content. You can also do this with PDFs, so long as you subscribe to Gemini Advanced, which supports a longer context window.
Gemini will also work more fluidly within apps, including with drag-and-drop. During the keynote, Google demoed asking the chatbot to generate an image, then dragging the result into a messaging app, dropping it, and sending it to a friend.
Over time, Google says Gemini will become more contextually aware of apps on your phone and make it easier to navigate them with Dynamic Suggestions.
Google is also upgrading Circle to Search, which is already available on more than 100 million Android devices, to help with homework. Specifically, the feature will be able to help students better understand complex physics and math word problems they're stuck on. They'll get a detailed breakdown of how to work through the problem without ever leaving their digital info sheet or syllabus.
The new ability is powered by Google's LearnLM model, which aims to make learning easier with AI. Over time, Google says that you'll be able to use Circle to Search to solve more complex problems involving things like symbolic formulas, diagrams, and graphs.
Google also announced that Gemini Nano, the model built directly into Android (albeit on very few devices), will receive an upgrade called "Gemini Nano with Multimodality." The updated model will let you interact with Gemini using a mix of inputs, including text, photos, videos, and voice, and get answers and information in return.
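For developers curious what that multimodal input looks like in code, the on-device Gemini Nano APIs weren't publicly available at the time of the announcement, but Google's cloud-backed Gemini SDK for Android already accepts mixed image-and-text prompts. Here's a minimal Kotlin sketch using the com.google.ai.client.generativeai SDK as a stand-in; the model name, placeholder API key, and askAboutPhoto helper are illustrative assumptions, not Gemini Nano's actual interface.

```kotlin
import android.graphics.Bitmap
import com.google.ai.client.generativeai.GenerativeModel
import com.google.ai.client.generativeai.type.content

// Illustrative helper: ask a question about a photo in one request.
// Uses the cloud-backed Gemini SDK as a stand-in for the on-device
// Gemini Nano APIs, which weren't public when this was announced.
suspend fun askAboutPhoto(photo: Bitmap, question: String): String? {
    val model = GenerativeModel(
        modelName = "gemini-1.5-flash", // cloud model, not Nano
        apiKey = "YOUR_API_KEY"         // placeholder, supply your own
    )
    // Combine an image and text in a single multimodal prompt,
    // mirroring the mixed-input style described above.
    val response = model.generateContent(
        content {
            image(photo)
            text(question)
        }
    )
    return response.text
}
```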
The model will power features like TalkBack for text descriptions of images and real-time spam notifications during phone calls (which is helpful if, for some reason, you believe the person calling from the unknown number is actually the prince of Egypt and you must wire him $1 million).
These are just some of the AI goodies that Google will be bringing to Android 15 and beyond over time. Some of them will launch first on Pixel, while others will be available for anyone who downloads the Gemini app. How exactly it'll all shake out remains to be seen, but it's clear that if you own and use an Android phone, it's about to get a lot more powerful.