Google is gearing up to transform its Gemini AI into a fully fledged agentic assistant for smartphones with the release of Android 16. According to a report from Android Authority, the update will introduce a new feature called “app functions,” allowing Gemini to perform tasks within third-party apps on behalf of users, making interactions with apps more seamless and automated.
The app functions feature, outlined in developer documentation shipped with the first Android 16 developer preview earlier this month, is poised to expand what Gemini can do. Google defines app functions as specific pieces of functionality that an app offers to the system, which can then be integrated into various system features. In practice, this means Gemini will be able to carry out certain actions inside apps without the user opening the app or performing the task themselves. For example, a food ordering app might expose a function to place an order, and Gemini could execute that action directly, bypassing the app's interface entirely.
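The pattern described above can be illustrated with a minimal sketch: an app registers named functions with a system-level catalogue, and the assistant invokes them by name rather than driving the app's UI. All class, method, and function names below are hypothetical illustrations, not the actual Android 16 API, whose details have not yet been published.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch of the "app functions" idea: an app exposes named
// pieces of functionality to the system, and an assistant executes them
// directly, with no UI navigation. Names here are illustrative only.
public class AppFunctionsSketch {
    // Stand-in for the system-level catalogue of registered app functions.
    static final Map<String, Function<Map<String, String>, String>> registry =
            new HashMap<>();

    static {
        // A food-ordering app registers a "placeOrder" function (hypothetical).
        registry.put("food.placeOrder", params ->
                "Order placed: " + params.get("item") + " x" + params.get("qty"));
    }

    // The assistant invokes a registered function by name on the user's behalf.
    static String invoke(String name, Map<String, String> params) {
        return registry.get(name).apply(params);
    }

    public static void main(String[] args) {
        Map<String, String> params = new HashMap<>();
        params.put("item", "margherita pizza");
        params.put("qty", "1");
        System.out.println(invoke("food.placeOrder", params));
        // prints: Order placed: margherita pizza x1
    }
}
```

The key design point is that the assistant never touches the app's screens: the app advertises a capability, and the system (or an AI acting through it) calls that capability with structured parameters.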
While specifics are still limited, the new functionality is expected to greatly expand the range of actions Gemini can perform within third-party applications. Currently, Gemini can execute only a small number of actions through Gemini Extensions, which provide limited access to native Google apps and a handful of third-party apps such as WhatsApp. These extensions let users send messages, open apps, and adjust device settings; the newly introduced Utilities extension, for instance, gives Gemini the ability to open specific apps or websites and change display and volume levels. However, Gemini remains limited when it comes to more complex, multi-step tasks that involve navigating several layers within an app or service.
The integration of app functions with Gemini AI could mark a significant leap forward in AI-driven automation, with the potential to handle a variety of tasks such as making purchases, scheduling appointments, or managing daily workflows across multiple apps, all without requiring direct user intervention. This development would move Google’s Gemini AI closer to the vision of an “Agentic AI,” an intelligent assistant capable of acting autonomously on the user’s behalf, significantly reducing the amount of manual interaction required.
In addition to the enhancements within Android apps, Google is also reportedly working on a separate project designed to automate web-based tasks. According to a report from The Information, Google is developing an AI agent under the codename “Project Jarvis.” This agent is intended to handle tasks such as booking tickets, conducting research, or completing forms, all within the Google Chrome browser. Like the Android 16 update, Project Jarvis is expected to be powered by a future iteration of the Gemini AI model and tailored for web-based tasks, adding another layer of functionality to the AI assistant across platforms.
Taken together, app functions on Android 16 and Project Jarvis for web tasks point to Google's aim of building a more robust and versatile AI assistant that integrates into both mobile and web experiences. The arrival of Agentic AI could eventually lead to a future where users rely on AI to handle a growing number of tasks across their digital lives, from simple assistant functions to complex workflows, with far less direct interaction with their devices.
These features are still in the early stages of development, and users will have to wait for the full release of Android 16 to see Gemini's new capabilities in action. Even so, app functions represent a promising step forward in the evolution of AI assistants, with the potential to change the way we interact with our devices and the apps that power them.