Google has started rolling out new AI features to Gemini Live, enabling it to analyze a smartphone screen or live camera feed and provide real-time responses. A Google spokesperson confirmed the launch, nearly a year after the company first showcased the underlying “Project Astra” technology.
A Reddit user recently reported seeing the feature on their Xiaomi phone, as first spotted by 9to5Google. The same user later shared a demonstration of Gemini’s screen-reading ability, which allows it to interpret on-screen content and answer related questions. These features are gradually becoming available to Gemini Advanced subscribers through the Google One AI Premium plan.
Another feature now rolling out is live video interpretation, where Gemini can analyze a real-time camera feed and respond to queries about what it sees. In a recent demonstration video released by Google, a user asked Gemini for advice on selecting a paint color for freshly glazed pottery, showcasing the feature's practical applications.

The rollout underscores Google's continued lead in AI-powered assistants as competitors work on their own advancements: Amazon is preparing an upgraded Alexa experience, while Apple has delayed its new Siri features. Meanwhile, Samsung continues to offer Bixby, though Gemini remains the default assistant on its smartphones.