
Gemini Live will learn to peer through your camera lens in a few weeks

At Mobile World Congress, Google confirmed that a long-awaited Gemini AI feature it first teased nearly a year ago is ready for launch. The company's conversational Gemini Live will soon be able to view live video and screen sharing, a feature Google previously demoed as Project Astra. When Gemini's video capabilities arrive, you'll be able to simply show the robot something instead of telling it.

Right now, Google's multimodal AI can process text, images, and various kinds of documents. However, its ability to accept video as an input is spotty at best: sometimes it can summarize a YouTube video, and sometimes it can't, for unknown reasons. Later in March, the Gemini app on Android will get a major update to its video functionality. You'll be able to open your camera to provide Gemini Live a video stream or share your screen as a live video, thus allowing you to pepper Gemini with questions about what it sees.

Gemini Live with video.

It can be hard to keep track of which Google AI project is which; the 2024 Google I/O was largely a celebration of all things Gemini AI. The Astra demo made waves as it demonstrated a more natural way to interact with the AI. In the original video, which you can see below, Google showed how Gemini Live could answer questions in real time as the user swept a phone around a room. It had things to say about code on a computer screen, how speakers work, and a network diagram on a whiteboard. It even remembered where the user left their glasses from an earlier part of the video.
