Google has released a prototype of Project Astra's AR glasses for testing in the real world. The glasses are part of Google's long-term plan to one day ship hardware with augmented reality and multimodal AI capabilities. In the meantime, it will keep releasing demos to draw the attention of consumers, developers, and its competitors. Along […]
Google is slowly peeling back the curtain on its vision to, one day, sell you glasses with augmented reality and multimodal AI capabilities. The company's plans for those glasses, however, are still blurry. At this point, we've seen multiple demos of Project Astra, DeepMind's effort to build real-time, multimodal apps and agents with AI […]
On Wednesday, Google unveiled Gemini 2.0, the next generation of its AI-model family, starting with an experimental release called Gemini 2.0 Flash. The model family can generate text, images, and speech while processing multiple types of input including text, images, audio, and video. It's similar to multimodal AI models like GPT-4o, which powers OpenAI's ChatGPT.
"Gemini 2.0 Flash builds on the success of 1.5 Flash, our most popular model yet for developers, with enhanced performance at similarly fast response times," said Google in a statement. "Notably, 2.0 Flash even outperforms 1.5 Pro on key benchmarks, at twice the speed."
Gemini 2.0 Flash, the smallest model of the 2.0 family in terms of parameter count, launches today through Google's developer platforms like the Gemini API, AI Studio, and Vertex AI. However, its image generation and text-to-speech features remain limited to early access partners until January 2025. Google plans to integrate the tech into products like Android Studio, Chrome DevTools, and Firebase.
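For a sense of what that developer access looks like, here is a minimal sketch of a call through the Gemini API using Google's `google-generativeai` Python SDK. The model identifier "gemini-2.0-flash-exp" and the prompt are assumptions for illustration, not details from the article:

```python
# Minimal sketch: calling Gemini 2.0 Flash via the Gemini API with the
# google-generativeai SDK. Model name "gemini-2.0-flash-exp" is assumed
# based on the experimental release described above.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # key issued via Google AI Studio

# Address the experimental 2.0 Flash release by model name.
model = genai.GenerativeModel("gemini-2.0-flash-exp")

# A text-only request; the API also accepts mixed text/image/audio/video parts.
response = model.generate_content("Summarize Gemini 2.0 Flash in one sentence.")
print(response.text)
```

The same model name should work in AI Studio's playground and, for enterprise deployments, through Vertex AI, though each surface has its own authentication setup.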