
Not to be outdone by OpenAI, Google releases its own β€œreasoning” AI model

19 December 2024 at 13:49

It's been a really busy month for Google as it apparently endeavors to outshine OpenAI with a blitz of AI releases. On Thursday, Google dropped its latest party trick: Gemini 2.0 Flash Thinking Experimental, which is a new AI model that uses runtime "reasoning" techniques similar to OpenAI's o1 to achieve "deeper thinking" on problems fed into it.

The experimental model builds on Google's newly released Gemini 2.0 Flash and runs on its AI Studio platform, but early tests conducted by TechCrunch reporter Kyle Wiggers reveal accuracy issues with some basic tasks, such as incorrectly stating that the word "strawberry" contains two R's.
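The strawberry claim is easy to verify directly; a one-line check in Python settles it:

```python
# Count occurrences of the letter "r" in "strawberry".
# The correct total is 3, not the 2 the model reportedly gave.
count = "strawberry".count("r")
print(count)  # 3
```

Letter-counting trips up language models because they process text as multi-character tokens rather than individual letters, which is part of why it has become a popular informal benchmark.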

These so-called reasoning models differ from standard AI models by incorporating feedback loops that let the model check and revise its own output, similar to techniques we first saw in early 2023 with hobbyist projects like "Baby AGI." The process requires more computing time, often adding seconds or minutes to response times. Companies have turned to reasoning models because traditional training-time scaling methods have been showing diminishing returns.


Google releases its own β€˜reasoning’ AI model

19 December 2024 at 09:22

Google has released what it’s calling a new β€œreasoning” AI model β€” but it’s in the experimental stages, and from our brief testing, there’s certainly room for improvement. The new model, called Gemini 2.0 Flash Thinking Experimental (a mouthful, to be sure), is available in AI Studio, Google’s AI prototyping platform. A model card describes […]

Β© 2024 TechCrunch. All rights reserved. For personal use only.

Google goes β€œagentic” with Gemini 2.0’s ambitious AI agent features

11 December 2024 at 11:23

On Wednesday, Google unveiled Gemini 2.0, the next generation of its AI-model family, starting with an experimental release called Gemini 2.0 Flash. The model family can generate text, images, and speech while processing multiple types of input including text, images, audio, and video. It's similar to multimodal AI models like GPT-4o, which powers OpenAI's ChatGPT.

"Gemini 2.0 Flash builds on the success of 1.5 Flash, our most popular model yet for developers, with enhanced performance at similarly fast response times," said Google in a statement. "Notably, 2.0 Flash even outperforms 1.5 Pro on key benchmarks, at twice the speed."

Gemini 2.0 Flash, the smallest model of the 2.0 family in terms of parameter count, launches today through Google's developer platforms, including the Gemini API, AI Studio, and Vertex AI. However, its image generation and text-to-speech features remain limited to early access partners until January 2025. Google plans to integrate the tech into products like Android Studio, Chrome DevTools, and Firebase.


Gemini 2.0, Google’s newest flagship AI, can generate text, images, and speech

11 December 2024 at 07:30

Google’s next major AI model has arrived to combat a slew of new offerings from OpenAI. On Wednesday, Google announced Gemini 2.0 Flash, which the company says can natively generate images and audio in addition to text. 2.0 Flash can also use third-party apps and services, allowing it to tap into Google Search, execute code, […]


Google Gemini can now do more in-depth research

11 December 2024 at 07:30

Google is bringing a new capability to Gemini, its chatbot platform, that enables the AI to perform "deep research" on a subject.

