
The AI war between Google and OpenAI has never been more heated

20 December 2024 at 07:44

Over the past month, we've seen a rapid cadence of notable AI-related announcements and releases from both Google and OpenAI, and it's been making the AI community's head spin. It has also poured fuel on the fire of the OpenAI-Google rivalry, an accelerating game of one-upmanship taking place unusually close to the Christmas holiday.

"How are people surviving with the firehose of AI updates that are coming out," wrote one user on X last Friday, which is still a hotbed of AI-related conversation. "in the last <24 hours we got gemini flash 2.0 and chatGPT with screenshare, deep research, pika 2, sora, chatGPT projects, anthropic clio, wtf it never ends."

Rumors travel quickly in the AI world, and people in the AI industry had been expecting OpenAI to ship some major products in December. Once OpenAI announced "12 days of OpenAI" earlier this month, Google jumped into gear and seemingly decided to try to one-up its rival on several counts. So far, the strategy appears to be working, but it's coming at the cost of the rest of the world being able to absorb the implications of the new releases.

© RenataAphotography via Getty Images

Not to be outdone by OpenAI, Google releases its own "reasoning" AI model

19 December 2024 at 13:49

It's been a really busy month for Google as it apparently endeavors to outshine OpenAI with a blitz of AI releases. On Thursday, Google dropped its latest party trick: Gemini 2.0 Flash Thinking Experimental, which is a new AI model that uses runtime "reasoning" techniques similar to OpenAI's o1 to achieve "deeper thinking" on problems fed into it.

The experimental model builds on Google's newly released Gemini 2.0 Flash and runs on its AI Studio platform, but early tests conducted by TechCrunch reporter Kyle Wiggers reveal accuracy issues with some basic tasks, such as incorrectly stating that the word "strawberry" contains two R's.

These so-called reasoning models differ from standard AI models by incorporating feedback loops of self-checking mechanisms, similar to techniques we first saw in early 2023 with hobbyist projects like "Baby AGI." The process requires more computing time, often adding extra seconds or minutes to response times. Companies have turned to reasoning models as traditional scaling methods at training time have been showing diminishing returns.
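
For readers unfamiliar with the pattern, here is a rough sketch of what such a self-checking feedback loop can look like. This is an illustration only, not Google's or OpenAI's actual implementation; generate() and critique() are hypothetical stand-ins for calls to an underlying language model.

```python
# Illustrative sketch of a self-checking "reasoning" loop.
# generate() and critique() are hypothetical placeholders for model calls;
# they are not part of any real vendor SDK.

def generate(prompt: str) -> str:
    """Draft an answer with a base language model (placeholder)."""
    raise NotImplementedError

def critique(prompt: str, draft: str) -> tuple[bool, str]:
    """Check a draft and return (looks_correct, feedback) (placeholder)."""
    raise NotImplementedError

def answer_with_reasoning(prompt: str, max_rounds: int = 3) -> str:
    """Draft an answer, then repeatedly self-check and revise it.

    Every extra round is another model call, which is why reasoning-style
    models add seconds or minutes to response times.
    """
    draft = generate(prompt)
    for _ in range(max_rounds):
        ok, feedback = critique(prompt, draft)
        if ok:
            break
        draft = generate(
            f"{prompt}\n\nPrevious attempt:\n{draft}\n\nReviewer feedback:\n{feedback}"
        )
    return draft
```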

© Alan Schein via Getty Images

Google releases its own 'reasoning' AI model

19 December 2024 at 09:22

Google has released what it's calling a new "reasoning" AI model, but it's in the experimental stages, and from our brief testing, there's certainly room for improvement. The new model, called Gemini 2.0 Flash Thinking Experimental (a mouthful, to be sure), is available in AI Studio, Google's AI prototyping platform. A model card describes […]

© 2024 TechCrunch. All rights reserved. For personal use only.

Google goes "agentic" with Gemini 2.0's ambitious AI agent features

11 December 2024 at 11:23

On Wednesday, Google unveiled Gemini 2.0, the next generation of its AI-model family, starting with an experimental release called Gemini 2.0 Flash. The model family can generate text, images, and speech while processing multiple types of input including text, images, audio, and video. It's similar to multimodal AI models like GPT-4o, which powers OpenAI's ChatGPT.

"Gemini 2.0 Flash builds on the success of 1.5 Flash, our most popular model yet for developers, with enhanced performance at similarly fast response times," said Google in a statement. "Notably, 2.0 Flash even outperforms 1.5 Pro on key benchmarks, at twice the speed."

Gemini 2.0 Flash, which is the smallest model of the 2.0 family in terms of parameter count, launches today through Google's developer platforms like Gemini API, AI Studio, and Vertex AI. However, its image generation and text-to-speech features remain limited to early access partners until January 2025. Google plans to integrate the tech into products like Android Studio, Chrome DevTools, and Firebase.
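
If you want to try the model yourself, a minimal call through the google-generativeai Python SDK looks roughly like the sketch below. The model identifier and prompt are assumptions for illustration; check Google's AI Studio documentation for the current names.

```python
# Minimal sketch: calling Gemini 2.0 Flash through the google-generativeai
# Python SDK. The model name "gemini-2.0-flash-exp" is an assumption for
# illustration; consult AI Studio for the identifier currently in use.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # replace with your own key

model = genai.GenerativeModel("gemini-2.0-flash-exp")
response = model.generate_content("Explain what makes a model 'multimodal'.")
print(response.text)
```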

© Google

Google's AI Overviews will soon be able to answer math and coding questions

11 December 2024 at 07:30

Google's AI Overviews feature on Search is getting an upgrade, thanks to the company's newly launched Gemini 2.0 model.

© 2024 TechCrunch. All rights reserved. For personal use only.
