Before yesterday: Main stream

Cohere is quietly working with Palantir to deploy its AI models

16 December 2024 at 15:27

The Canadian AI startup has quietly partnered with Palantir to deploy Cohere's models for unnamed Palantir customers.


Report: Google told FTC Microsoft's OpenAI deal is killing AI competition

Google reportedly wants the US Federal Trade Commission (FTC) to end Microsoft's exclusive cloud deal with OpenAI that requires anyone wanting access to OpenAI's models to go through Microsoft's servers.

Someone "directly involved" in Google's effort told The Information that Google's request came after the FTC began broadly probing how Microsoft's cloud computing business practices may be harming competition.

As part of the FTC's investigation, the agency apparently asked Microsoft's biggest rivals if the exclusive OpenAI deal was "preventing them from competing in the burgeoning artificial intelligence market," multiple sources told The Information. Google reportedly was among those arguing that the deal harms competition by saddling rivals with extra costs and blocking them from hosting OpenAI's latest models themselves.


© JASON REDMOND / Contributor | AFP

Inflection AI CEO says it's done trying to make next-generation AI models

26 November 2024 at 06:00

Just last year, Inflection AI was as hot as a startup could be, releasing best-in-class AI models it claimed could outperform technology from OpenAI, Meta, and Google. That's a stark contrast with today, as Inflection's new CEO tells TechCrunch that his startup is simply no longer trying to compete on that front. Between then […]


Child safety org flags new CSAM with AI trained on real child sex abuse images

For years, hashing technology has made it possible for platforms to automatically detect known child sexual abuse materials (CSAM) and stop kids from being retraumatized online. However, rapidly detecting new or unknown CSAM has remained a bigger challenge for platforms as new victims continue to be harmed. Now, AI may be ready to change that.
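The hash-matching approach described above can be sketched in a few lines; this is only an illustration with hypothetical names, not any platform's actual implementation, and production systems match against databases maintained by groups such as NCMEC using perceptual hashes (for example, Microsoft's PhotoDNA) that survive resizing and re-encoding, rather than the exact cryptographic hash used here.

import hashlib

# Hypothetical list of hashes of previously identified material.
KNOWN_HASHES: set[str] = set()

def matches_known_material(image_bytes: bytes) -> bool:
    """Return True if this exact file matches a previously flagged item."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_HASHES

Because an exact hash only catches files already in the list, it cannot flag newly produced material, which is the gap the classifier described below is meant to address.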

Today, a prominent child safety organization, Thorn, in partnership with a leading cloud-based AI solutions provider, Hive, announced the release of an API expanding access to an AI model designed to flag unknown CSAM. It's the earliest AI technology striving to expose unreported CSAM at scale.

An expansion of Thorn's CSAM detection tool, Safer, the AI feature uses "advanced machine learning (ML) classification models" to "detect new or previously unreported CSAM," generating a "risk score to make human decisions easier and faster."
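The excerpt doesn't specify the API's actual interface; purely as an illustration of how a classifier's risk score typically feeds a moderation pipeline (the names and threshold below are hypothetical, not Thorn's or Hive's), the output might be routed like this:

from dataclasses import dataclass

@dataclass
class ClassifierResult:
    risk_score: float  # 0.0 = likely benign, 1.0 = high risk

def route(result: ClassifierResult, review_threshold: float = 0.8) -> str:
    """Route content by risk score; the score prioritizes, a human decides."""
    if result.risk_score >= review_threshold:
        return "escalate_for_human_review"
    return "standard_queue"

The point of the score, as the announcement frames it, is triage: it does not make the final call but determines what human reviewers look at first.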


© Aurich Lawson | Getty Images

Current AI scaling laws are showing diminishing returns, forcing AI labs to change course

20 November 2024 at 06:00

AI labs traveling the road to super-intelligent systems are realizing they might have to take a detour. "AI scaling laws," the methods and expectations that labs have used to increase the capabilities of their models for the last five years, are now showing signs of diminishing returns, according to several AI investors, founders, and CEOs […]
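For context the excerpt doesn't include: a commonly cited form of a pretraining scaling law (the "Chinchilla" fit from Hoffmann et al., 2022) models loss L as a function of parameter count N and training tokens D:

L(N, D) = E + A / N^α + B / D^β

where E, A, B, α, and β are empirically fitted constants. "Diminishing returns" refers to this curve flattening: each additional order of magnitude of parameters or data buys a smaller drop in loss.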

