Google reportedly wants the US Federal Trade Commission (FTC) to end Microsoft's exclusive cloud deal with OpenAI that requires anyone wanting access to OpenAI's models to go through Microsoft's servers.
Someone "directly involved" in Google's effort told The Information that Google's request came after the FTC began broadly probing how Microsoft's cloud computing business practices may be harming competition.
As part of the FTC's investigation, the agency apparently asked Microsoft's biggest rivals if the exclusive OpenAI deal was "preventing them from competing in the burgeoning artificial intelligence market," multiple sources told The Information. Google reportedly was among those arguing that the deal harms competition by saddling rivals with extra costs and blocking them from hosting OpenAI's latest models themselves.
Just last year, Inflection AI was as hot as a startup could be, releasing best-in-class AI models it claimed could outperform technology from OpenAI, Meta, and Google. That's a stark contrast compared to today, as Inflection's new CEO tells TechCrunch that his startup is simply no longer trying to compete on that front. Between then […]
For years, hashing technology has made it possible for platforms to automatically detect known child sexual abuse materials (CSAM) to stop kids from being retraumatized online. However, rapidly detecting new or unknown CSAM remained a bigger challenge for platforms as new victims continued to be victimized. Now, AI may be ready to change that.
Today, a prominent child safety organization, Thorn, in partnership with a leading cloud-based AI solutions provider, Hive, announced the release of an API expanding access to an AI model designed to flag unknown CSAM. It's the earliest use of AI technology aimed at exposing unreported CSAM at scale.
An expansion of Thorn's CSAM detection tool, Safer, the AI feature uses "advanced machine learning (ML) classification models" to "detect new or previously unreported CSAM," generating a "risk score to make human decisions easier and faster."
AI labs traveling the road to super-intelligent systems are realizing they might have to take a detour. "AI scaling laws," the methods and expectations that labs have used to increase the capabilities of their models for the last five years, are now showing signs of diminishing returns, according to several AI investors, founders, and CEOs […]