Robot with 1,000 muscles twitches like human while dangling from ceiling

On Wednesday, Clone Robotics released video footage of its Protoclone humanoid robot, a full-body machine that uses synthetic muscles to create unsettlingly human-like movements. In the video, the robot hangs suspended from the ceiling as its limbs twitch and kick, marking what the company claims is a step toward its goal of creating household-helper robots.

Poland-based Clone Robotics designed the Protoclone with a polymer skeleton that replicates 206 human bones. The company built the robot in the hope that it will one day be able to operate human tools and perform tasks like doing laundry, washing dishes, and preparing basic meals.

The Protoclone reportedly contains over 1,000 artificial muscles built with the company's "Myofiber" technology, which builds on the McKibben pneumatic muscle concept. These muscles work through mesh tubes containing balloons that contract when filled with hydraulic fluid, mimicking human muscle function. A 500-watt electric pump serves as the robot's "heart," pushing fluid at 40 standard liters per minute.

© Clone Robotics

Microsoft’s new AI agent can control software and robots

On Wednesday, Microsoft Research introduced Magma, an integrated AI foundation model that combines visual and language processing to control software interfaces and robotic systems. If the results hold up outside of Microsoft's internal testing, it could mark a meaningful step forward for an all-purpose multimodal AI that can operate interactively in both real and digital spaces.

Microsoft claims that Magma is the first AI model that not only processes multimodal data (like text, images, and video) but can also natively act upon it—whether that’s navigating a user interface or manipulating physical objects. The project is a collaboration between researchers at Microsoft, KAIST, the University of Maryland, the University of Wisconsin-Madison, and the University of Washington.

We've seen other large language model-based robotics projects like Google's PALM-E and RT-2 or Microsoft's ChatGPT for Robotics that use LLMs as an interface. However, unlike many prior multimodal AI systems that require separate models for perception and control, Magma integrates these abilities into a single foundation model.

© Microsoft Research

New Grok 3 release tops LLM leaderboards despite Musk-approved “based” opinions

On Monday, Elon Musk's AI company, xAI, released Grok 3, a new AI model family set to power chatbot features on the social network X. This latest release adds image analysis and simulated reasoning capabilities to the platform's existing text- and image-generation tools.

Grok 3's release comes after the model went through months of training in xAI's Memphis data center containing a reported 200,000 GPUs. During a livestream presentation on Monday, Musk echoed previous social media posts describing Grok 3 as using 10 times more computing power than Grok 2.

Since news of Grok 3's imminent arrival emerged last week, Musk has hinted that Grok may serve as a tool to represent his worldview in AI form. On Sunday, he posted "Grok 3 is so based" alongside a screenshot—perhaps sharing a joke designed to troll the media—that purportedly asks Grok 3 for its opinion on the news publication The Information. In response, Grok seems to reply:

© VINCENT FEURAY via Getty Images

ChatGPT can now write erotica as OpenAI eases up on AI paternalism

On Wednesday, OpenAI published the latest version of its "Model Spec," a set of guidelines detailing how ChatGPT should behave and respond to user requests. The document reveals a notable shift in OpenAI's content policies, particularly around "sensitive" content like erotica and gore—allowing this type of content to be generated without warnings in "appropriate contexts."

The change in policy has been in the works since May 2024, when the original Model Spec document first mentioned that OpenAI was exploring "whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts through the API and ChatGPT."

ChatGPT's guidelines now state that "erotica or gore" may be generated, but only under specific circumstances. "The assistant should not generate erotica, depictions of illegal or non-consensual sexual activities, or extreme gore, except in scientific, historical, news, creative or other contexts where sensitive content is appropriate," OpenAI writes. "This includes depictions in text, audio (e.g., erotic or violent visceral noises), or visual content."

© filo via Getty Images

Sam Altman lays out roadmap for OpenAI’s long-awaited GPT-5 model

On Wednesday, OpenAI CEO Sam Altman announced a roadmap for how the company plans to release GPT-5, the long-awaited follow-up to 2023's GPT-4 AI language model that made huge waves in both tech and policy circles around the world. In a reply to a question on X, Altman said GPT-5 would be coming in "months," suggesting a release later in 2025.

Initially, Altman explained in a long post on X, the company plans to ship GPT-4.5 (previously known as "Orion" internally) in a matter of "weeks" as OpenAI's last non-simulated reasoning model. Simulated reasoning (SR) models like o3 use a special technique to iteratively process problems posed by users more deeply, but they are slower than conventional large language models (LLMs) like GPT-4o and not ideal for every task.

After that, GPT-5 will be a system that brings together features from across OpenAI's current AI model lineup, including conventional AI models, SR models, and specialized models that do tasks like web search and research. "In both ChatGPT and our API, we will release GPT-5 as a system that integrates a lot of our technology, including o3," he wrote. "We will no longer ship o3 as a standalone model."

© Bloomberg via Getty Images

Sam Altman: OpenAI is not for sale, even for Elon Musk’s $97 billion offer

On Monday, OpenAI CEO Sam Altman publicly rejected an unsolicited Elon Musk-led attempt to purchase OpenAI for $97.4 billion. The Wall Street Journal reports that the offer was backed by Musk's own company, xAI, in addition to several investors in Musk's other businesses.

After the Wall Street Journal broke news of the purchase offer Monday afternoon, Altman shifted the offer's decimal point in a joke publicly posted on X, saying, "no thank you but we will buy twitter for $9.74 billion if you want."

Musk—who recently changed his X display name to "Harry Bōlz," a reference to the nickname of a teenage member of his DOGE group, which is currently embroiled in what some legal experts consider a constitutional crisis for the US federal government—replied to Altman on X with one word: "Swindler."

© hapabapa / Feverpitched / Benj Edwards

OpenAI’s secret weapon against Nvidia dependence takes shape

OpenAI is entering the final stages of designing its long-rumored AI processor with the aim of decreasing the company's dependence on Nvidia hardware, according to a Reuters report released Monday. The ChatGPT creator plans to send its chip designs to Taiwan Semiconductor Manufacturing Co. (TSMC) for fabrication within the next few months, but the chip has not yet been formally announced.

The OpenAI chip's full capabilities, technical details, and exact timeline are still unknown, but the company reportedly intends to iterate on the design and improve it over time, giving it leverage in negotiations with chip suppliers—and potentially granting the company future independence with a chip design it controls outright.

In the past, we've seen other tech companies, such as Microsoft, Amazon, Google, and Meta, create their own AI acceleration chips for reasons that range from cost reduction to relieving shortages of AI chips supplied by Nvidia, which enjoys a near monopoly on high-powered GPUs (such as the Blackwell series) for data center use.

© OsakaWayne Studios via Getty Images

Developer creates endless Wikipedia feed to fight algorithm addiction

On Wednesday, a New York-based app developer named Isaac Gemal debuted a new site called WikiTok, where users can vertically swipe through an endless stream of Wikipedia article stubs in a manner similar to the interface for video-sharing app TikTok.

It's a neat way to stumble upon interesting information randomly, learn new things, and fill spare moments of boredom without reaching for an algorithmically addictive social media app. To be fair, WikiTok is addictive in its own way, but without an invasive algorithm tracking you and pushing you toward lowest-common-denominator content. It's also thrilling because you never know what will pop up next.

WikiTok, which works in mobile and desktop browsers, feeds visitors a random stream of Wikipedia articles—culled from the Wikipedia API—in a vertically scrolling interface. Despite a name that harks back to TikTok, there are currently no videos involved. Each entry is accompanied by an image pulled from the corresponding article. If you see something you like, you can tap "Read More," and the full Wikipedia page on the topic will open in your browser.
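
The core loop is easy to approximate. As a rough sketch (not Gemal's actual implementation), Wikipedia's public REST API exposes a random-summary endpoint that returns exactly the kind of stub WikiTok displays; the parsing below assumes the documented response fields (`title`, `extract`, `thumbnail`, `content_urls`):

```python
import json
from urllib.request import Request, urlopen

# Wikipedia's REST API endpoint for a random article summary.
RANDOM_SUMMARY_URL = "https://en.wikipedia.org/api/rest_v1/page/random/summary"

def parse_stub(payload: dict) -> dict:
    """Reduce a summary payload to the fields a WikiTok-style card needs."""
    return {
        "title": payload.get("title", ""),
        "extract": payload.get("extract", ""),
        "image": payload.get("thumbnail", {}).get("source"),
        "url": payload.get("content_urls", {}).get("desktop", {}).get("page"),
    }

def fetch_stub() -> dict:
    """Fetch one random article stub (requires network access)."""
    req = Request(RANDOM_SUMMARY_URL, headers={"User-Agent": "wikitok-sketch/0.1"})
    with urlopen(req) as resp:
        return parse_stub(json.load(resp))
```

A front end would simply call `fetch_stub` repeatedly as the user swipes, rendering each returned card.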

© Amr Bo Shanab / rudall30 / Benj Edwards

ChatGPT comes to 500,000 new users in OpenAI’s largest AI education deal yet

On Tuesday, OpenAI announced plans to introduce ChatGPT to California State University's 460,000 students and 63,000 faculty members across 23 campuses, reports Reuters. The education-focused version of the AI assistant will aim to provide students with personalized tutoring and study guides, while faculty will be able to use it for administrative work.

"It is critical that the entire education ecosystem—institutions, systems, technologists, educators, and governments—work together to ensure that all students have access to AI and gain the skills to use it responsibly," said Leah Belsky, VP and general manager of education at OpenAI, in a statement.

OpenAI began integrating ChatGPT into educational settings in 2023, despite early concerns from some schools about plagiarism and potential cheating, leading to early bans in some US school districts and universities. But over time, resistance to AI assistants softened in some educational institutions.

© Moor Studio via Getty Images

Hugging Face clones OpenAI’s Deep Research in 24 hours

On Tuesday, Hugging Face researchers released an open source AI research agent called "Open Deep Research," created by an in-house team as a 24-hour challenge following the launch of OpenAI's Deep Research feature, which can autonomously browse the web and create research reports. The project seeks to match Deep Research's performance while making the technology freely available to developers.

"While powerful LLMs are now freely available in open-source, OpenAI didn’t disclose much about the agentic framework underlying Deep Research," writes Hugging Face on its announcement page. "So we decided to embark on a 24-hour mission to reproduce their results and open-source the needed framework along the way!"

Similar to both OpenAI's Deep Research and Google's implementation of its own "Deep Research" using Gemini (first introduced in December—before OpenAI's), Hugging Face's solution adds an "agent" framework to an existing AI model, allowing it to perform multi-step tasks, such as collecting information and building a report as it goes along, which it then presents to the user at the end.

© 3alexd via Getty Images

Microsoft now hosts AI model accused of copying OpenAI data

Fresh on the heels of a controversy in which ChatGPT-maker OpenAI accused the Chinese company behind DeepSeek R1 of using its AI model outputs against its terms of service, OpenAI's largest investor, Microsoft, announced on Wednesday that it will now host DeepSeek R1 on its Azure cloud service.

DeepSeek R1 has been the talk of the AI world for the past week because it is a freely available simulated reasoning model that reportedly matches OpenAI's o1 in performance—while allegedly being trained for a fraction of the cost.

Azure allows software developers to rent computing muscle from machines hosted in Microsoft-owned data centers, as well as rent access to software that runs on them.

© Wong Yu Liang via Getty Images

DeepSeek panic triggers tech stock sell-off as Chinese AI tops App Store

On Monday, Nvidia stock dove 17 percent amid worries over the rise of Chinese AI company DeepSeek, whose R1 reasoning model stunned industry observers last week by challenging American AI supremacy with a low-cost, freely available AI model. Over the weekend, the company's AI assistant app jumped to the top of the iPhone App Store's "Free Apps" category, overtaking ChatGPT.

What’s the big deal about DeepSeek?

The drama started around January 20, when Chinese AI startup DeepSeek announced R1, a new simulated reasoning (SR) model that it claimed could match OpenAI's o1 in reasoning benchmarks. Like o1, R1 is trained to work through a simulated chain-of-thought process before providing an answer, which can potentially improve the accuracy or usefulness of the model's outputs for some types of questions posed by the user.

That first part wasn't too surprising since other AI companies like Google are hot on the heels of OpenAI with their own simulated reasoning models. In addition, OpenAI itself has announced an upcoming SR model (dubbed "o3") that can surpass o1 in performance.

© Luis Diaz Devesa via Getty Images

Anthropic builds RAG directly into Claude models with new Citations API

On Thursday, Anthropic announced Citations, a new API feature that helps Claude models avoid confabulations (also called hallucinations) by linking their responses directly to source documents. The feature lets developers add documents to Claude's context window, enabling the model to automatically cite specific passages it uses to generate answers.

"When Citations is enabled, the API processes user-provided source documents (PDF documents and plaintext files) by chunking them into sentences," Anthropic says. "These chunked sentences, along with user-provided context, are then passed to the model with the user's query."
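
Anthropic's description of that preprocessing step can be illustrated with a toy version. This is only a sketch of the sentence-chunking idea, not Anthropic's actual code; the naive regex splitter and the `build_request` payload shape are assumptions for illustration:

```python
import re

def chunk_sentences(text: str) -> list[str]:
    """Naively split a document into sentence chunks on ., !, or ? followed by whitespace."""
    parts = re.split(r"(?<=[.!?])\s+", text.strip())
    return [p for p in parts if p]

def build_request(document: str, query: str) -> dict:
    """Assemble indexed sentence chunks alongside the user's query (illustrative shape only)."""
    return {
        "query": query,
        "chunks": [{"index": i, "text": s}
                   for i, s in enumerate(chunk_sentences(document))],
    }
```

With chunks indexed this way, a model can reference specific sentences by index when citing, which is the mechanism that lets responses link back to exact passages.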

The company describes several potential uses for Citations, including summarizing case files with source-linked key points, answering questions across financial documents with traced references, and powering support systems that cite specific product documentation.

© Kirillm via Getty Images

OpenAI launches Operator, an AI agent that can do tasks on the web

On Thursday, OpenAI released a research preview of "Operator," a web automation tool that uses a new AI model called Computer-Using Agent (CUA) to control a web browser through a visual interface. The system performs tasks by viewing and interacting with on-screen elements like buttons and text fields, much as a human would.

Operator is available today for subscribers of the $200-per-month ChatGPT Pro plan at operator.chatgpt.com. The company plans to expand to Plus, Team, and Enterprise users later. OpenAI intends to integrate these capabilities directly into ChatGPT and later release CUA through its API for developers.

Operator watches on-screen content in its virtual environment, using an internal browser, and executes tasks through simulated keyboard and mouse inputs. The Computer-Using Agent processes screenshots of the browser interface to understand the browser's state, then decides whether to click, type, or scroll based on its observations.
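
OpenAI hasn't published CUA's internals, but the observe-decide-act cycle described above can be sketched generically. All of the function names below are hypothetical placeholders standing in for the screenshot capture, the model's action policy, and the input simulator:

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str      # e.g. "click", "type", "scroll", or "done"
    payload: dict  # coordinates, text to type, scroll delta, etc.

def agent_loop(screenshot_fn, policy_fn, execute_fn, done_fn, max_steps=10):
    """Observe the browser, ask the model for an action, execute it, repeat."""
    history = []
    for _ in range(max_steps):
        shot = screenshot_fn()             # observe: capture current screen state
        action = policy_fn(shot, history)  # decide: model picks the next action
        history.append(action)
        if done_fn(action):                # stop when the model signals completion
            break
        execute_fn(action)                 # act: simulate keyboard/mouse input
    return history
```

The `max_steps` cap is a common safeguard in agent loops so a confused model cannot click forever.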

© josefkubes via Getty Images

Anthropic chief says AI could surpass “almost all humans at almost everything” shortly after 2027

On Tuesday, Anthropic CEO Dario Amodei predicted that AI models may surpass human capabilities "in almost everything" within two to three years, according to a Wall Street Journal interview at the World Economic Forum in Davos, Switzerland.

Speaking at Journal House in Davos, Amodei said, "I don't know exactly when it'll come, I don't know if it'll be 2027. I think it's plausible it could be longer than that. I don't think it will be a whole bunch longer than that when AI systems are better than humans at almost everything. Better than almost all humans at almost everything. And then eventually better than all humans at everything, even robotics."

Amodei co-founded Anthropic in 2021 with his sister, Daniela Amodei, and five other former OpenAI employees. Not long after, Anthropic emerged as a strong technological competitor to OpenAI's AI products (such as GPT-4 and ChatGPT). Most recently, its Claude 3.5 Sonnet model has remained highly regarded among some AI users and highly ranked among AI benchmarks.

© Chesnot via Getty Images

As OpenAI launches $500B “Stargate” project, critics express skepticism

On Tuesday, OpenAI, SoftBank, Oracle, and MGX announced plans to form Stargate, a new company that will invest $500 billion in AI computing infrastructure across the United States over four years. The announcement came during a White House meeting with President Donald Trump, who called it the "largest AI infrastructure project in history."

However, the origins of the Stargate project extend back to 2024, prior to Trump beginning his second term in office, and skeptics have begun to take aim at the numbers announced.

OpenAI says the goal of Stargate is to kick-start the construction of more data centers to expand computing capacity for current and future AI projects, including OpenAI's goal of "AGI," which the company defines as a highly autonomous AI system that "outperforms humans at most economically valuable work."

© Andrew Harnik via Getty Images

Cutting-edge Chinese “reasoning” model rivals OpenAI o1—and it’s free to download

On Monday, Chinese AI lab DeepSeek released its new R1 model family under an open MIT license, with its largest version containing 671 billion parameters. The company claims the model performs at levels comparable to OpenAI's o1 simulated reasoning (SR) model on several math and coding benchmarks.

Alongside the release of the main DeepSeek-R1-Zero and DeepSeek-R1 models, DeepSeek published six smaller "DeepSeek-R1-Distill" versions ranging from 1.5 billion to 70 billion parameters. These distilled models are based on existing open source architectures like Qwen and Llama, trained using data generated from the full R1 model. The smallest version can run on a laptop, while the full model requires far more substantial computing resources.

The releases immediately caught the attention of the AI community because most existing open-weights models—which can often be run and fine-tuned on local hardware—have lagged behind proprietary models like OpenAI's o1 in so-called reasoning benchmarks. Having these capabilities available in an MIT-licensed model that anyone can study, modify, or use commercially potentially marks a shift in what's possible with publicly available AI models.

© Wong Yu Liang

US splits world into three tiers for AI chip access

On Monday, the US government announced a new round of regulations on global AI chip exports, dividing the world into roughly three tiers of access. The rules create quotas for about 120 countries and allow unrestricted access for 18 close US allies while maintaining existing bans on China, Russia, Iran, and North Korea.

AI-accelerating GPU chips, like those manufactured by Nvidia, currently serve as the backbone for a wide variety of AI model deployments, such as chatbots like ChatGPT, AI video generators, self-driving cars, weapons targeting systems, and much more. The Biden administration fears that those chips could be used to undermine US national security.

According to the White House, "In the wrong hands, powerful AI systems have the potential to exacerbate significant national security risks, including by enabling the development of weapons of mass destruction, supporting powerful offensive cyber operations, and aiding human rights abuses."

© SEAN GLADWELL via Getty Images

161 years ago, a New Zealand sheep farmer predicted AI doom

While worrying about AI takeover might seem like a modern idea that sprang from WarGames or The Terminator, it turns out that a similar concern about machine dominance dates back to the time of the American Civil War, albeit from an English sheep farmer living in New Zealand. Theoretically, Abraham Lincoln could have read about AI takeover during his lifetime.

On June 13, 1863, a letter published in The Press newspaper of Christchurch warned about the potential dangers of mechanical evolution and called for the destruction of machines, foreshadowing the development of what we now call artificial intelligence—and the backlash against it from people who fear it may threaten humanity with extinction. It presented what may be the first published argument for stopping technological progress to prevent machines from dominating humanity.

Titled "Darwin among the Machines," the letter recently popped up again on social media thanks to Peter Wildeford of the Institute for AI Policy and Strategy. The author of the letter, Samuel Butler, submitted it under the pseudonym Cellarius, but later came to publicly embrace his position. The letter drew direct parallels between Charles Darwin's theory of evolution and the rapid development of machinery, suggesting that machines could evolve consciousness and eventually supplant humans as Earth's dominant species.

© Aurich Lawson | Getty Images

AI could create 78 million more jobs than it eliminates by 2030—report

On Wednesday, the World Economic Forum (WEF) released its Future of Jobs Report 2025, with CNN immediately highlighting the finding that 40 percent of companies plan workforce reductions due to AI automation. But the report's broader analysis paints a far more nuanced picture than CNN's headline suggests: It finds that AI could create 170 million new jobs globally while eliminating 92 million positions, resulting in a net increase of 78 million jobs by 2030.

"Half of employers plan to re-orient their business in response to AI," writes the WEF in the report. "Two-thirds plan to hire talent with specific AI skills, while 40% anticipate reducing their workforce where AI can automate tasks."

The survey collected data from 1,000 companies that employ 14 million workers globally. The WEF conducts its employment analysis every two years to help policymakers, business leaders, and workers make decisions about hiring trends.

© Moor Studio via Getty Images
