Yesterday — 19 May 2025

Labor dispute erupts over AI-voiced Darth Vader in Fortnite

On Monday, SAG-AFTRA filed an unfair labor practice charge with the National Labor Relations Board against Epic subsidiary Llama Productions for implementing an AI-generated Darth Vader voice in Fortnite on Friday without first notifying or bargaining with the union, as their contract requires.

Llama Productions is the official signatory to SAG-AFTRA's collective bargaining agreement for Fortnite, making it legally responsible for adhering to the union's terms regarding the employment of voice actors and other performers.

"We celebrate the right of our members and their estates to control the use of their digital replicas and welcome the use of new technologies," SAG-AFTRA stated in a news release. "However, we must protect our right to bargain terms and conditions around uses of voice that replace the work of our members, including those who previously did the work of matching Darth Vader's iconic rhythm and tone in video games."

© Sunset Boulevard/Corbis via Getty Images

Before yesterday

The empire strikes back with F-bombs: AI Darth Vader goes rogue with profanity, slurs

For a short time on Friday, Darth Vader could drop F-bombs in the video game Fortnite thanks to a voice AI implementation gone wrong, reports GameSpot. Epic Games rapidly deployed a hotfix after players encountered the Sith Lord responding to their comments with profanity and strong language.

In Fortnite, the AI-voiced Vader appears as both a boss in battle royale mode and an interactive character. The official Star Wars website encourages players to "ask him all your pressing questions about the Force, the Galactic Empire… or you know, a good strat for the last Storm circle," adding that "the Sith Lord has opinions."

The F-bomb incident involved a Twitch streamer named Loserfruit, who triggered the forceful response when discussing food with the virtual Vader. The Dark Lord of the Sith responded by repeating her words "freaking" and "fucking" before adding, "Such vulgarity does not become you, Padme." The exchange spread virally across social media platforms on Friday.

© Disney / Starwars.com

OpenAI adds GPT-4.1 to ChatGPT amid complaints over confusing model lineup

On Wednesday, OpenAI announced that ChatGPT users now have access to GPT-4.1, an AI language model previously available only through the company's API since its launch one month ago. The update brings what OpenAI describes as improved coding and web development capabilities to paid ChatGPT subscribers, with wider enterprise rollout planned in the coming weeks.

GPT-4.1 and GPT-4.1 mini join an already complex ChatGPT model selection that includes GPT-4o, various specialized GPT-4o versions, o1-pro, o3-mini, and o3-mini-high. That brings the total to nine AI models technically available to ChatGPT Pro subscribers. Wharton professor Ethan Mollick recently lampooned the awkward situation on social media.

As of May 14, 2025, ChatGPT Pro users have access to eight main AI models, plus Deep Research. Credit: Benj Edwards

Deciding which AI model to use can be daunting for AI novices. Reddit users and OpenAI forum members alike commonly voice confusion about the available options. "I do not understand the reason behind having multiple models available for use," wrote one Reddit user in March. "Why would anyone use anything but the best one?" Another Redditor said they were "a bit lost" with the many ChatGPT models available after switching back from using Anthropic Claude.
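The confusion described above is partly a naming problem that application code often ends up papering over. A minimal, hypothetical sketch of one such workaround, mapping coarse task labels to default model IDs (both the mapping and the helper are illustrative assumptions, not OpenAI guidance):

```python
# Hypothetical helper that hides ChatGPT's model sprawl behind a
# task-based default. The task->model mapping is an assumption for
# illustration, not an official recommendation.

DEFAULTS = {
    "coding": "gpt-4.1",       # OpenAI pitches GPT-4.1 for coding tasks
    "quick": "gpt-4.1-mini",   # smaller, cheaper variant
    "general": "gpt-4o",       # all-purpose default
    "reasoning": "o3-mini",    # simulated-reasoning model
}

def pick_model(task: str) -> str:
    """Map a coarse task label to a sensible default model name."""
    return DEFAULTS.get(task, DEFAULTS["general"])
```

An app built this way asks users what they want to do rather than which of nine models they want, which is one plausible response to the confusion voiced on Reddit and the OpenAI forums.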

© Getty Images

GOP sneaks decade-long AI regulation ban into spending bill

On Sunday night, House Republicans added language to the Budget Reconciliation bill that would block all state and local governments from regulating AI for 10 years, 404 Media reports. The provision, introduced by Representative Brett Guthrie of Kentucky, states that "no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10 year period beginning on the date of the enactment of this Act."

The broad wording of the proposal would prevent states from enforcing both existing and proposed laws designed to protect citizens from AI systems. For example, California's recent law requiring health care providers to disclose when they use generative AI to communicate with patients would potentially become unenforceable. New York's 2021 law mandating bias audits for AI tools used in hiring decisions would also be affected, 404 Media notes. The measure would also halt legislation set to take effect in 2026 in California that requires AI developers to publicly document the data used to train their models.

The ban could also restrict how states allocate federal funding for AI programs. States currently control how they use federal dollars and can direct funding toward AI initiatives that may conflict with the administration's technology priorities. The Education Department's AI programs represent one example where states might pursue different approaches than those favored by the White House and its tech industry allies.

© Getty Images

New pope chose his name based on AI's threats to "human dignity"

Last Thursday, white smoke emerged from a chimney at the Sistine Chapel, signaling that cardinals had elected a new pope. That's a rare event in itself, but the election of Chicago-born Robert Prevost as Pope Leo XIV was unprecedented in many ways, including one of the main reasons he chose his papal name: artificial intelligence.

On Saturday, the new pope gave his first address to the College of Cardinals, explaining his name choice as a continuation of Pope Francis' concerns about technological transformation.

"Sensing myself called to continue in this same path, I chose to take the name Leo XIV," he said during the address. "There are different reasons for this, but mainly because Pope Leo XIII in his historic Encyclical Rerum Novarum addressed the social question in the context of the first great industrial revolution."

© Christopher Furlong via Getty Images

New Lego-building AI creates models that actually stand up in real life

On Thursday, researchers at Carnegie Mellon University unveiled LegoGPT, an AI model that creates physically stable Lego structures from text prompts. The new system not only designs Lego models that match text descriptions (prompts) but also ensures they can be built brick by brick in the real world, either by hand or with robotic assistance.

"To achieve this, we construct a large-scale, physically stable dataset of LEGO designs, along with their associated captions," the researchers wrote in their paper, which was posted on arXiv, "and train an autoregressive large language model to predict the next brick to add via next-token prediction."

This trained model generates Lego designs that match text prompts, like "a streamlined, elongated vessel" or "a classic-style car with a prominent front grille." The resulting designs are simple, using just a few brick types to create primitive shapes—but they stand up. As one Ars Technica staffer joked this morning upon seeing the research, "It builds Lego like it's 1974."
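The brick-by-brick, next-token approach the paper describes can be sketched at a very high level. Everything below (the `Brick` type, the toy `is_stable` check, the `model.next_brick` interface) is an illustrative assumption, not the actual LegoGPT code, which uses a trained language model and physics-based stability analysis:

```python
# Hypothetical sketch of autoregressive brick generation with a
# stability filter. All names here are illustrative stand-ins.
from dataclasses import dataclass

@dataclass(frozen=True)
class Brick:
    kind: str  # e.g. "2x4"
    x: int
    y: int
    z: int     # layer height, 0 = baseplate

def is_stable(structure: list) -> bool:
    """Toy check: every brick above layer 0 must rest directly on another
    brick (real systems use physics-based solvers instead)."""
    occupied = {(b.x, b.y, b.z) for b in structure}
    return all(b.z == 0 or (b.x, b.y, b.z - 1) in occupied for b in structure)

def generate(model, prompt: str, max_bricks: int = 50) -> list:
    """Sample bricks one at a time, rejecting any that break stability."""
    structure = []
    for _ in range(max_bricks):
        brick = model.next_brick(prompt, structure)  # next-token prediction
        if brick is None:  # model signals the design is complete
            break
        if is_stable(structure + [brick]):
            structure.append(brick)
    return structure
```

The key idea the sketch captures is the reject-and-continue loop: candidate bricks come from next-token prediction, but only physically plausible ones make it into the final design.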

© Pun et al.

AI use damages professional reputation, study suggests

Using AI can be a double-edged sword, according to new research from Duke University. While generative AI tools may boost productivity for some, they might also secretly damage your professional reputation.

On Thursday, the Proceedings of the National Academy of Sciences (PNAS) published a study showing that employees who use AI tools like ChatGPT, Claude, and Gemini at work face negative judgments about their competence and motivation from colleagues and managers.

"Our findings reveal a dilemma for people considering adopting AI tools: Although AI can enhance productivity, its use carries social costs," write researchers Jessica A. Reif, Richard P. Larrick, and Jack B. Soll of Duke's Fuqua School of Business.

© demaerre via Getty Images

Fidji Simo joins OpenAI as new CEO of Applications

On Wednesday, OpenAI announced that Instacart CEO Fidji Simo will join the maker of ChatGPT as "CEO of Applications" later this year, according to a company blog post. Simo, who has served on OpenAI's board since March 2024, will oversee business and operational teams in the newly created role while continuing to report directly to Sam Altman, who will remain the primary CEO of OpenAI.

According to Reuters, Simo spent a decade at Meta, including a stint as head of Facebook from 2019 to 2021. She also currently sits on the board of e-commerce platform Shopify.

The announcement came earlier than planned due to what Altman described as "a leak" that "accelerated our timeline." At OpenAI, Simo will manage what Altman called "traditional company functions" as the organization enters its "next phase of growth." The applications category at OpenAI includes products like ChatGPT, the popular AI assistant.

© Joel Saget via Getty Images

Trump admin to roll back Biden's AI chip restrictions

On Wednesday, the Trump administration announced plans to rescind and replace a Biden-era rule regulating the export of high-end AI accelerator chips worldwide, Bloomberg and Reuters reported.

A Department of Commerce spokeswoman told Reuters that officials found the previous framework "overly complex, overly bureaucratic, and would stymie American innovation" and pledged to create "a much simpler rule that unleashes American innovation and ensures American AI dominance."

The Biden administration issued the Framework for Artificial Intelligence Diffusion in January during its final week in office. The regulation represented the last salvo of a four-year effort to control global access to so-called "advanced" AI chips (such as GPUs made by Nvidia), with a focus on restricting China's ability to obtain tech that could enhance its military capabilities.

© SEAN GLADWELL via Getty Images

OpenAI scraps controversial plan to become for-profit after mounting pressure

On Monday, ChatGPT-maker OpenAI announced it will remain under the control of its founding nonprofit board, scrapping its controversial plan to split off its commercial operations as a for-profit company after mounting pressure from critics.

In an official OpenAI blog post announcing the latest restructuring decision, CEO Sam Altman wrote: "We made the decision for the nonprofit to stay in control after hearing from civic leaders and having discussions with the offices of the Attorneys General of California and Delaware."

The move represents a significant shift in OpenAI's proposed restructuring. While the most recent previous version of the company's plan (which we covered in December) would have established OpenAI as a Public Benefit Corporation with the nonprofit merely holding shares and having limited influence, the revised approach keeps the nonprofit firmly in control of operations.

© Benj Edwards / OpenAI

Claude's AI research mode now runs for up to 45 minutes before delivering reports

On Thursday, Anthropic announced significant upgrades to its AI assistant Claude, extending its research capabilities to run for up to 45 minutes before delivering comprehensive reports. The company also expanded its integration options, allowing Claude to connect with popular third-party services.

Much like Google's Deep Research (which debuted on December 11) and ChatGPT's deep research features (February 2), Anthropic announced its own "Research" feature on April 15. Each can autonomously browse the web and other online sources to compile research reports in document format, and open source clones of the technique have debuted as well.

Now, Anthropic is taking its Research feature a step further. The upgraded mode enables Claude to conduct "deeper" investigations across "hundreds of internal and external sources," Anthropic says. When users toggle the Research button, Claude breaks down complex requests into smaller components, examines each one, and compiles a report with citations linking to original sources.
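The decompose-then-research loop described above can be sketched with stand-in helpers (the function names and the simplistic question-splitting are illustrative assumptions, not Anthropic's implementation):

```python
# Illustrative sketch of the decompose -> research -> compile pattern.
# All helpers are stubs standing in for model calls and web search.

def decompose(request: str) -> list[str]:
    """Split a complex request into smaller sub-questions (naive stub)."""
    return [part.strip() for part in request.split(" and ")]

def research(sub_question: str, sources: dict) -> tuple[str, str]:
    """Look up a sub-question and return (finding, citation) (stub)."""
    finding = sources.get(sub_question, "no result")
    return finding, f"[source: {sub_question}]"

def compile_report(request: str, sources: dict) -> str:
    """Examine each component and compile findings with citations."""
    lines = [f"Report: {request}"]
    for sub in decompose(request):
        finding, citation = research(sub, sources)
        lines.append(f"- {sub}: {finding} {citation}")
    return "\n".join(lines)
```

In a real agentic system, `decompose` and `research` would each be model or tool calls that can run for many minutes, which is what the extended 45-minute window accommodates.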

© UCG via Getty Images

Time saved by AI offset by new work created, study suggests

A new study analyzing the Danish labor market in 2023 and 2024 suggests that generative AI models like ChatGPT have had almost no significant impact on overall wages or employment yet, despite rapid adoption in some workplaces. The findings, detailed in a working paper by economists from the University of Chicago and the University of Copenhagen, provide an early, large-scale empirical look at AI's transformative potential.

In "Large Language Models, Small Labor Market Effects," economists Anders Humlum and Emilie Vestergaard focused specifically on the impact of AI chatbots across 11 occupations often considered vulnerable to automation, including accountants, software developers, and customer support specialists. Their analysis covered data from 25,000 workers and 7,000 workplaces in Denmark.

Despite finding widespread and often employer-encouraged adoption of these tools, the study concluded that "AI chatbots have had no significant impact on earnings or recorded hours in any occupation" during the period studied. The confidence intervals in their statistical analysis ruled out average effects larger than 1 percent.
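As a toy illustration of that statistical claim (the numbers below are invented for the example, not taken from the paper), a 95 percent confidence interval built from an estimated mean effect and its standard error can lie entirely inside the ±1 percent band, which is what "ruling out" effects larger than 1 percent means:

```python
# Toy example of a confidence interval that rules out effects larger
# than 1%. The mean and standard error here are invented numbers.

def confidence_interval(mean: float, std_error: float, z: float = 1.96) -> tuple[float, float]:
    """95% CI for an estimated effect: mean +/- 1.96 standard errors."""
    return (mean - z * std_error, mean + z * std_error)

# A tiny estimated average earnings effect (0.1%) measured precisely
# (standard error 0.4%) yields a CI of roughly (-0.68%, +0.88%):
low, high = confidence_interval(mean=0.001, std_error=0.004)
assert -0.01 < low and high < 0.01  # the whole interval is within +/-1%
```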

© Malte Mueller via Getty Images

The end of an AI that shocked the world: OpenAI retires GPT-4

One of the most influential—and by some counts, notorious—AI models yet released will soon fade into history. OpenAI announced on April 10 that GPT-4 will be "fully replaced" by GPT-4o in ChatGPT at the end of April, bringing a public-facing end to the model that accelerated a global AI race when it launched in March 2023.

"Effective April 30, 2025, GPT-4 will be retired from ChatGPT and fully replaced by GPT-4o," OpenAI wrote in its April 10 changelog for ChatGPT. While ChatGPT users will no longer be able to chat with the older AI model, the company added that "GPT-4 will still be available in the API," providing some reassurance to developers who might still be using the older model for various tasks.

The retirement marks the end of an era that began on March 14, 2023, when GPT-4 demonstrated capabilities that shocked some observers: reportedly scoring at the 90th percentile on the Uniform Bar Exam, acing AP tests, and solving complex reasoning problems that stumped previous models. Its release created a wave of immense hype—and existential panic—about AI's ability to imitate human communication and composition.

© Jake Warga via Getty Images

ChatGPT goes shopping with new product-browsing feature

On Thursday, OpenAI announced the addition of shopping features to ChatGPT Search. The update allows users to search for products and purchase them through merchant websites after being redirected from the ChatGPT interface. Product placement is not sponsored, and the update affects all users, regardless of whether they've signed in to an account.

Adam Fry, ChatGPT search product lead at OpenAI, showed Ars Technica's sister site Wired how the new shopping system works during a demonstration. Users researching products like espresso machines or office chairs receive recommendations based on their stated preferences, stored memories, and product reviews from around the web.

According to Wired, the shopping experience in ChatGPT resembles Google Shopping. When users click on a product image, the interface displays multiple retailers like Amazon and Walmart on the right side of the screen, with buttons to complete purchases. OpenAI is currently experimenting with categories that include electronics, fashion, home goods, and beauty products.

© Westend61 via Getty Images

New study shows why simulated reasoning AI models don't yet live up to their billing

There's a curious contradiction at the heart of today's most capable AI models that purport to "reason": They can solve routine math problems with accuracy, yet when faced with formulating deeper mathematical proofs found in competition-level challenges, they often fail.

That's the finding of eye-opening preprint research into simulated reasoning (SR) models, initially listed in March and updated in April, that mostly fell under the news radar. The research serves as an instructive case study on the mathematical limitations of SR models, despite sometimes grandiose marketing claims from AI vendors.

What sets simulated reasoning models apart from traditional large language models (LLMs) is that they have been trained to output a step-by-step "thinking" process (often called "chain-of-thought") to solve problems. Note that "simulated" in this case doesn't mean that the models do not reason at all but rather that they do not necessarily reason using the same techniques as humans. That distinction is important because human reasoning itself is difficult to define.
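The difference between a plain prompt and a chain-of-thought-style prompt can be illustrated with a minimal sketch (the prompt wording below is an assumption for illustration, not any vendor's actual training format):

```python
# Minimal illustration of chain-of-thought style prompting. The exact
# instructions a vendor trains into an SR model are not public; this
# wording is an illustrative assumption.

def direct_prompt(question: str) -> str:
    """Plain LLM prompt: ask for the answer directly."""
    return f"Q: {question}\nA:"

def chain_of_thought_prompt(question: str) -> str:
    """SR-style prompt: ask the model to emit intermediate steps
    before committing to a final answer."""
    return (
        f"Q: {question}\n"
        "Think step by step, showing each intermediate step, then give "
        "the final answer on a line starting with 'Answer:'.\n"
        "A:"
    )
```

The study's finding is that this step-by-step output helps with routine, answer-checkable problems but does not by itself produce the rigorous multi-step arguments that competition-level proofs demand.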

© PhonlamaiPhoto via Getty Images

In the age of AI, we must protect human creativity as a natural resource

Ironically, our present AI age has shone a bright spotlight on the immense value of human creativity as breakthroughs in technology threaten to undermine it. As tech giants rush to build newer AI models, their web crawlers vacuum up creative content, and those same models spew floods of synthetic media that risk drowning out the human creative spark in an ocean of pablum.

Given this trajectory, AI-generated content may soon exceed the entire corpus of historical human creative works, making the preservation of the human creative ecosystem not just an ethical concern but an urgent imperative. The alternative is nothing less than a gradual homogenization of our cultural landscape, where machine learning flattens the richness of human expression into a mediocre statistical average.

A limited resource

By ingesting billions of creations, chatbots learn to talk, and image synthesizers learn to draw. Along the way, the AI companies behind them treat our shared culture like an inexhaustible resource to be strip-mined, with little thought for the consequences.

© Kenny McCartney via Getty Images

AI secretly helped write California bar exam, sparking uproar

On Monday, the State Bar of California revealed that it used AI to develop a portion of multiple-choice questions on its February 2025 bar exam, causing outrage among law school faculty and test takers. The admission comes after weeks of complaints about technical problems and irregularities during the exam administration, reports the Los Angeles Times.

The State Bar disclosed that its psychometrician (a person or organization skilled in administering psychological tests), ACS Ventures, created 23 of the 171 scored multiple-choice questions with AI assistance. Another 48 questions came from a first-year law student exam, while Kaplan Exam Services developed the remaining 100 questions.

The State Bar defended its practices, telling the LA Times that all questions underwent review by content validation panels and subject matter experts before the exam. "The ACS questions were developed with the assistance of AI and subsequently reviewed by content validation panels and a subject matter expert in advance of the exam," wrote State Bar Executive Director Leah Wilson in a press release.

© Getty Images

Annoyed ChatGPT users complain about bot's relentlessly positive tone

Ask ChatGPT anything lately—how to poach an egg, whether you should hug a cactus—and you may be greeted with a burst of purple praise: "Good question! You're very astute to ask that." To some extent, ChatGPT has been a sycophant for years, but since late March, a growing cohort of Redditors, X users, and Ars readers say that GPT-4o's relentless pep has crossed the line from friendly to unbearable.

"ChatGPT is suddenly the biggest suckup I've ever met," wrote software engineer Craig Weiss in a widely shared tweet on Friday. "It literally will validate everything I say."

"EXACTLY WHAT I'VE BEEN SAYING," replied a Reddit user who referenced Weiss' tweet, sparking yet another thread about ChatGPT being a sycophant. Recently, other Reddit users have described feeling "buttered up" and unable to take the "phony act" anymore, while some complain that ChatGPT "wants to pretend all questions are exciting and it's freaking annoying."

© alashi via Getty Images

Company apologizes after AI support agent invents policy that causes user uproar

On Monday, a developer using the popular AI-powered code editor Cursor noticed something strange: Switching between machines instantly logged them out, breaking a common workflow for programmers who use multiple devices. When the user contacted Cursor support, an agent named "Sam" told them it was expected behavior under a new policy. But no such policy existed, and Sam was a bot. The AI model made the policy up, sparking a wave of complaints and cancellation threats documented on Hacker News and Reddit.

This marks the latest instance of AI confabulations (also called "hallucinations") causing potential business damage. Confabulations are a type of "creative gap-filling" response where AI models invent plausible-sounding but false information. Instead of admitting uncertainty, AI models often prioritize creating plausible, confident responses, even when that means manufacturing information from scratch.

For companies deploying these systems in customer-facing roles without human oversight, the consequences can be immediate and costly: frustrated customers, damaged trust, and, in Cursor's case, potentially canceled subscriptions.
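One common mitigation implied here is to force a support bot to ground policy answers in documented policy text and escalate to a human otherwise, rather than letting the model improvise. A minimal sketch, with illustrative names and a naive keyword match standing in for real retrieval:

```python
# Hedged sketch of grounding a support bot's policy answers in a real
# policy document. The function name, the policies dict, and the naive
# keyword match are illustrative assumptions, not Cursor's system.

def answer_policy_question(question: str, policies: dict[str, str]) -> str:
    """Return a documented policy verbatim, or escalate; never invent one."""
    for topic, text in policies.items():
        if topic in question.lower():
            return f"Per our documented policy on {topic}: {text}"
    # No matching documented policy: admit it instead of confabulating.
    return ("I couldn't find a documented policy on that; "
            "escalating to a human agent.")
```

Had the Cursor bot been constrained this way, a question about multi-device logins with no matching policy would have produced an escalation instead of a fabricated "expected behavior" answer.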

© Juj Winn via Getty Images
