Commercials are still too loud, say “thousands” of recent FCC complaints

“Thousands” of complaints about the volume of TV commercials have flooded the Federal Communications Commission (FCC) in recent years. Despite the FCC requiring TV stations, cable operators, and satellite providers to ensure that commercials don’t bring a sudden spike in decibels, complaints about loud commercials “took a troubling jump” in 2024, the agency said on Thursday.

Under the Commercial Advertisement Loudness Mitigation (CALM) Act, broadcast, cable, and satellite TV providers are required to ensure that commercials “have the same average volume as the programs they accompany,” per the FCC. The FCC’s rules about the volume of commercials took effect in December 2012. The law also requires linear TV providers to use the Advanced Television Systems Committee’s (ATSC’s) recommended practices. The practices include guidance around production, post-production, metadata systems usage, and controlling dynamic range. If followed, the recommendations “result in consistency in loudness and avoidance of signal clipping,” per the ATSC [PDF]. The guidance reads:

If all programs and commercials were produced at a consistent average loudness, and if the loudness of the mix is preserved through the production, distribution, and delivery chain, listeners would not be subjected to annoying changes in loudness within and between programs.
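
The "same average volume" requirement above can be illustrated with a simple loudness comparison. Below is a minimal sketch using RMS level in dBFS; note that actual CALM Act compliance is measured with the ITU-R BS.1770 loudness algorithm (LKFS), not plain RMS, and the 2 dB tolerance here is an illustrative figure, not the regulatory one:

```python
import math

def rms_dbfs(samples):
    """Average level of a signal as RMS in dBFS (0 dBFS = full scale).

    Real CALM Act compliance is measured with the ITU-R BS.1770
    loudness algorithm (LKFS); plain RMS is a simplification.
    """
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

def commercial_matches_program(program, commercial, tolerance_db=2.0):
    """True if the commercial's average level is within tolerance_db
    of the program it accompanies."""
    return abs(rms_dbfs(program) - rms_dbfs(commercial)) <= tolerance_db

# Toy signals: a program at a moderate level, an ad mixed much "hotter,"
# and an ad mixed to match the program.
program    = [0.10 * math.sin(i / 10) for i in range(48_000)]
loud_ad    = [0.50 * math.sin(i / 10) for i in range(48_000)]
matched_ad = [0.11 * math.sin(i / 10) for i in range(48_000)]

print(commercial_matches_program(program, loud_ad))     # False: ~14 dB louder
print(commercial_matches_program(program, matched_ad))  # True: within 2 dB
```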

As spotted by PCMag, the FCC claimed this week that the CALM Act initially reduced complaints about commercials aggressively blaring from TVs. However, the agency is seeing an uptick in grievances. The FCC said it received "approximately" 750 complaints in 2022, 825 in 2023, and "at least" 1,700 in 2024 [PDF].

© Getty

Research roundup: 7 cool science stories from February

It's a regrettable reality that there is never time to cover all the interesting scientific stories we come across each month. In the past, we've featured year-end roundups of cool science stories we (almost) missed. This year, we're experimenting with a monthly collection. February's list includes dancing sea turtles, the secret to a perfectly boiled egg, the latest breakthrough in deciphering the Herculaneum scrolls, the discovery of an Egyptian pharaoh's tomb, and more.

Dancing sea turtles

There is growing evidence that certain migratory animal species (turtles, birds, some species of fish) are able to exploit the Earth's magnetic field for navigation, using it both as a compass to determine direction and as a kind of "map" to track their geographical position while migrating. A paper published in the journal Nature offers evidence of a possible mechanism for this unusual ability, at least in loggerhead sea turtles, who perform an energetic "dance" when they follow magnetic fields to a tasty snack.

Sea turtles make impressive 8,000-mile migrations across oceans and tend to return to the same feeding and nesting sites. The authors believe they achieve this through their ability to remember the magnetic signatures of those areas and store them in a mental map. To test that hypothesis, the scientists placed juvenile sea turtles into two large tanks of water outfitted with large coils to create magnetic signatures at specific locations within the tanks. One tank featured such a location with food; the other had a similar location without food.

© Vesuvius Challenge

Sergey Brin says AGI is within reach if Googlers work 60-hour weeks

Sergey Brin co-founded Google in the 1990s along with Larry Page, but both stepped away from day-to-day operations at Google in 2019. However, the AI boom tempted Brin to return to the office, and he thinks everyone should follow his example. In a new internal memo, Brin has advised employees to be in the office every weekday so Google can win the AI race.

Just returning to the office isn't enough for the Google co-founder. According to the memo seen by The New York Times, Brin says Googlers should try to work 60 hours per week to support the company's AI efforts. That works out to 12 hours per day, Monday through Friday, which Brin calls the "sweet spot of productivity." This is not a new opinion for Brin.

Brin, like many in Silicon Valley, is seemingly committed to the dogma that the current trajectory of generative AI will lead to the development of artificial general intelligence (AGI). Such a thinking machine would be head and shoulders above current AI models, which can only do a good impression of thinking. An AGI would understand concepts and think more like a human being, which some would argue makes it a conscious entity.

© KIMIHIRO HOSHINO/AFP/Getty Images

Firefox deletes promise to never sell personal data, asks users not to panic

Firefox maker Mozilla deleted a promise to never sell its users' personal data and is trying to assure worried users that its approach to privacy hasn't fundamentally changed. Until recently, a Firefox FAQ promised that the browser maker never has and never will sell its users' personal data. An archived version from January 30 says:

Does Firefox sell your personal data?

Nope. Never have, never will. And we protect you from many of the advertisers who do. Firefox products are designed to protect your privacy. That's a promise.

That promise is removed from the current version. There's also a notable change in a data privacy FAQ that used to say, "Mozilla doesn't sell data about you, and we don't buy data about you."

The data privacy FAQ now explains that Mozilla is no longer making blanket promises about not selling data because some legal jurisdictions define "sale" in a very broad way:

© Getty Images | Anadolu Agency

Salty game dev comments, easier mods are inside Command & Conquer’s source code

EA doesn't always treat its classic library with respect, as evidenced by its recent barely touched-up The Sims Legacy Collection. But the folks shepherding Command & Conquer, a vanguard series in the bygone genre of real-time strategy (RTS) games, are seemingly fueled by a different kind of Tiberium.

After releasing a reverential remaster of the first two games in 2020 with 4K upscaling and behind-the-scenes looks at their full-motion video scenes, EA is now opening up the series even more to its fans. Source code for the original C&C, Red Alert, Renegade (the first-person one), and Generals/Zero Hour has been dropped on GitHub. Along with Steam Workshop support for most of the series, that should enable a new generation of mods for the games. Given the extent of the code available, mods could include the kinds of modern updates, like higher and wider resolutions or beefed-up textures and refresh rates, that all good games deserve.

Building and working with this code will not be a plug-and-play affair. The namesake 1995 game and its hugely popular 1996 Red Alert sequel require some older dependencies, like DirectX 5 and the Greenleaf Communications Library (for a full build and tool access) and the Borland Turbo Assembler (TASM 4.0) to compile. Renegade and Generals, however, call for a whole lot more nostalgia: STLport 4.5.3, the SafeDisc API, the GameSpy SDK, the RAD Miles Sound System SDK, and at least eight more.

© EA

Europol arrests 25 users of online network accused of sharing AI CSAM

Europe is cracking down on AI-generated sex images of minors. So far, Europol has arrested 25 people in a large-scale ongoing probe called Operation Cumberland and confirmed that more arrests are expected in the coming weeks.

In a press release, Europol said that the 24 arrests occurred simultaneously on February 26 after Danish law enforcement arrested a main suspect in November. That suspect is accused of running "an online platform where he distributed the AI-generated material he produced" for others willing to pay to "watch children being abused," Europol alleged. The network was hidden from casual lurkers and required a "symbolic payment" to access, Europol said.

While fully AI-generated images may not depict real kids, at least one AI model (now scrubbed) has been trained on actual CSAM and images of real kids, so child safety experts fear outputs could potentially depict or resemble a known victim or actual child. And there's growing consensus globally that, in general, AI-generated CSAM harms kids by normalizing child sex abuse through the increased prevalence of CSAM online.

© JASPER JACOBS / Contributor | AFP

“It’s a lemon”—OpenAI’s largest AI model ever arrives to mixed reviews

The verdict is in: OpenAI's newest and most capable traditional AI model, GPT-4.5, is big, expensive, and slow, providing marginally better performance than GPT-4o at 30x the cost for input and 15x the cost for output. The new model seems to prove that longstanding rumors of diminishing returns in training unsupervised-learning LLMs were correct and that the so-called "scaling laws" cited by many for years have possibly met their natural end.
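
Those multiples translate into a steep per-request premium. A quick sketch of the arithmetic, assuming baseline GPT-4o prices of $2.50 per million input tokens and $10 per million output tokens (illustrative figures; actual pricing changes over time, so check OpenAI's current price list):

```python
# Assumed baseline prices per million tokens (illustrative only).
GPT4O_INPUT_PER_M = 2.50
GPT4O_OUTPUT_PER_M = 10.00

# The article's multiples: 30x input cost, 15x output cost.
GPT45_INPUT_PER_M = GPT4O_INPUT_PER_M * 30    # $75 per million input tokens
GPT45_OUTPUT_PER_M = GPT4O_OUTPUT_PER_M * 15  # $150 per million output tokens

def request_cost(input_tokens, output_tokens, in_rate, out_rate):
    """Cost in dollars for one request at the given per-million-token rates."""
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# A typical chat turn: 2,000 tokens in, 500 tokens out.
cost_4o = request_cost(2000, 500, GPT4O_INPUT_PER_M, GPT4O_OUTPUT_PER_M)
cost_45 = request_cost(2000, 500, GPT45_INPUT_PER_M, GPT45_OUTPUT_PER_M)
print(f"GPT-4o:  ${cost_4o:.4f}")   # $0.0100
print(f"GPT-4.5: ${cost_45:.4f}")   # $0.2250
```

Under these assumptions, a response-heavy workload pays closer to the 15x multiple, while a prompt-heavy one approaches 30x.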

An AI expert who requested anonymity told Ars Technica, "GPT-4.5 is a lemon!" when comparing its reported performance to its dramatically increased price, while frequent OpenAI critic Gary Marcus called the release a "nothing burger" in a blog post (though to be fair, Marcus also seems to think most of what OpenAI does is overrated).

Former OpenAI researcher Andrej Karpathy wrote on X that GPT-4.5 is better than GPT-4o but in ways that are subtle and difficult to express. "Everything is a little bit better and it's awesome," he wrote, "but also not exactly in ways that are trivial to point to."

© Rawpixel via Getty Images

Yes, it turns out you can make a Tesla Cybertruck even uglier

There's a saying about putting lipstick on a pig, but what if it's not lipstick? That's the question the universe set out to answer when it aligned in such a way that famed (or perhaps infamous) car customizer Mansory got itself a Tesla Cybertruck. The Mansory Elongation—a name that must have taken ages to think of—offers exterior, interior, and wheel and tire upgrades for the straight-edged stainless steel-wrapped pickup.

Among those who mod cars, there are the tuners, who focus on adding power and (one hopes) performance, and then there are the customizers, who concentrate more on aesthetics. Once upon a time, the entire luxury car industry worked like that—a client would buy a rolling chassis from Bugatti, Rolls-Royce, or Talbot and then have bodywork added by coachbuilders like Gurney Nutting, Touring, or Figoni et Falaschi.

At least the rear winglets don't entirely compromise access to the bed. Credit: Mansory

Modern homologation requirements have mostly put an end to that level of coachbuilding, but for the ultra-wealthy prepared to spend telephone numbers on cars, brands like Rolls-Royce will still occasionally oblige. More common now are those aftermarket shops that spiff up already luxurious cars, changing normal doors for gullwing versions, adding flaring fenders and bulging wheel arches, and plastering the interior in any hue of leather one might imagine.

© Mansory

Details on AMD’s $549 and $599 Radeon RX 9070 GPUs, which aim at Nvidia and 4K

AMD is releasing the first detailed specifications of its next-generation Radeon RX 9070 series GPUs and the RDNA4 graphics architecture today, almost two months after teasing them at CES.

The short version is that these are both upper-midrange graphics cards targeting resolutions of 1440p and 4K and meant to compete mainly with Nvidia's incoming and outgoing 4070- and 5070-series GeForce GPUs, including the RTX 4070, RTX 5070, RTX 4070 Ti and Ti Super, and the RTX 5070 Ti.

AMD says the RX 9070 will start at $549, the same price as Nvidia's RTX 5070. The slightly faster 9070 XT starts at $599, $150 less than the RTX 5070 Ti. The cards go on sale March 6, a day after Nvidia's RTX 5070.

© AMD

AMD’s FSR 4 upscaling is exclusive to 90-series Radeon GPUs, won’t work on other cards

AMD's new Radeon RX 90-series cards and the RDNA4 architecture make their official debut on March 5, and a new version of AMD's FidelityFX Super Resolution (FSR) upscaling technology is coming along with them.

FSR and Nvidia's Deep Learning Super Sampling (DLSS) upscalers have the same goal: to take a lower-resolution image rendered by your graphics card, bump up the resolution, and fill in the gaps between the natively rendered pixels to make an image that looks close to natively rendered without making the GPU do all that rendering work. These upscalers can make errors, and they won't always look quite as good as a native-resolution image. But they're both nice alternatives to living with a blurry, non-native-resolution picture on an LCD or OLED display.
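
The "fill in the gaps" step can be illustrated with the crudest possible upscaler, nearest-neighbor interpolation. This is only a toy analog: FSR and DLSS use temporal data, motion vectors, and (in their newest versions) neural networks, none of which appear here.

```python
def upscale_nearest(image, factor):
    """Nearest-neighbor upscale of a 2D grid of pixel values.

    Each source pixel is simply duplicated factor x factor times.
    Modern upscalers replace this duplication with far smarter
    reconstruction, which is why their output looks close to native.
    """
    out = []
    for row in image:
        scaled_row = [px for px in row for _ in range(factor)]
        out.extend([scaled_row] * factor)
    return out

# A 2x2 "frame" upscaled to 4x4.
frame = [[10, 20],
         [30, 40]]
print(upscale_nearest(frame, 2))
```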

FSR and DLSS are especially useful for older or cheaper 1080p- or 1440p-capable GPUs that are connected to a 4K monitor, where you'd otherwise have to decide between a sharp 4K image and a playable frame rate; they're also useful for hitting higher frame rates at lower resolutions, which can be handy for high-refresh-rate gaming monitors.

© AMD

US Antarctic Program disrupted by DOGE-induced chaos

Few agencies have been spared as Elon Musk’s so-called Department of Government Efficiency (DOGE) has ripped through the United States federal government. Even in Antarctica, scientists and workers are feeling the impacts—and are terrified for what’s to come.

The United States Antarctic Program (USAP) operates three permanent stations in Antarctica. These remote stations are difficult to get to and difficult to maintain; scattered across the continent, they are built on volcanic hills, polar plateaus, and icy peninsulas.

But to the US, the science has been worth it. At these stations, over a thousand people each year come to the continent to live and work. Scientists operate a number of major research projects, studying everything from climate change and rising sea levels to the cosmological makeup and origins of the universe itself. With funding cuts and layoffs looming, Antarctic scientists and experts don’t know if their research will be able to continue, how US stations will be sustained, or what all this might mean for the continent’s delicate geopolitics.

© Wolfgang Kaehler via Getty

On May 5, Microsoft’s Skype will shut down for good

After more than 21 years, Skype will soon be no more. Last night, some users (including Ars readers) poked around in the latest Skype preview update and noticed as-yet-unsurfaced text that read "Starting in May, Skype will no longer be available. Continue your calls and chats in Teams."

This morning, Microsoft confirmed to Ars that it's true. May 5, 2025, will mark the end of Skype's long run.

Alongside the verification that the end is nigh, Microsoft shared a bunch of details about how it plans to migrate Skype users over. Starting right away, some Skype users (those in Teams and Skype Insider) will be able to log in to Teams using their Skype credentials. More people will gain that ability over the next few days.

© Aurich Lawson

Did the snowball Earth give complex life a boost?

Around 700 million years ago, Earth was a frozen, white sphere, its rocky surface buried kilometers under ice. Despite the barren landscape, the evolution of complex life in the oceans was about to pick up steam. New research published this week in Geology suggests that the two realms were more connected than previously thought.

As massive glaciers scratched and scarred Earth’s rocky surface, they freed less-common minerals, which were later flushed into the seas as the ice melted into giant glacial rivers. These minerals in turn may have spurred nutrient cycling in the oceans, boosting the metabolism of microbial life.

“In retrospect, I’m surprised it took [researchers] so long to go and do a study like this,” says Galen Halverson, a stratigrapher at McGill University who was not involved in the work. “It fits with what we understand” about the glaciated Earth.

© MARK GARLICK/SCIENCE PHOTO LIBRARY

Elon Musk fans truly believe he can make Dogecoin the currency of Earth

At a time when many analysts are declaring memecoins dead, the most popular memecoin of all time, Dogecoin, not only perseveres but appears likely to become more mainstream than ever in 2025.

Most memecoins—cryptocurrencies inspired by Internet memes—remain controversial. Their prices can suddenly skyrocket before abruptly crashing, causing extreme gains and losses at a moment's notice, often triggered by a celebrity mention that tenuously amplifies short-term interest.

Donald Trump's memecoin is a recent example. Within two days of its launch, it peaked above $70 before falling to $17 shortly after, Reuters reported. Seeing that politically backed token take off apparently inspired Argentine President Javier Milei to endorse another memecoin called Libra, which seemed to set off a brief price surge before a devastating crash that caused most traders to endure losses. Only about 34 investors in total reportedly profited $124.6 million from Milei's endorsement, which a federal judge is now investigating as an alleged "rug pull" scheme, Reuters reported.

© Aurich Lawson | No Country For Old Men

Rocket Report: Rocket Lab’s news blitz; Starship mishap blamed on vibrations

Welcome to Edition 7.33 of the Rocket Report! Phew, what a week for Rocket Lab! The company released a bevy of announcements in conjunction with its quarterly earnings report Thursday. Rocket Lab is spending a lot of money to develop the medium-lift Neutron rocket, and as we'll discuss below, a rocket landing platform and a new satellite design. For now, the company is sticking by its public statements that the Neutron rocket will launch this year—the official line is it will debut in the second half of 2025—but this schedule assumes near-perfect execution on the program. "We’ve always been clear that we run aggressive schedules," said Peter Beck, Rocket Lab's founder and CEO. The official schedule doesn't quite allow me to invoke a strict interpretation of Berger's Law, which states that if a rocket's debut is predicted to happen in the fourth quarter of a year, and that quarter is six or more months away, the launch will be delayed. However, the spirit of the law seems valid here. This time last year, Rocket Lab targeted a first launch by the end of 2024, an aggressive target that has come and gone.

As always, we welcome reader submissions. If you don't want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets as well as a quick look ahead at the next three launches on the calendar.

Australian startup sets a launch date. The first attempt to send an Australian-made rocket into orbit is set to take place no sooner than March 15, the Australian Broadcasting Corporation reports. Gilmour Space Technologies' launch window announcement marks a major development for the company, which has been working toward a test launch for a decade. Gilmour previously hoped to launch its test rocket, Eris, in May 2024, but had to wait for the Australian government to issue a launch license and airspace approvals for the flight to go forward. Those are now in hand, clearing the last regulatory hurdle before liftoff.

© Rocket Lab

Astroscale aced the world’s first rendezvous with a piece of space junk

There's a scene in the film Interstellar where Matthew McConaughey's character flies his spaceplane up to meet a mothership spinning out of control. The protagonist rises to the challenge with a polished piece of piloting and successfully links up with his objective.

Real life, of course, isn't quite this dramatic. Slow down that spin to a tranquil tumble, and replace McConaughey's hand on the joystick with the autonomous wits of a computer, and you'll arrive at an approximation of what Japanese company Astroscale has accomplished within the last year.

Still, it's an impressive feat of engineering and orbital dynamics. Astroscale's ADRAS-J mission became the first spacecraft (at least in the unclassified world) to approach a piece of space junk in low-Earth orbit. This particular object, a derelict upper stage from a Japanese H-IIA rocket, has been in orbit since 2009. It's one of about 2,000 spent rocket bodies circling the Earth and one of more than 45,000 objects in orbit tracked by US Space Command.

© Astroscale

Copilot exposes private GitHub pages, some removed by Microsoft

Microsoft’s Copilot AI assistant is exposing the contents of more than 20,000 private GitHub repositories from companies including Google, Intel, Huawei, PayPal, IBM, Tencent and, ironically, Microsoft.

These repositories, belonging to more than 16,000 organizations, were originally posted to GitHub as public, but were later set to private, often after the developers responsible realized they contained authentication credentials allowing unauthorized access or other types of confidential data. Even months later, however, the private pages remain available in their entirety through Copilot.
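
The failure mode described here, content captured while a repository was public and still served after it goes private, is a classic stale-cache problem: nothing invalidates the cached copy when the source's visibility changes. A toy model of that mechanism, with hypothetical names (this is illustrative only, not how Copilot's or Bing's retrieval actually works):

```python
class ToyIndex:
    """Toy model of a crawler cache that never re-checks visibility."""

    def __init__(self):
        self.cache = {}

    def crawl(self, repos):
        # Only repos that are public at crawl time get cached.
        for name, repo in repos.items():
            if repo["public"]:
                self.cache[name] = repo["content"]

    def query(self, name):
        # Serves from cache without consulting current visibility --
        # this is the bug-shaped behavior described in the article.
        return self.cache.get(name)

# A repo is crawled while public...
repos = {"acme/secrets": {"public": True, "content": "API_KEY=..."}}
index = ToyIndex()
index.crawl(repos)

# ...then the owner flips it to private, but the cached copy still leaks.
repos["acme/secrets"]["public"] = False
print(index.query("acme/secrets"))
```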

AI security firm Lasso discovered the behavior in the second half of 2024. After finding in January that Copilot continued to store private repositories and make them available, Lasso set out to measure how big the problem really was.

© Microsoft

Microsoft brings an official Copilot app to macOS for the first time

It took a couple of years, but it happened: Microsoft released its Copilot AI assistant as an application for macOS. The app is available for free from the Mac App Store right now.

It was previously available briefly as a Mac app, sort of; for a short time, Microsoft's iPad Copilot app could run on the Mac, but that access was quickly disabled. Mac users have been able to use a web-based interface for a while.

Copilot initially launched on the web and in web browsers (Edge, obviously) before making its way onto iOS and Android last year. It has since been slotted into all sorts of first-party Microsoft software, too.

© Samuel Axon

New AI text diffusion models break speed barriers by pulling words from noise

On Thursday, Inception Labs released Mercury Coder, a new AI language model that uses diffusion techniques to generate text faster than conventional models. Unlike traditional models that create text word by word—such as the kind that powers ChatGPT—diffusion-based models like Mercury produce entire responses simultaneously, refining them from an initially masked state into coherent text.

Traditional large language models build text from left to right, one token at a time, using a technique called "autoregression," in which each word must wait for all previous words before appearing. Inspired by techniques from image-generation models like Stable Diffusion, DALL-E, and Midjourney, text diffusion language models like LLaDA (developed by researchers from Renmin University and Ant Group) and Mercury use a masking-based approach instead. These models begin with fully obscured content and gradually "denoise" the output, revealing all parts of the response at once.

While image diffusion models add continuous noise to pixel values, text diffusion models can't apply continuous noise to discrete tokens (chunks of text data). Instead, they replace tokens with special mask tokens as the text equivalent of noise. In LLaDA, the masking probability controls the noise level, with high masking representing high noise and low masking representing low noise. The diffusion process moves from high noise to low noise. Though LLaDA describes this using masking terminology and Mercury uses noise terminology, both apply a similar concept to text generation rooted in diffusion.
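
The masked-denoising loop can be sketched in a few lines. In this toy version the neural network's prediction is replaced by a lookup into a fixed target sequence, so only the unmasking schedule (fully masked state progressively revealed over steps) is illustrated; it is not LLaDA's or Mercury's actual algorithm.

```python
import random

MASK = "<mask>"

def toy_denoise(target, steps=3, seed=0):
    """Toy masked-diffusion decoding: begin with every position masked,
    then reveal a batch of positions per step until none remain.

    A real model predicts each token from the surrounding context;
    here the 'prediction' is just target[i], a stand-in.
    """
    rng = random.Random(seed)
    state = [MASK] * len(target)
    trajectory = [list(state)]
    masked = list(range(len(target)))
    per_step = max(1, len(target) // steps)
    while masked:
        reveal = rng.sample(masked, min(per_step, len(masked)))
        for i in reveal:
            state[i] = target[i]  # stand-in for the model's prediction
            masked.remove(i)
        trajectory.append(list(state))
    return trajectory

# Watch the response emerge from an all-masked state.
for step in toy_denoise("diffusion models denoise masked text".split()):
    print(" ".join(step))
```

The first printed line is entirely `<mask>` tokens and the last is the full sentence, mirroring the high-noise-to-low-noise progression described above.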

© akinbostanci via Getty Images

Google will finally fix awesome (but broken) song detection feature for Pixels

Google's Pixel phones include numerous thoughtful features you don't get on other phones, like Now Playing. This feature can identify background music from the lock screen, but unlike some similar song identifiers, it works even without an Internet connection. Sadly, it has been broken for months. There is some hope, though. Google has indicated that a fix is ready for deployment, and Pixel users can expect to see it in a future OS update.

First introduced in 2017, Now Playing uses a cache of thousands of audio fingerprints to identify songs you might encounter in your daily grind. Since it works offline, it's highly efficient and preserves your privacy. Now Playing isn't a life-changing addition to the mobile experience, but it's damn cool.
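
The shape of such an offline fingerprint lookup can be sketched with a toy example. Now Playing actually fingerprints spectral features with an on-device neural network; this sketch just hashes raw sample windows, and the song data and titles are invented. Only the overall approach (compact signatures, local database, no network) is illustrated.

```python
def fingerprint(samples, window=4):
    """Toy audio fingerprint: a set of hashes over short sample windows."""
    return {hash(tuple(samples[i:i + window]))
            for i in range(0, len(samples) - window + 1, window)}

def identify(clip, database, threshold=0.6):
    """Match a clip against a local database of song fingerprints."""
    clip_fp = fingerprint(clip)
    best, best_score = None, 0.0
    for title, song_fp in database.items():
        score = len(clip_fp & song_fp) / max(len(clip_fp), 1)
        if score > best_score:
            best, best_score = title, score
    return best if best_score >= threshold else None

# A tiny offline database of two "songs" (hypothetical titles).
song_a = [1, 5, 3, 7, 2, 8, 6, 4] * 4
song_b = [9, 2, 4, 1, 7, 3, 5, 8] * 4
db = {"Song A": fingerprint(song_a), "Song B": fingerprint(song_b)}

print(identify(song_a[:16], db))  # matches "Song A"
print(identify([0] * 16, db))     # unknown audio -> None
```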

That makes it all the stranger that Google appears to have broken Now Playing with the release of Android 15 (or possibly a Play Services update around the same time) and has left it that way for months. Before that update, Now Playing would regularly list songs on the lock screen and offer enhanced search for songs it couldn't ID offline. It was obvious to Pixel fans when Now Playing stopped listening last year, and despite a large volume of online complaints, Google has seemingly dragged its feet.

© Ryan Whitwam
