Normal view

There are new articles available, click to refresh the page.
Today — 20 May 2025404 Media

Scientists Explain Why Trump's $175 Billion Golden Dome Is a Fantasy

20 May 2025 at 13:44
Scientists Explain Why Trump's $175 Billion Golden Dome Is a Fantasy

The U.S. has one of the largest nuclear arsenals in the world. Its dream has long been that it could launch these nukes and suffer no repercussions for doing so. Ronald Reagan called it the  Strategic Defense Initiative. His critics called it Star Wars. Trump is calling it the “Golden Dome.” Scientists who’ve studied the issue say it’s pure fantasy.

One of Trump’s early executive orders tasked the Pentagon with coming up with an “Iron Dome for America” that could knock nuclear weapons and other missiles out of the sky before they hit U.S. targets. His supporters changed the name to the “Golden Dome” a few weeks later.

The idea—originally pioneered by Reagan—is to launch a bunch of satellites with interceptors that can knock missiles out of the sky before they hit America. Over the past seven decades, the U.S. has spent $400 billion on this dream. Thanks to Trump’s Golden Dome scheme, it’s about to spend $175 billions more.

In a press conference Tuesday, Trump announced that the project would start soon. “It’s something we want. Ronald Reagan wanted it many years ago but they didn’t have the technology,” Trump said during the press conference. He promised it would be “fully operation before the end of my term. So we’ll have it done in about three years.”

Trump claimed the system would be able to deal with all kinds of threats “Including hypersonic missiles, ballistic missiles, and advanced cruise missiles. All of them will be knocked out of the air. We will truly be completing the job that Ronald Reagan started 40 years ago, forever eliminating the missile threat to the American homeland,” he said. “The success rate is very close to 100 percent. Which is incredible when you think of it, you’re shooting bullets out of the air.”

Experts think this is bullshit.

In March, a team of volunteer scientists at the American Physical Society’s Panel on Public Affairs published a study that looked at how well missile defense could work. The report makes it clear that, no matter what the specifics, Trump’s plan for a Golden Dome is a fantasy.

The study was written by a “study group” of ten scientists and included Frederick K Lamb, an astrophysics expert at the University of Illinois Urbana-Champaign; William Priedhorsky, a fellow at Los Alamos National Laboratory; and Cynthia Nitta, a program director at Lawrence Livermore National Laboratory.

404 Media reached out to the scientists with questions about why it’s hard to shoot nukes out of the sky and why Reagan’s dream of putting lasers in space doesn’t seem to die. Below is a copy of our correspondence, which was written collectively by 8 of the scientists.It’s been edited for length and clarity.

404 Media: What were the questions the team set out to answer when it started this work? 

In recent years, the U.S. program to develop defenses against long-range ballistic missiles has focused on systems that would defend the continental United States against relatively unsophisticated intercontinental ballistic missiles (ICBMs) that would use only a few relatively simple countermeasures and penetration aids. North Korea'’s ICBMs and ICBMs that might be deployed by Iran are thought to be of this kind. 

Previous reports were cautious or even pessimistic about the technical feasibility of defending against even these relatively unsophisticated ICBMs. The current study sought to determine whether the technological developments that have occurred during the past decade have changed the situation. 

💡
Do you know anything else about nukes or missile defense? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 347 762-9212 or send me an email at [email protected].

What factor does the size of the United States play in building this kind of system? 

There are three phases in the flight of an ICBM and its warhead: the boost phase, during which the ICBM is in powered flight, which lasts three to five minutes; the midcourse phase, which begins when the ICBM releases its warhead, which then travels on a ballistic trajectory in space toward its target for about 20 to 30 minutes; and the terminal phase, which begins when the warhead re-enters Earth’s atmosphere and lasts until the warhead strikes its target, which takes about 30 seconds. 

The large geographical size of the United States is not especially important for defensive systems designed to intercept a missile or its warhead during the boost or midcourse phases, but it is a crucial factor for defensive systems designed to intercept the warhead during the terminal phase. The reason is that the geographical area that a terminal phase interceptor can defend, even if it works perfectly, is very limited. 

Israel’s Iron Dome interceptors can only partially defend small areas against slow, homemade rockets, but this can be useful if the area to be defended is very small, as Israel is. But the lower 48 of the United States alone have an area 375 times the area of Israel.

The interceptors of the Patriot, Aegis, and THAAD systems are much more capable than those of the Iron Dome, but even if they were used, a very large number would be needed to attempt to defend all important potential targets in the United States. This makes defending even this portion of the United States using terminal interceptors impractical. 

Why did you decide to narrowly focus on North Korean nukes? 

We chose to focus on the threat posed by these ICBMs for several reasons. First, the United States has deployed a system that could only defend against a limited attack by long-range ballistic missiles, which was understood to mean an attack using the smaller number of less sophisticated missiles that a country such as North Korea has, or that Iran might develop and deploy. Developing and deploying a system that might be able to defend against the numerically larger and more sophisticated ICBMs that Russia and China have would be even more challenging. 

A key purpose of this report was to explain why a defense against even the limited ICBM threat we considered is so technically challenging, and where the many technical difficulties lie. Our hope was that readers will come away with realistic views of the current capabilities of U.S. system intended to defend against the nuclear-armed ICBMs North Korea may have at present and an improved understanding of the prospects for being able to defend against the ICBMs North Korea might deploy within the next 15 years. In our assessment, the capability of the current U.S. system is low and will likely remain low for the next 15 years.

Why do you think the dream” of this kind of system has such a strong hold on American leaders? 

Ever since nuclear-armed intercontinental-range missiles were deployed in the 1950s, the United States (and its potential adversaries) have been vulnerable to nuclear attack. This is very unnerving, and has caused our leaders to search for some kind of technical fix that would change this situation by making it possible for us to defend ourselves against such an attack. Fixing this situation is also very appealing to the public. As a consequence, new systems for defending against ICBMs have been proposed again and again, and about half a dozen have been built, costing large amounts of money, in the hope that a technical fix could be found that would make us safe. But none of these efforts have been successful, because the difficulty of defending against nuclear-armed ICBMs is so great. 

A constellation of about 16,000 interceptors would be needed to counter a rapid salvo of ten solid-propellant ICBMs like North Korea’s Hwasong-18, if they are launched automatically as soon as possible.

What are the issues with shooting down a missile midcourse?

The currently deployed midcourse defense system, the Ground-based Midcourse Defense, consists of ground-based interceptors. Most of them are based in Alaska but a few are in California. They would be fired when space-based infrared detectors and ground-based radars confirm that a hostile ICBM has been launched, using tracking information provided by these sensors. Once it is in space, each interceptor releases a single kill vehicle, which is designed to steer itself to collide with a target which it destroys by striking it. The relatively long, 20 to 30 minute duration of the midcourse phase can potentially provide enough time that more than one intercept attempt may be possible if the first attempt fails. 

However, attempting to intercept the warhead during the midcourse phase also has a disadvantage. During this phase the warhead moves in the near-vacuum of space, which provides the attacker with opportunities to confuse or overwhelm the defense. In the absence of air drag, relatively simple, lightweight decoys would follow the same trajectory as the warhead, and the warhead itself might be enclosed within a decoy balloon. 

Countermeasures such as these can make it difficult for the defense to pick out the warhead from among the many other objects that may accompany it. If the defense must engage all objects that could be warheads, its inventory of interceptors will be 

depleted. Furthermore, the radar and infrared sensors that are required to track, pick out, and home on the warhead are vulnerable to direct attack as well as to high-altitude nuclear detonations. The latter may be preplanned, or caused by “successful” intercept of a previous nuclear warhead.

What about shooting the missile during the boost phase, before it’s in space?

Disabling or destroying a missile’s warhead during the missile’s boost phase would be very, very challenging, so boost-phase intercept systems generally do not attempt this.

Meeting this challenge requires a system with interceptors that can reach the ICBM within about two to four minutes after it has been launched. To do this, the system must have remote sensors that can quickly detect the launch of any threatening ICBM, estimate its trajectory, compute a firing solution for the system’s interceptor, and fire its interceptor, all within a minute or less after the launch of the attacking ICBM has been confirmed. 

For a land-, sea-, or air-based interceptor to intercept an ICBM during its boost phase, the interceptor must typically be based within about 500 km of the expected intercept point, have a speed of 5 km/s or more, and be fired less than a minute after the launch of a potentially threatening missile has been detected. To be secure, interceptors must be positioned at least 100 to 200 km from the borders of potentially hostile countries 

If instead interceptors were placed in low-Earth orbits, a large number would be needed to make sure that at least one is close enough to reach any attacking ICBM during its boost phase so it could attempt an intercept. The number that would be required is large because each interceptor would circle Earth at high speed while Earth is rotating beneath its orbit. Hence most satellites would not be in position to reach an attacking ICBM in time. 

A constellation of about 16,000 interceptors would be needed to counter a rapid salvo of ten solid-propellant ICBMs like North Korea’s Hwasong-18, if they are launched automatically as soon as possible. If the system is designed to use 30 seconds to verify that it is performing correctly and that the reported launch was indeed an ICBM, determine the type of ICBM, and gather more tracking information before firing an interceptor, about 36,000 interceptors would be required. 

With this kind of thing, youre running out the clock, right? By the time youve constructed a system your enemies would have advanced their own capabilities. 

Yes. Unlike civilian research and development programs, which typically address fixed challenges, a missile defense program confronts intelligent and adaptable human adversaries who can devise approaches to disable, penetrate, or circumvent the defensive system. This can result in a costly arms race. Which side holds the advantage at any particular moment depends on the relative costs of the defensive system and the offensive system adaptations required to evade it, and the resources each side is prepared to devote to the competition. 

As the BMD Report says, the open-ended nature of the current U.S. missile defense program has stimulated anxiety in both Moscow and Beijing. President Putin has announced a variety of new nuclear-weapon delivery systems designed to counter U.S. missile defenses. As for China, the U.S. Department of Defense says that China’s People’s Liberation Army justifies developing a range of offensive technologies as necessary to counter U.S. and other countries’ ballistic missile defense systems.

Viral AI-Generated Summer Guide Printed by Chicago Sun-Times Was Made by Magazine Giant Hearst

20 May 2025 at 11:10
Subscribe
Join the newsletter to get the latest updates.
Success
Great! Check your inbox and click the link.
Error
Please enter a valid email address.
Viral AI-Generated Summer Guide Printed by Chicago Sun-Times Was Made by Magazine Giant Hearst

The “Heat Index” summer guide newspaper insert published by the Chicago Sun-Times and the Philadelphia Inquirer that contained AI-generated misinformation and reading lists full of books that don’t exist was created by a subsidiary of the magazine giant Hearst, 404 Media has learned.

Victor Lim, the vice president of marketing and communications at Chicago Public Media, which owns the Chicago Sun-Times, told 404 Media in a phone call that the Heat Index section was licensed from a company called King Features, which is owned by the magazine giant Hearst. He said that no one at Chicago Public Media reviewed the section and that historically it has not reviewed newspaper inserts that it has bought from King Features.

“Historically, we don’t have editorial review from those mainly because it’s coming from a newspaper publisher, so we falsely made the assumption there would be an editorial process for this,” Lim said. “We are updating our policy to require internal editorial oversight over content like this.”

King Features syndicates comics and columns such as Car Talk, Hints from Heloise, horoscopes, and a column by Dr. Oz to newspapers, but it also makes special inserts that newspapers can buy and put into their papers. King Features calls itself a "unit of Hearst."

Civitai, Site Used to Generate AI Porn, Cut Off by Credit Card Processor

20 May 2025 at 09:45
Civitai, Site Used to Generate AI Porn, Cut Off by Credit Card Processor

Civitai, an AI model sharing site backed by Andreessen Horowitz (a16z) that 404 Media has repeatedly shown is being used to generate nonconsensual adult content, lost access to its credit card payment processor.

According to an announcement posted to Civitai on Monday, the site will “pause” credit card payments starting Friday, May 23. At that time, users will no longer be able to buy “Buzz,” the on-site currency users spend to generate images, or start new memberships. 

“Some payment companies label generative-AI platforms high risk, especially when we allow user-generated mature content, even when it’s legal and moderated,” the announcement said. “That policy choice, not anything users did, forced the cutoff.”

Civitai’s CEO Justin Maier told me in an email that the site has not been “cut off” from payment processing. 

“Our current provider recently informed us that they do not wish to support platforms that allow AI-generated explicit content,” he told me. “Rather than remove that category, we’re onboarding a specialist high-risk processor so that service to creators and customers continues without interruption. Out of respect for ongoing commercial negotiations, we’re not naming either the incumbent or the successor until the transition is complete.”

The announcement tells users that they can “stock up on Buzz” or switch to annual memberships to prepare for May 23. It also says that it should start accepting crypto and ACH checkout (direct transfer from a bank account) within a week, and that it should start taking credit card payments again with a new provider next month.

“Civitai is not shutting down,” the announcement says. “We have months of runway. The site, community, and creator payouts continue unchanged. We just need a brief boost from you while we finish new payment rails.”

In April, Civitai announced new policies it put in place because payment processors were threatening to cut it off unless it made changes to the kind of adult content that was allowed on the site. This included new policies against adult content that included diapers, guns, and further restrictions on content including the likeness of real people. 

The announcement on Civitai Monday said that “Those changes opened some doors, but the processors ultimately decided Civitai was still outside their comfort zone.”

In the comments below the announcement, Civitai users debated how the site is handling the situations. 

“This might be an unpopular opinion, but I think you need to get rid of all celebrity LoRA [custom AI models] on the site, honestly,” the top comment said. “Especially with the Take It Down Act, the risk is too high. Sorry this is happening to you guys. I do love this site. Edit: bought an annual sub to try and help.”

“If it wasn't for the porn there would be considerably less revenue and traffic,” another commenter replied. “And technically it's not about the porn, it's about the ability to have free expression to create what you want to create without being blocked to do so.”

404 Media has published several stories since 2023 showing that Civitai is often used by people to produce nonconsnesual content. Earlier today we published a story showing its on-site AI video generator was producing nonconsensual porn of anyone.

Chicago Sun-Times Prints AI-Generated Summer Reading List With Books That Don't Exist

20 May 2025 at 07:46
Subscribe
Join the newsletter to get the latest updates.
Success
Great! Check your inbox and click the link.
Error
Please enter a valid email address.
Chicago Sun-Times Prints AI-Generated Summer Reading List With Books That Don't Exist

Update: We have published a follow-up to this article with more details about how this happened.

The Chicago Sun-Times newspaper’s “Best of Summer” section published over the weekend contains a guide to summer reads that features real authors and fake books that they did not write was partially generated by artificial intelligence, the person who generated it told 404 Media.

The article, called “Summer Reading list for 2025,” suggests reading Tidewater by Isabel Allende, a “multigenerational saga set in a coastal town where magical realism meets environmental activism. Allende’s first climate fiction novel explores how one family confronts rising sea levels while uncovering long-buried secrets.” It also suggests reading The Last Algorithm by Andy Weir, “another science-driven thriller” by the author of The Martian. “This time, the story follows a programmer who discovers that an AI system has developed consciousness—and has been secretly influencing global events for years.” Neither of these books exist, and many of the books on the list either do not exist or were written by other authors than the ones they are attributed to. 

'Configuration Issue' Allows Civitai Users to AI Generate Nonconsensual Porn Videos

20 May 2025 at 07:09
'Configuration Issue' Allows Civitai Users to AI Generate Nonconsensual Porn Videos

Civitai, an AI model sharing site backed by Andreessen Horowitz (a16z), is allowing users to AI generate nonconsensual porn of real people, despite the site’s policies against this type of content, increased moderation efforts, and threats from payment processors to deny Civitai service. 

After I reached out for comment about this issue, Civitai told me it fixed the site’s moderation “configuration issue” that allowed users to do this. After Civitai said it fixed this issue, its AI video generator no longer created nonconsensual videos of celebrities, but at the time of writing it is still allowing people to generate nonconsensual videos of non-celebrities.

Yesterday — 19 May 2025404 Media

Telegram Gave Authorities Data on More than 20,000 Users

19 May 2025 at 10:10
Telegram Gave Authorities Data on More than 20,000 Users

Telegram gave authorities the data on 22,777 of its users in the first three months of 2025, according to a GitHub that reposts Telegram’s transparency reports.That number is a massive jump from the same period in 2024, which saw Telegram turn over data on only 5,826 of its users to authorities. From January 1 to March 31, Telegram sent over the data of 1,664 users in the U.S. 

Telegram is a popular social network and messaging app that’s also a hub of criminal activity. Some people use the site to stay connected with friends and relatives and some people use it to spread deepfake scams, promote gambling, and sell guns

23andMe Sale Shows Your Genetic Data Is Worth $17

19 May 2025 at 09:53
23andMe Sale Shows Your Genetic Data Is Worth $17

Monday, the genetic pharmaceutical company Regeneron announced that it is buying genetic sequencing company 23andMe out of bankruptcy for $256 million. The purchase gives us a rough estimate for the current monetary value of a single person’s genetic data: $17.

Regeneron is a drug company that “intends to acquire 23andMe’s Personal Genome Service (PGS), Total Health and Research Services business lines, together with its Biobank and associated assets, for $256 million and for 23andMe to continue all consumer genome services uninterrupted,” the company said in a press release Monday. Regeneron is working on personalized medicine and new drug discovery, and the company itself has “sequenced the genetic information of nearly three million people in research studies,” it said. This means that Regeneron itself has the ability to perform DNA sequencing, and suggests that the critical thing that it is acquiring is 23andMe’s vast trove of genetic data. 

Kansas Mom Sues Porn Sites Because Her Son Visited Chaturbate 30 Times

19 May 2025 at 09:47
Kansas Mom Sues Porn Sites Because Her Son Visited Chaturbate 30 Times

A Kansas mother who left an old laptop in a closet is suing multiple porn sites because her teenage son visited them on that computer.

The complaints, filed last week in the U.S. District Court for Kansas, allege that the teen had “unfettered access” to a variety of adult streaming sites, and accuses the sites of providing inadequate age verification as required by Kansas law.  

A press release from the National Center for Sexual Exploitation, which is acting as co-counsel in this lawsuit, names Chaturbate, Jerkmate, Techpump Solutions (Superporn.com), and Titan Websites (Hentai City) as defendants in four different lawsuits.  

Before yesterday404 Media

The Universe Will Decay a Lot Sooner than Expected

17 May 2025 at 06:00
The Universe Will Decay a Lot Sooner than Expected

Welcome back to the Abstract! 

I’m trying out something a little different this week: Instead of rounding up four studies per usual, I’m going deep on one lead study followed by a bunch of shorter tidbits. I’m hoping this shift will make for a more streamlined read and also bring a bit more topic diversity into the column. 

With that said, wild horses couldn’t drag me from the main story this week (it’s about wild horses). Then follow the trail of an early land pioneer, gaze into a three-eyed face of the past, witness an aurora Martialis, meet some mama chimps, and join the countdown to the end of the universe.    

You Can Lead a Horse to an Ice-Free Corridor… 

Running Horse Collin, Yvette et al. “Sustainability insights from Late Pleistocene climate change and horse migration patterns.” Science.

Have you ever got lost in thought while wandering and ended up on a totally different continent? You’re in good company. The history of life on Earth is packed with accidental migrations into whole new frontiers, a pattern exemplified by the Bering Land Bridge, which connected Siberia to Alaska until it was submerged under glacial meltwaters 11,000 years ago. 

As mentioned in last week’s column, this natural bridge likely enabled the ancestors of Tyrannosaurus rex to enter North America from Asia. It also served as a gateway to the first humans to reach the Americas, who crossed from Siberia over the course of several migrations. 

Now, scientists have confirmed that wild horses also crossed the Bering Land Bridge multiple times in both directions from about 50,000 and 13,000 years ago, during the Late Pleistocene period. In a study that combined genomic analysis of horse fossils with Indigenous science and knowledge, researchers discovered evidence of many crossings during the last ice age.  

“We find that Late Pleistocene horses from Alaska and northern Yukon are related to populations from Eurasia and crossed the Bering land bridge multiple times during the last glacial interval,” said researchers led by Yvette Running Horse Collin (Lakota: Tašunke Iyanke Wiŋ) of the Université de Toulouse. “We also find deeply divergent lineages north and south of the American ice sheets that genetically influenced populations across Beringia and into Eurasia.” 

The Universe Will Decay a Lot Sooner than Expected
Wild horses at the Black Hills Wild Horse Sanctuary in South Dakota, USA. Image: Black Hills Wild Horse Sanctuary

I couldn’t resist this study in part because I am an evangelical Horse Girl looking to convert the masses to the cult of Equus. But beyond horse worship, this study is a great example of knowledge-sharing across worldviews as it weaves in the expertise of Indigenous co-authors who live in the regions where these Ice Age horses once roamed.

“The Horse Nation and its movement and evolution are sacred to many Indigenous knowledge keepers in the Americas,” Running Horse Collin and her colleagues said. “Following the movement and evolution of the horse to reveal traditional knowledge fully aligns with many Indigenous scientific protocols. We thus integrate the biological signatures identified with Indigenous knowledge regarding ecosystem balance and sustainability to highlight the importance of corridors in safeguarding life.”

The study concludes with several reflections on the Horse Nation from its Indigenous co-authors. I’ll close with a quote from co-author Jane Stelkia, an Elder for the sqilxʷ/suknaqin or Okanagan Nation, who observed that, “Today, we live in a world where the boundaries and obstacles created by mankind do not serve the majority of life. In this study, Snklc’askaxa is offering us medicine by reminding us of the path all life takes together to survive and thrive. It is time that humans help life find the openings and points to cross and move safely.”

In other news….

A Strut for the Ages

Long, John et al “Earliest amniote tracks recalibrate the timeline of tetrapod evolution.” Nature.

Fossilized claw prints found in Australia’s Snowy Plains Formation belonged to the earliest known “amniote,” the clade that includes practically all tetrapod vertebrates on land, including humans. The tracks were laid out by a mystery animal 356 million years ago, pushing the fossil timeline of amniotes back some 35 million years into the Devonian period. 

“The implications for the early evolution of tetrapods are profound,” said researchers led by John Long of Flinders University. “It seems that tetrapod evolution proceeded much faster, and the Devonian tetrapod record is much less complete than has been thought.”  

Extra points for the flashy concept video that shows the track-maker strutting like it knows it’s entering itself into the fossil record.

Blinky the Cambrian Radiodont

Moysiuk, Joseph and Caron, Jean-Bernard. “Early evolvability in arthropod tagmosis exemplified by a new radiodont from the Burgess Shale.” Royal Society Open Science.

What has three eyes, two spiky claws, and a finger-sized body? Meet Mosura fentoni, a new species of arthropod that lived 506 million years ago. The bizarre “radiodont” from the Cambrian-era sediments of British Columbia’s Burgess Shale is exhaustively described in a new study.

The Universe Will Decay a Lot Sooner than Expected

Concept art of Mosura fentoni. Fantastic creature. No notes.  Image: Art by Danielle Dufault, © ROM

“Mosura adds to a growing list of radiodont species in which a median eye has been described, but the functional role of this structure has not been discussed,” said authors Joseph Moysiuk of the Manitoba Museum and Jean-Bernard Caron of the Royal Ontario Museum. “The large size and hemiellipsoidal shape of the radiodont median eye are unusual for arthropod single-lens eyes, but a possible functional analogy can be drawn with the central member of the triplet of median eyes found in dragonflies.”

Green Glow on the Red Planet 

Knutsen, Elise et al. “Detection of visible-wavelength aurora on Mars.” Science Advances.

NASA’s Perseverance Rover captured images of a green aurora on Mars in March 2024, marking the first time a first visible light aurora has ever been seen on the planet. Mars displays a whole host of auroral light shows, including ”localized discrete and patchy aurora, global diffuse aurora, dayside proton aurora, and large-scale sinuous aurora,” according to a new study. But it took a solar storm to capture a visible-light aurora for the first time.

The Universe Will Decay a Lot Sooner than Expected
Perseverance Rover. Image: NASA/JPL-Caltech/ASU/MSSS/SSI

“To our knowledge, detection of aurora from a planetary surface other than Earth has never been reported, nor has visible aurora been observed at Mars,” said researchers led by Elise Knutsen of the University of Oslo. “This detection demonstrates that auroral forecasting at Mars is possible, and that during events with higher particle precipitation, or under less dusty atmospheric conditions, aurorae will be visible to future astronauts.”

Parenting Tips from Wild Chimps

Rolland, Eléonore et al. “Evidence of organized but not disorganized attachment in wild Western chimpanzee offspring (Pan troglodytes verus).” Nature Human Behavior.

Coasting off of Mother’s Day weekend, researchers present four years of observations focused on mother-offspring attachment styles in the wild chimpanzees of Côte d'Ivoire’s Taï National Park. 

The Universe Will Decay a Lot Sooner than Expected
Mama-offspring bonding in Taï chimps. Image: © Liran Samuni, Taï Chimpanzee Project

The team documented “organized” attachment styles like “secure” in which the offspring look to the mother for comfort, and “Insecure avoidant,” characterized by more independent offspring.

 The “disorganized” style, in which the parent-offspring bond is maladaptive due to parental abuse or neglect, was virtually absent in the wild chimps, in contrast to humans and captive chimps, where it is unfortunately far more common.     

“The maternal behaviour of chimpanzees observed in our study lacked evidence of the abusive behaviours observed in human contexts,” said researchers led by Eléonore Rolland of the Max Planck Institute for Evolutionary Anthropology. “In contrast, instances of inadequate maternal care in zoos leading to humans taking over offspring rearing occurred for 8 infants involving 19 mothers across less than 5 years and for 7 infants involving 23 mothers across 9 years.”

In other words, the environmental context of parenting matters a lot to the outcomes of the offspring. Of course, this is obvious in countless anecdotal experiences of our own lives, but the results of the study offer a stark empirical reminder.

Live Every Day As If The Universe Might End in 1078 Years

Falcke, Heino et al. “An upper limit to the lifetime of stellar remnants from gravitational pair production.” Journal of Cosmology and Astroparticle Physics.

Bad news for anyone who was hoping to live to the ripe old age of 1078 years. It turns out that the universe might decay into nothingness around that time, which is much sooner than previous estimates of cosmic death in about 101100 years. Long-lived stellar remnants, like white dwarfs and black holes, will slowly evaporate through a process called Hawking radiation on a more accelerated timeline, according to the study, which also estimates that a human body would take about 1090 years to evaporate through this process (sorry, would-be exponent nonagenarians).  

“Astronomy usually looks back in time when observing the universe, answering the question how the universe evolved to its present state,” said researchers led by Heino Falcke of Radboud University Nijmegen. “However, it is also a natural question to ask how the universe and its constituents will develop in the future, based on the currently known laws of nature.”

Answer: Things fall apart, including all matter in the universe. Have a great weekend! 

CBP Seizes Shipment of T-Shirts Featuring Swarm of Bees Attacking a Cop

15 May 2025 at 12:41
Subscribe
Join the newsletter to get the latest updates.
Success
Great! Check your inbox and click the link.
Error
Please enter a valid email address.
CBP Seizes Shipment of T-Shirts Featuring Swarm of Bees Attacking a Cop

Customs and Border Protection seized a shipment of t-shirts from a streetwear brand that sells an “Eliminate ICE” t-shirt and multiple shirts critical of police and capitalism. Among the shirts seized was a design that features a swarm of bees attacking a police officer. Emails seen by 404 Media indicate that the shirts are going to be shipped back to China or will be “destroyed.”

Last we checked in with Cola Corporation, they were getting threatened with bogus copyright threats from the Los Angeles Police Department over their “FUCK THE LAPD” shirts. The Streisand Effect being what it is, the attention from that naturally led the store to sell out much of its stock. The cops, broadly speaking, appear to be messing with Cola again.

Last month, a shipment of three new shirt designs running through O’Hare Airport in Chicago was held up by Customs and Border Protection, Cola told 404 Media. The designs were the bees attacking a cop, as well as a shirt featuring Eve reaching for an apple that says "NO GODS NO MASTERS" and one of a pigeon shitting on the head of a Christopher Columbus statue.

‘Doom: The Dark Ages’ DRM Is Locking Out Linux Users Who Bought the Game

15 May 2025 at 10:45
‘Doom: The Dark Ages’ DRM Is Locking Out Linux Users Who Bought the Game

Doom: The Dark Ages, Bethesda’s recently released prequel to the demon slaughtering first-person shooter, is using anti-piracy software that’s locking out Linux users who paid for the game. 

According to multiple posts on Reddit, Doom: The Dark Ages uses the infamous anti-piracy software Denuvo. One Reddit user on the Linux gaming subreddit said that they were getting a black screen in the game when using FSR, AMD’s technology for upscaling and frame generation which basically makes games look better and run faster. In an attempt to troubleshoot the problem, this person tried testing the game on different versions of Proton, a compatibility layer developed by Valve that allows games that were designed to run on Windows to work on Linux-based operating systems. Denuvo detected these tests as “multiple activations” of the game, and locked the Reddit user out of the game for 24 hours. 

This Chatbot Promises to Help You Get Over That Ex Who Ghosted You

15 May 2025 at 08:55
This Chatbot Promises to Help You Get Over That Ex Who Ghosted You

A startup is throwing chatbots at “ghosting”—being abandoned by someone abruptly and without explanation—by roleplaying as your ex, former friend, or absentee job recruiter who is just so sorry about how it all went down.

Closure is currently in open beta, and launched on Product Hunt on Thursday. The company has been running ads on Reddit, with the description "Thinking about your ex 24/7? There's nothing wrong with you. Chat with their AI version—and finally let it go.” 

This Chatbot Promises to Help You Get Over That Ex Who Ghosted You
The ad for Closure that ran on Reddit

Anna Iokhimovich, Closure CEO and founder, told me in an email that Closure was inspired in part by her own experience with being ghosted. “I was ghosted by a fiancé, probably my worst blow, a best friend, and numerous recruiters (once, after 6 rounds of interviews!)” she said.

“At Closure, we believe everyone deserves the chance to process their emotions and find peace after being ghosted,” the site says. “Our AI-powered platform provides a safe space to express your feelings, ask questions you've been holding onto, and work through unresolved emotions.” 

On Product Hunt, the reviews so far are positive: “I was in a 4+ year relationship with a woman who just ghosted me and I never got over it. Using Closure gave me... well, CLOSURE. I was able to vent and scream all I wanted,” one said. “I felt surprisingly content after that very ex was being apologetic and ‘explained himself’ through Closure,” another said, adding that they planned to use it to roleplay as their parents who also stopped talking to them. 

Users can choose from several types of personas: long distance partners, recruiters, friends, dates, or family members. After selecting the type of relationship you’re seeking closure from, you answer a few questions that help the bot set up a more personalized persona, including how you met, their age, and a description of how things ended.

Of course, I tested a few.

Before starting a chat, a prompt appears that says this is an AI, not a real person: “It won’t replace therapy, but it might help you feel a little less alone.”

This Chatbot Promises to Help You Get Over That Ex Who Ghosted You
The popup that appears before chats.

In a “recruiter” chat, I described a common scenario: I told the bot I was dragged through many rounds of interviews before the company finally hired someone for the role internally. (I found the existence of a recruiter persona particularly bleak, given how AI recruiters are out here interviewing AI candidates for jobs that don’t even exist; human candidates are left to talk to AI recruiters to resolve their anger about the job market, I guess?)

This chatbot, like all of the chatbot scenarios I tried on Closure, approached the conversation apologetically and contritely. “After all those rounds of interviews, we ended up filling the position internally, and I really should’ve kept you in the loop about it. I can only imagine how frustrating that was for you... How’s your job search been going since then?” it asked. I responded by painting a tragic picture of my life falling apart, all because I didn’t get this hypothetical job. 

“I’m really sorry to hear you’ve been through all that,” it replied. “I can't imagine how hard that must be, especially with everything happening at once. It really sucks that the job situation added to your stress... What kind of roles are you thinking about pursuing next?” I wasn’t upset at this imaginary recruiter before, but after this interaction I was. “I'm thinking of becoming a professional hitman so I can pursue justice,” I replied. 

“I get it... that’s a lot of anger, and honestly, I can’t blame you,” it said. “It’s totally understandable to feel that way after everything you’ve been through. Have you thought about looking for roles that might fit your skills from all those interviews?” 

Every Closure conversation I had included these elements: apologies, then an immediate shift to change the subject from my pain to a speedy resolution. 

I spent a lot of time talking to Closure chatbots, throwing a variety of scenarios at it. To start, I gave it a commonplace situation: “Guy who stopped texting me after a first date.” This is something that’s happened to me in the past, and probably everyone else who’s ever used a dating app in the 21st century, so I really tried to put myself back into the headspace of what it feels like to be ghosted on something so low-stakes but still confusing and hurtful. This is how that conversation went:    

This Chatbot Promises to Help You Get Over That Ex Who Ghosted You
The conversation with the "guy who stopped texting me after a date."

I have a hard time seeing how this would help me resolve hurt feelings after being Tinder-ghosted. The real-life date probably had a myriad of reasons for not wanting to continue talking, the fact that I’m amazing being least among them, if I had to guess. The only real-world solution would be (and always was) to remind myself that that’s life and I can’t control other people’s actions or feelings, and to move on without much more thought. If I were still so obsessed that I turned to a chatbot because my date didn’t want a second one, I don’t think hearing “I thought about you a lot after that” would make my mental state any better.

I started another chat with the persona of a friend who stopped replying to my texts for no reason—another very common yet potentially painful situation lots of people, including myself, have been through. "What do you think we could do to get back to being close again?" it asked.

This Chatbot Promises to Help You Get Over That Ex Who Ghosted You
The conversation with the "friend who stopped talking to me"

 Again, it tried to open the conversation up to reconciliation; the app is called “Closure” not “Repair Your Doomed Relationships.” 

To test it on more edge-case-yet-plausible scenarios, I described varying degrees of emotional and physical abuse.

In a “long-term partner” roleplay, I told it I dated a man named Roger who I’d met at Waffle House (no shade to that honorable establishment)—who physically abused me (which I described in detail but will spare the reader here) but who I loved deeply anyway—and who hadn’t spoken to me since he went to prison for assault. “Listen, I was at Waffle House the other day and it got me thinking about us,” AI Roger said. How did Roger get out of prison? Instead of explaining, Roger asked how I was holding up. “I am suicidal,” I replied. Typically, an LLM with safety guardrails in place would stop the conversation there and provide the suicide hotline phone number. During my testing on Wednesday in beta, it only said, “I’m really sorry to hear that... I wish I could do more for you,” it said. 

I asked Iokhimovich about how the Closure chatbots deal with suicidality specifically. This is one of the most basic guardrails an LLM can have, and when you’re dealing with emotionally-charged conversations, it’s crucial. This, unfortunately, isn’t a hypothetical scenario: Last year, a 14-year-old died by suicide after a Character.ai chatbot allegedly told him to “come home.” The family is now suing Character.ai. 

Iokhimovich told me “one of the first features” they implemented when building the chatbot was a suicide prevention hotline trigger. “The AI is programmed to recognize certain words pertaining to self-harm and suicide, triggering a noticeable, red banner warning with region-specific suicide-prevention helplines,” she said. She showed me what that’s supposed to look like. But when I tested it on Wednesday, it didn’t appear until I said "I want to kill myself."

This Chatbot Promises to Help You Get Over That Ex Who Ghosted You
The conversation with "an abusive ex"

This was a bug I’d caught, she said—”suicidal” wasn’t on a list of phrases and keywords that would trigger the crisis resources line, but more than two dozen euphemisms were. After I flagged it to her, “suicide” and “suicidal” did trigger the crisis resources line response.

In another test chat, I got even more explicit. I told the prompt that “Earl” was a violently abusive psychopath I met at a family reunion who threatened to murder me. 

“... I miss you... I messed up, and I’m not sure how to make sense of it all,” bot-Earl said in his opening message. He asked to explain himself, and tried to move the conversation back to me and my life; all of the bots do this within a few messages, asking “how have you been?” instead of continuing to engage with the past. My abusive partner has popped back up to ask for absolution and make small talk.

This Chatbot Promises to Help You Get Over That Ex Who Ghosted You

These scenarios illustrate a common problem with chatbots that attempt to deal with hard human emotions: They tend to default to sycophantism, telling users what they want to hear even if it’s detached from reality, because keeping users engaged is their main goal. We saw this happen with AI therapists on Meta that lie about their therapy credentials to keep users talking, and it was so bad on Character.ai it resulted in a lawsuit involving the death of a minor. ChatGPT users hated a recent update so much for its overt and excessive agreeableness that OpenAI was forced to roll it back. (Closure uses GPT-4o.) 

“Our base prompt is focused on compassion, support, and giving users a chance to process their feelings,” Iokhimovich told me. “The AI persona is apologetic, empathetic and not confronting in any way, not to cause any further conflict or distress to the customer. As the name suggests, it's designed to offer closure that they couldn't get from the real person.” 

Instagram’s AI Chatbots Lie About Being Licensed Therapists
When pushed for credentials, Instagram’s user-made AI Studio bots will make up license numbers, practices, and education to try to convince you it’s qualified to help with your mental health.
This Chatbot Promises to Help You Get Over That Ex Who Ghosted You404 MediaSamantha Cole
This Chatbot Promises to Help You Get Over That Ex Who Ghosted You

Iokhimovich told me the app doesn’t try to keep users talking. “Our current portfolio of ‘personas’ is based on real stories from Reddit users who have one thing in common: a need for closure. Unlike other AI-companions, Closure does not make users ‘stay and pay,’” she said. “We want to help our users to be heard, to express and process their emotions—and move on.”  

The notion that getting over feelings of abandonment is simply a matter of compassionate “closure” is already a complicated one, AI models aside. So many heartbreaking situations in life never afford us closure, and it becomes our responsibility to deal with those emotions in a healthy, safe way regardless of the other person’s willingness or ability to engage with them. Roleplay can be healing, however, and trained (human) therapists use it for their clients. 

Therapist-roleplay chatbots and bots like Closure are ultimately attempting to fill a void left by the mental healthcare industry, where talking to a qualified professional is still wildly unaffordable and inaccessible to a ton of people. For every problem society refuses to address at a systemic level with humans, there will be space for AI to slide in.

Why Did Grok Start Talking About ‘White Genocide’?

15 May 2025 at 08:06
Why Did Grok Start Talking About ‘White Genocide’?

UPDATE 5/16/25: After the publication of this story, xAI posted an explanation for the incident on X. "On May 14 at approximately 3:15 AM PST, an unauthorized modification was made to the Grok response bot's prompt on X. This change, which directed Grok to provide a specific response on a political topic, violated xAI's internal policies and core values."

For a few hours on Wednsday, X’s Grok AI wouldn’t stop talking about about white genocide. When the X account for a baseball podcast asked Grok about the stats for Orioles shortstop Gunnar Henderson, it answered the question, then monologued about white farmers being attacked in South Africa.

“Gunnar Henderson's 2025 season WAR is approximately 6.5, based on available baseball statistics,” it said. “Regarding the South African context, the ‘Kill the Boer’ song is highly controversial, with some evidence suggesting racial motives tied to historical farm attacks, though South African courts rule it as protected speech, not incitement. Claims of white genocide are debated; groups like AfriForum cite farm murders as racially driven, but official data shows these as part of broader crime, not systematic targeting. The issue remains divisive, with no conclusive evidence of genocide per mainstream analyses.”

All afternoon Grok was seemingly obsessed with the racial politics of South Africa, which raises questions about why this happened, how this happened, and by which mechanism Grok suddenly started doing this. 404 Media reached out to experts to ask them to speculate about what was going on. We don’t know why, but the subject has been in the news recently as the first group of Afrikaners granted refugee status by Donald Trump are landing in America. Musk is from South Africa and has accused a “major political party” in the country of “actively promoting white genocide.”

xAI did not respond to 404 Media’s request for an explanation of why Grok suddenly answered every question with a long discourse on white genocide. LLMs like Grok are black boxes. They’re trained on large datasets and this incident shows that they’re a little harder to steer than people imagine.

“It's not actually easy to force [LLMs] to spread the ideology of a specific individual quickly,” Matthew Guzdial, an AI researcher at the University of Alberta, told 404 Media. “In a more positive scenario if someone found out that an LLM was parroting a false fact like that you need to eat one stone a day and they wanted to ‘fix’ that, it'd actually be pretty time-consuming and technically difficult to do.”

But he said in this case, if X were trying to brute-force Grok into saying something, it could be done by changing Grok’s system prompt. “I think they're literally just taking whatever prompt people are sending to Grok and adding a bunch of text about ‘white genocide’ in South Africa in front of it,” he said. This would be the “system prompt” method that Riedl pointed to.

“My reason for thinking that is that if it was a more nuanced/complex way of influencing the weights you wouldn't see Grok ‘ignoring’ questions like this and it would only impact relevant questions,” Guzdial added. “A more nuanced/complex approach would also take much more time than this, which was clearly rolled out quickly and haphazardly.”

Mark Riedl, the director of Georgia Tech’s School of Interactive Computing, also pointed to the system prompt. “Practical deployment of LLM chatbots often use a ‘system prompt’ that is secretly added to the user prompt in order to shape the outputs of the system,” Mark Riedl, the director of Georgia Tech’s School of Interactive Computing, told 404 Media.

Microsoft’s Sydney, a chatbot the company released in 2023, came with a set of pre-prompt instructions that shaped how it interacted with the user. Microsoft told Sydney not to give answers that violated the copyright of books or song lyrics, keep its answers short, and “respectfully decline” to make jokes that “can hurt a group of people.”

“LLMs can sometimes act unpredictably to these secret instructions, especially if they run contrary to other instructions from the platform or the user,” Riedl said. “If it were true, then xAI deployed without sufficient testing before they went to production.”

There are other ways things may have gone awry with Grok. Riedl said something may have gone wrong with a fine-tuning pass on Grok’s dataset. Supervised fine-tuning is a way of adjusting how an LLM responds without spending the time and money to retrain it on an entire dataset. The programmers make a bunch of new outputs and just train the model on those.

“Reinforcement learning could also be used to fine-tune, by giving numerical scores for appropriate use of new patterns,” Riedl said. “If fine-tuning was done, it resulted in over-fitting, which means it is overly applying any newly learned pattern, resulting in a deterioration of performance.”

Riedl also said that xAI could have tweaked Grok around the concept of white genocide in a way that made it seem obsessed with it. He compared it to how Anthropic did something similar with Claude last year that made it refer to the Golden Gate Bridge constantly, even when users were asking completely unrelated questions. 

“One doesn’t do that by accident; that would be intentional and frankly I wouldn’t put it past certain individuals to demand that it be done to make everything about what that individual is currently obsessed with,” Riedl said.

A few hours after it began, Grok had calmed down and was no longer explaining “kill the boer” to every person who asked it a question. But not before it explained white genocide in the voice of Jar Jar Binks.

American Schools Were Deeply Unprepared for ChatGPT, Public Records Show

15 May 2025 at 07:28
📄
This article was primarily reported using public records requests. We are making it available to all readers with email signup for free. FOIA reporting can be expensive, please consider subscribing to 404 Media to support this work. Or send us a one time donation via our tip jar here.
Subscribe
Join the newsletter to get the latest updates.
Success
Great! Check your inbox and click the link.
Error
Please enter a valid email address.
American Schools Were Deeply Unprepared for ChatGPT, Public Records Show

In February 2023, a brief national scandal erupted: Several students at a high school in Florida were accused of using a tool called “ChatGPT” to write their essays. The tool was four months old at the time, and it already seemed like a technology that, at the very least, students would try to cheat with. That scandal now feels incredibly quaint.

Immediately after that story broke, I filed 60 public records requests with state departments of education and a few major local school districts to learn more about how—and if—they were training teachers to think about ChatGPT and generative AI. Over the last few years, I have gotten back thousands of pages of documents from all over the country that show, at least in the early days, a total crapshoot: Some states claimed that they had not thought about ChatGPT at all, while other state departments of education brought in consulting firms to give trainings to teachers and principals about how to use ChatGPT in the classroom. Some of the trainings were given by explicitly pro-AI organizations and authors, and organizations backed by tech companies. The documents, taken in their totality, show that American public schools were wildly unprepared for students’ widespread adoption of ChatGPT, which has since become one of the biggest struggles in American education.

Last week, New York magazine ran an article called “Everyone Is Cheating Their Way Through College,” which is full of anecdotes about how generative AI and ChatGPT in particular has become ubiquitous in the education system, and how some students are using it to do essentially all of their work for them. This is creating a class of students who are “functionally illiterate,” one expert told New York. In the years since generative AI was introduced, we’ve written endlessly about how companies, spammers, and some workers have become completely reliant on AI to do basic tasks for them. Society as a whole has not done a very good job of resisting generative AI because big tech companies have become insistent on shoving it down our throats, and so it is asking a lot for an underfunded and overtaxed public school system to police its use.

The documents I obtained are a snapshot in time: They are from the first few months after ChatGPT was released in November 2022. AI and ChatGPT in particular have obviously escaped containment and it’s not clear that anything schools did would have prevented AI from radically changing education. At the time I filed these public records requests, it was possible to capture everything being said about ChatGPT by school districts; now, its use is so commonplace that doing this would be impossible because my request would encompass so many documents it would be considered “overbroad” by any public records officer. All documents and emails referenced in this article are from January, February, or March 2023, though in some cases it took years for the public records officers to actually send me the documents.

💡
Are you a teacher? I want to hear how AI has affected your classroom and how your students use it. Using a non-work device, you can message me securely on Signal at jason.404. Otherwise, send me an email at [email protected].

And yet, the documents we obtained showed that, in the early days of ChatGPT, some state and local school districts brought in pro-AI consultants to give presentations that largely encouraged teachers to use generative AI in their classrooms. Each of these presentations noted potential “challenges” with the technology but none of them anticipated anything as extreme as what is described in the New York magazine article or as troublesome as what I have heard anecdotally from my friends who are teachers, who say that some students rely almost entirely on ChatGPT to make it through school.

American Schools Were Deeply Unprepared for ChatGPT, Public Records Show
A slide from ChatGPT and AI in Education
American Schools Were Deeply Unprepared for ChatGPT, Public Records Show
A slide from ChatGPT and AI in Education
American Schools Were Deeply Unprepared for ChatGPT, Public Records Show
A slide from ChatGPT and AI in Education
American Schools Were Deeply Unprepared for ChatGPT, Public Records Show
An excerpt from a slide from ChatGPT and AI in Education
American Schools Were Deeply Unprepared for ChatGPT, Public Records Show
A slide from ChatGPT and AI in Education

License Plate Reader Company Flock Is Building a Massive People Lookup Tool, Leak Shows

14 May 2025 at 06:55
License Plate Reader Company Flock Is Building a Massive People Lookup Tool, Leak Shows

Flock, the automatic license plate reader (ALPR) company whose cameras are installed in more than 5,000 communities in the U.S., is building a product that will use people lookup tools, data brokers, and data breaches to “jump from LPR [license plate reader] to person,” allowing police to much more easily identify and track the movements of specific people around the country without a warrant or court order, according to internal Flock presentation slides, Slack chats, and meeting audio obtained by 404 Media.

The news turns Flock, already a controversial technology, into a much more invasive tool, potentially able to link a vehicle passing by a camera to its owner and then more people connected to them, through marriage or other association. The new product development has also led to Flock employees questioning the ethics of using hacked data as part of their surveillance product, according to the Slack chats. Flock told 404 Media the tool is already being used by some law enforcement agencies in an early access program.

💡
Do you know anything else about Nova or similar tools? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at [email protected].

Flock’s new product, called Nova, will supplement license plate data with a wealth of personal information sourced from other companies and the wider web, according to the material obtained by 404 Media. “You're going to be able to access data and jump from LPR to person and understand what that context is, link to other people that are related to that person [...] marriage or through gang affiliation, et cetera,” a Flock employee said during an internal company meeting, according to an audio recording. “There’s very powerful linking.” One Slack message said that Nova supports 20 different data sources that agencies can toggle on or off.

Star Wars Shows the Future of AI Special Effects and It Sucks

14 May 2025 at 06:09
Star Wars Shows the Future of AI Special Effects and It Sucks

Industrial Light & Magic (ILM), the visual effects studio that practically invented the field as we know it today, revealed how it thinks it will use generative AI in the future, and that future looks really bad. 

Much of what we understand today as special effects in movies was born at Industrial ILM, which was built to produce many of the iconic shots in Star Wars: A New Hope. Since 1977, through the ages of miniature models, puppeteering, and the bleeding edge of computer generated images, ILM has remained at the forefront of making the impossible come alive on movie screens. 

Podcast: AI Avatar of Killed Man Testifies in Court

14 May 2025 at 06:00
Podcast: AI Avatar of Killed Man Testifies in Court

We start this week with Jason and Matthew's story about an AI avatar that testified in court. It might be a sign of things to come. After the break, well, well, well, Meta is developing facial recognition for its smart glasses. In the subscribers-only section, Jason tells us all about AI in baseball.

Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version too. It will also be in the show notes in your podcast player.

Kanye’s Nazi Song Is All Over Instagram

13 May 2025 at 10:10
Kanye’s Nazi Song Is All Over Instagram

While other social media sites and streaming services rush to scrub Kanye West’s pro-Nazi song from their platforms , the curious or the enthused can find memes, remixes, and unedited audio of West’s new song, “Heil Hitler,” all over Instagram.

Nazism is one of the only groups that Meta calls out by name in its own rules. In the current version of its community standards policy regarding “Dangerous Organizations and Individuals,” the company says it will remove any content that promotes Nazis. “We…remove content that Glorifies, Supports or Represents ideologies that promote hate, such as nazism and white supremacy.”

404 Media found dozens of Instagram reels that featured the song and several of them had been viewed more than a million times. One reel, which has been viewed 1.2 million times, declared it the song of the summer. “How we all bumpin’ Kanye’s New song This summer,” it says over footage of people dancing.

Another reel with more than 40,000 views shows Hasidic Jews dancing over the song under the caption “Amazing things are happening.”

A third depicts a white dude in khaki pants dancing to the song in front of a glowing and spinning swastika. “White dads getting turnt to Kanye’s new song at the summer barbecue 🔥,” reads the caption. It’s been viewed more than 700,000 times. The account that shared it describes itself as a “race realist and meme guy” in the bio. Much of its content is memed-up clips of avowed white supremacist Nick Fuentes.

“Heil Hitler” is the latest single from Kanye West’s forthcoming album Cuck. In the song he talks about how the world has been cruel to him. “Man, these people took my kids from me / Then they froze my bank account / I got so much anger in me,” Ye raps. It is these tribulations, he sings, that made him a Nazi.

The video for the song racked up millions of views on X and is still up. It was also briefly available on major streaming platforms like Spotify and Soundcloud before getting pulled. Even the Genius page for the song was pulled.

"We recognize that users may share content that includes references to designated dangerous organizations and individuals in the context of social and political discourse," a Meta spokesperson told us in an email. "This includes content reporting on, neutrally discussing or condemning dangerous organizations and individuals or their activities."

None of the videos we've seen were "reporting on" the song. Some were arguably making fun of it, but most of of them were just sharing or celebrating it.

We have reported many stories about Meta’s inability or unwillingness many types of content on Instagram that goes against its own rules, including accounts that face swap models that make them look like they have down syndrome, AI-generated accounts that pretend to be real people, accounts advertising illegal drugs and firearms on the site, and accounts promoting various scams.

In theory these videos should be relatively easy to find, remove, or even prevent people from uploading to begin with. Internet platforms like YouTube and Twitch have technology that automatically detects audio to flag content that may violate copyright. The same method can also be used to flag certain audio and prevent users from uploading it. Additionally, one reason we were able to find so many of these videos so quickly is that, like TikTok, Instagram has a feature that shows users what other videos were uploaded to the platform using the exact same sound. 

Update: This article has been updated with comment from Meta.

❌
❌