Scientists Discover New World In Our Solar System: ‘Ammonite’

Welcome back to the Abstract! Here are the studies that made me smile, think, and despair for humanity this week.

First up, it’s officially a hot Jurassic summer with the recent release of yet another Mesozoic movie filled with de-extincted animals that are oddly preoccupied with human flesh. We’ll lead with a story about a fantastic Jurassic predator that didn’t make the cut for Jurassic World: Rebirth, but will eternally star in your nightmares hereafter.

Then: a whole new world, the horrific consequences of Medicaid cuts, and the cosmologies of ancient graveyards.

The case of the cursed ichthyosaur 

Lindgren, Johan et al. “Adaptations for stealth in the wing-like flippers of a large ichthyosaur.” Nature.

Jaws, a summer blockbuster about how a rampaging shark can expose paradigms of masculinity, turned 50 years old last month. But if you want to meet a truly O.G. stealth ocean predator, you'll need to wind the clock back another 181 million years, according to a new study about Temnodontosaurus, a Jurassic predator that belongs to the extinct ichthyosaur family.

Scientists have discovered an exquisitely preserved front fin from this giant hunter, which grew to lengths of more than 30 feet. Unearthed in Germany, the fin includes a “wing-like” shape with “a serrated trailing edge” that probably evolved to reduce the sound it makes while sneaking up on its prey, according to researchers led by Johan Lindgren of Lund University. 

183-million-year-old soft-tissue fossil (SSN8DOR11; Paläontologisches Museum Nierstein, Nierstein, Germany). Image: Randolph G. De La Garza, Martin Jarenmark and Johan Lindgren.

“The notably wing-like fin sheds light on the unique hunting strategy” of Temnodontosaurus, “revealing secondary control structures that probably served to minimize self-generated noise during foraging activities in low-light habitats—in effect, a novel form of stealth (silent swimming) in an ancient marine reptile,” the team said in the new study. 

In other words, this animal had a silencer built into its fin, all the better to ambush fish, squid, reptiles, and other aquatic Jurassic delicacies. But wait—it gets creepier. Temnodontosaurus is most famous for its absolutely enormous eyeballs, with sockets that measured some 10 inches in diameter, potentially making them the biggest eyes of any animal that ever lived.

“A conspicuous feature of Temnodontosaurus is its huge eyeballs; these are the largest of any vertebrate known, rivaling those of the giant and colossal squid (of the genera Architeuthis and Mesonychoteuthis) in absolute size,” Lindgren and his colleagues said. “There is broad consensus that the eyes conferred advantages at low light levels, and thus were well suited either for nocturnal life or deep diving habits.”

Temnodontosaurus, staring at you from beyond the grave. Image: Ghedo, taken at the Paris Museum of Natural History

In Jaws, the shark hunter Quint, played by Robert Shaw, seems especially haunted by the eyes of sharks, describing them as “lifeless eyes, black eyes, like a doll's eyes” in his chilling firsthand account of the sinking of the U.S.S. Indianapolis.

But hey, I’ll take the doll’s eyes of a great white over the freakish monster gaze of Temnodontosaurus any day of the week (or geological epoch). What a relief that none of us will ever encounter this nighttime predator with its bus-length body, acoustic invisibility cloak, and pizza-pan peepers.   

In other news…

New sednoid just dropped

Chen, Ying-Tung et al. “Discovery and dynamics of a Sedna-like object with a perihelion of 66 au.” Nature Astronomy.

Scientists have discovered a new world in the solar system: the trans-Neptunian object (TNO) 2023 KQ14, nicknamed Ammonite. The object is estimated to be about a hundred miles across and has an extreme orbit that takes it as far as 252 times the distance between the Earth and the Sun. It belongs to a family of distant worlds called “sednoids” after the dwarf planet Sedna.

https://www.youtube.com/watch?v=z561PGgTe8I

“The discovery of ‘Ammonite’...offers a valuable opportunity to evaluate current models of outer Solar System formation and evolution,” said researchers led by Ying-Tung Chen of Academia Sinica in Taipei. “These findings highlight the diversity of orbital properties and dynamical behaviours among distant Solar System objects.”

As an interesting twist, this new world may be a strike against the idea that a giant hypothetical planet, popularly known as Planet Nine, is lurking in the outer reaches of the solar system. Its orbit doesn’t seem to line up with that theory. Time will probably tell, because Planet Nine—if it does exist—is running out of places to hide.

Medicaid cuts could cause thousands of excess deaths by 2034

Basu, Sanjay et al. “Projected Health System and Economic Impacts of 2025 Medicaid Policy Proposals.” JAMA Health Forum.

There has been a lot of speculation about the extreme Medicaid cuts in the recently passed Big Beautiful Bill, but a new report summarizes the predicted effects with devastating brevity. 

“CBO projections suggest 7.6 million individuals in the US would become uninsured by 2034 due to Medicaid policy changes” resulting in an annual increase of “approximately 1,484 excess deaths, 94,802 preventable hospitalizations” and “1.6 million people delaying care due to cost,” said researchers led by Sanjay Basu of the University of California, San Francisco.

This is a conservative estimate: In the higher-impact scenario where more than 14 million people lose Medicaid by 2034, annual impacts “are estimated [to] be substantially greater: 2,284 excess deaths, 145,946 preventable hospitalizations [and] 2.5 million people delaying care,” according to the study.

This analysis doesn’t include the cuts to Affordable Care Act Marketplace subsidies or possible changes to Medicare, which would compound these negative effects. At the risk of sounding glib…seems bad!

Eternal sunsets for the Yangshao dead

Chen, Yuqing. “Cosmology in the Orientation of Neolithic Burials in Central China: The Xipo and Qingliangsi Cemeteries.” Journal of World Prehistory.

You can tell a lot about a culture from the way it treats its living (see above) but also from the way it treats its dead. 

With that in mind, Yuqing Chen of Durham University set out to better understand the Yangshao culture (仰韶) of central China, which spanned 4700–2800 BCE, by cataloging the orientations of graves of people buried at the Xipo and Qingliangsi burial grounds.

This work is overflowing with cool insights, from the careful placement of goods inside graves, like cooking pots and ovens, to reconstructions of the Neolithic sky, to an explanation of the Gaitian model of the universe in which “the sky was perceived as a lid parallel to the Earth, and the celestial bodies, such as the Sun, were thought to move within the lid,” according to the study. 

A diagram of the Gaitian model. Image: Wu, 2020

Ultimately, Chen concluded that the predominantly westward orientations of the Neolithic graves did not necessarily reflect “the importance of particular astronomical phenomena known to have been important in later times (e.g. the Milky Way or the star Antares), but rather the direction in which sunsets are most commonly seen throughout the year.”

“It is suggested that in the cosmology of the Late Neolithic period, the Sun was perceived to play a key role throughout the year in the worlds of the living and the dead, by maintaining the harmony of sky, Earth and human,” she said.

May we all aspire to maintain some harmony between sky, Earth, and humanity this weekend, and beyond. Thanks for reading! See you next week.

Behind the Blog: High Stakes Data Dumps

This is Behind the Blog, where we share our behind-the-scenes thoughts about how a few of our top stories of the week came together. This week, we discuss data dumps, high stakes, and lizard brain screen time.

JOSEPH: Flight Manifests Reveal Dozens of Previously Unknown People on Three Deportation Flights to El Salvador is the hardest hacking-related article we’ve ever worked on.

I’ve obtained some very sensitive data breaches over the last decade: metadata of specific individuals from the massive AT&T breach; photos of people’s genitalia pre- and post-plastic surgery. Honestly, it’s hard to remember them all.

The data here wasn’t even necessarily as sensitive or personal as those. It was flight manifests, which contain people’s names, the flight they were on, and their gender. That’s basically it. But figuring out how to handle publication of the data was exceptionally complicated, and that’s why it took a while between first obtaining the data a few months ago and publishing this week.

Come Party With Us In Los Angeles July 30

Hello 404 Media readers! We're excited to announce that we're having our first ever party in Los Angeles. We have partnered with the amazing DIY hackerspace RIP.SPACE in the Arts District.

We'll start the night with a live podcast about the surveillance technologies powering ICE, with a specific focus on tools that are being used in Los Angeles. We'll then change gears and do some Q&A about 404 Media and independent journalism. I'm considering doxing my Instagram algorithm as well. After that, we'll have a reception and party with music from our friend DJ Avey.

We'll have free beer and wine, good vibes, and hopefully a good conversation. Tickets are free for subscribers, $10 for the general public (you can also subscribe for free entry here). If you're a subscriber, scroll to the end of this post for your free ticket code.

GET TICKETS HERE

White House Partners With PragerU to Make AI-Slopified Founding Fathers

Conservative content mill PragerU is partnering with the White House to make AI-generated videos of founding fathers and Revolutionary War-era randos.

PragerU is a nonprofit organization with a mission “to promote American values through the creative use of digital media, technology and edu-tainment,” according to its website. It has been criticized for advancing climate denial and slavery apologism; it frequently publishes videos critical of “wokeness” and “DEI” and is very concerned about “the death of the West.” It has also been increasingly integrated into school curricula around the country.

PragerU held a launch event for the series, “Road to Liberty,” on June 25. Education Secretary Linda McMahon took some time away from dismantling the Department of Education to speak at the event. In person at the White House, visitors can tour a display of notable Revolutionary War people and places, and scan QR codes on the displays that take them to PragerU’s AI-generated videos of people from that time period speaking.

Each of the videos highlights a different person who was alive during the signing of the Declaration of Independence, from former presidents to relatively minor players in the fight for independence. The videos are clearly AI-generated, with the sepia-toned figures’ mouths moving almost independently from the rest of their faces in some of them. In one, an AI-generated John Adams says “facts do not care about our feelings,” a phrase commonly attributed to conservative commentator and PragerU contributor Ben Shapiro.

At the end of the videos, there's a logo for the White House with the text "brought to you by PragerU," and a disclaimer: "The White House is grateful for the partnership with PragerU and the U.S. Department of Education in the production of this museum. This partnership does not constitute or imply U.S. Government or U.S. Department of Education endorsement of PragerU."

Professor of history Seth Cotlar spotted the videos in a thread on Bluesky:

I have the unfortunate duty to inform you that the WH & Dept of Ed, as part of the Trump Admin's celebration of the 250th anniversary of the Declaration of Independence, has partnered with Prager U to create AI-slop videos in which we see John Adams say "facts do not care about your feelings."

Seth Cotlar (@sethcotlar.bsky.social) 2025-07-16T14:47:30.314Z

I asked Cotlar, as someone who specializes in American history and the rise of the far-right, what stood out to him about these videos. I thought it was odd, I said, that they chose to include people like politician and disgraced minister Lyman Hall and obscure poet Francis Hopkinson alongside more well-known figures like John Adams or Thomas Jefferson. 

“You're right to note that it's a pretty odd collection of figures they've chosen,” Cotlar said. “My guess is that this is part of the broader right wing populist push to frame themselves as the grassroots ‘true Americans,’ and they're including all of these lesser known figures with the hopes that their viewers will be like ‘oh wow, look at all of these revolutionary freedom fighters like me who were just kinda ordinary guys like me but who still changed history.’” 

He also said it’s noteworthy that the “Road to Liberty” lineup so far is almost entirely white men, including the random dudes like Hall and Hopkinson. “The lack of any pretense to inclusion is pretty notable. Even conservative glosses on the Revolution from the pre-Trump era would have included things like the Rhode Island Regiment or Lemuel Haynes or Phillis Wheatley. Needless to say, they absolutely do not include Deborah Sampson,” Cotlar said. All of the people in the “coming soon” section on PragerU’s website are also white men.

AI slop has become the aesthetic of the right, with authoritarians around the world embracing ugly, lazy, mass-produced content like PragerU’s founding father puppets. Here in the U.S., we have President Donald Trump hawking it on his social media accounts, including AI-generated images of himself as the Pope and “Trump Gaza,” an AI video and song depicting Gaza as a vacation paradise where Trump parties alongside his former bestie Elon Musk. As Republicans used the response to Hurricane Helene to blame migrants, Amy Kremer, founder of Women for Trump, posted an AI image of a child caught in a flood hugging a puppy and then said she didn’t care that it wasn’t real: “Y’all, I don’t know where this photo came from and honestly, it doesn’t matter,” she wrote on X. Mike Lee shared the same image. AI slop makes for quick and easy engagement farming, and now it’s being produced in direct partnership with the White House.

I’m not sure what app or program PragerU is using to make these videos. I thought, at first, that they might be using one of the many basic lipsyncing or “make this old photo come alive” mobile apps on the market now. But the videos look better, or at least more heavily produced, than most of those apps are capable of. Just to make sure they haven’t somehow advanced wildly in the last few months since I checked one out, I tried one of them, Revive, and uploaded an image of John Adams to see if it would return anything close to what PragerU’s putting out. It did not. 

The PragerU videos aren't this bad, but they also aren’t as good as what would come out of Veo 3, the newest AI video generator, which generates highly realistic videos complete with sound and speech, from text prompts. I gave Veo a painting of John Adams and told it what to say; PragerU probably isn’t using this generator, because the result is much more realistic than what’s in the “Road to Liberty” series, even when I use a screenshot from one of their videos.

John Adams in Veo 3 using a painting as a prompt.

On the off chance the culprit is Midjourney—although the series’ style and the way the subjects’ mouths move almost independently of the rest of their faces don’t match what I’ve seen of Midjourney’s videos—I tried that one, too. I just gave Midjourney the same Adams portrait and a prompt for it to animate him praising the United States and it returned a raving lunatic, silently screaming. 

Striking out so far, I emailed Hany Farid, a professor at UC Berkeley and Chief Science Officer of synthetic media detection company GetReal, and asked if he had any leads. He said it looked similar to what comes out of AI video creation platform HeyGen, which creates AI talking heads and generates speech for them using ElevenLabs. I tried this on screenshots of the avatars in PragerU’s Martha Washington and John Adams videos to see if the puppet-mouth-style matched up, and they were pretty close.

HeyGen John Adams

HeyGen Martha Washington

PragerU’s videos are still more heavily produced than what I could make using the free version of HeyGen; it’s possible they used a combination of these tools to make the videos, plus some old-fashioned video editing and animation to create the final products. PragerU reported almost $70 million in income last year; it can afford the effort.

“While the PragerU stuff is distinctly terrible, it's not like our culture has commemorated the Revolution with high-minded sophistication,” Cotlar told me. “I was 8 during the bicentennial and while I definitely learned some stuff about the founding era, most of what I absorbed was pretty schlocky.” He mentioned the “Bicentennial Minutes” that were broadcast in 1975 and 1976, sponsored by Shell, and which TV critic John J. O’Connor called “so insubstantial as to be almost meaningless.” The series won an Emmy.

In the last two years, several states, beginning with Florida, have approved PragerU content to be taught in public school classrooms. In Oklahoma, teachers relocating from states with "progressive education policies” will have to undergo an assessment in partnership with PragerU to determine if they’re allowed to teach. "If you want to teach here, you'd better know the Constitution, respect what makes America great, and understand basic biology,” State Superintendent Ryan Walters said in a press release. “We're raising a generation of patriots, not activists, and I'll fight tooth and nail to keep leftist propaganda out of our classrooms."

The Astronomer CEO's Coldplay Concert Fiasco Is Emblematic of Our Social Media Surveillance Dystopia

The clip of a CEO seemingly having an affair with his company’s head of HR at a Coldplay concert is a viral video for the ages, but it is also, unfortunately, emblematic of our current private surveillance and social media hellscape.

The video, which is now viral on every platform we can possibly think of, has been covered by various news outlets, and is Pop Crave official. It shows Andy Byron, the CEO of a company called Astronomer, with his arms around Astronomer’s head of HR, Kristen Cabot. The jumbotron cuts from one fan to this seemingly happy couple, and they both simultaneously die inside. “Oh look at this happy couple,” Coldplay lead singer Chris Martin says. The woman covers her face and spins away. The man ducks out of frame. “Either they’re having an affair or they’re very shy,” Martin says. The camera pans to another company executive standing next to them, who is seemingly shaking with discomfort.

It is hard to describe how viral this is at the moment, in a world in which so many awful things are occurring and in which nothing holds anyone’s attention for any length of time and in a world in which we are all living in our own siloed realities. “Andy Byron” is currently the most popular trending Google term in the United States, with more than double the searches of the next closest term. 

There are so many levels to this embarrassment—the Coldplay of it all, the HR violation occurring on the jumbotron, etc.—that one could likely write a doctoral dissertation on this 15-second video.

Inside ICE’s Supercharged Facial Recognition App of 200 Million Images

ICE officers are able to point their smartphone’s camera at a person and near instantaneously run their face against a bank of 200 million images, then pull up their name, date of birth, nationality, unique identifiers such as their “alien” number, and whether an immigration judge has determined they should be deported from the country, according to ICE material viewed by 404 Media.

The new material, which includes user manuals for ICE’s recently launched internal app called Mobile Fortify, provides granular insight into exactly how ICE’s new facial recognition app works, what data it can return on a subject, and where ICE is sourcing that data. The app represents an unprecedented linking of government databases into a single tool, including from the State Department, Customs and Border Protection (CBP), the FBI, and state records. It also includes the potential for ICE to later add commercially available databases that contain even more personal data on people inside the United States.

💡
Do you know anything else about this app? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at [email protected].

“This app shows that biometric technology has moved well beyond just confirming someone's identity. In the hands of ICE officers, it's becoming a way to retrieve vast amounts of data about a person on demand just by pointing a camera in their face,” Dave Maass, director of investigations at the Electronic Frontier Foundation (EFF), told 404 Media. “The more they streamline its use, the more they streamline its abuse. When an officer says, ‘papers please,’ you could choose to say nothing and face the consequences; with face recognition, your options are diminished.”

Flight Manifests Reveal Dozens of Previously Unknown People on Three Deportation Flights to El Salvador

The flight manifests for three legally contested deportation flights from Texas to El Salvador contain dozens of additional, unaccounted-for passengers beyond a previously published Department of Homeland Security (DHS) list of people deported from the United States on those flights, 404 Media has learned. The additional people on the flight manifests have not been publicly acknowledged by the U.S. government in any way, and immigration experts who have been closely monitoring Trump’s deportation campaign say they have no idea where these people are or what happened to them. 404 Media is now publishing the names of these people.

On March 15, the Trump administration deported more than 200 people on three aircraft to a megaprison in El Salvador. A judge blocked the deportations, but hours later the flights still landed in the country. It marked one of the major turning points of the administration’s mass deportation efforts, and signaled what was to come around the country—a lack of due process, authorities ignoring judges’ rulings, and deportations carried out on the flimsiest of pretenses. Soon after these flights, CBS News published an “internal government list” of people it said were deported to CECOT, the notorious Salvadoran megaprison.

But in May, a hacker targeted GlobalX, the airline that operated these flights, and shared the data with 404 Media. In addition to the names of people who were on the list CBS News published, the GlobalX flight manifests contain the names of dozens of people who were supposedly on the flights but whose status and existence have not been acknowledged by the U.S. government or previously reported in the press.

“We have this list of people that the U.S. government has not formally acknowledged in any real way and we pretty much have no idea if they are in CECOT or someplace else, or whether they received due process,” Michelle Brané, executive director of Together and Free, a group that has been working with families of deported people, told 404 Media. “I think this further demonstrates the callousness and lack of due process involved and is further evidence that the US government is disappearing people. These people were detained and no one knows where they are, and we don't know the circumstances […] For almost all of these people, there’s no records whatsoever. No court records, nothing.” 

💡
Do you know anything else about these people or flights? We would love to hear from you. Using a non-work device, you can message Jason securely on Signal at jason.404 or send an email to [email protected]. You can Signal Joseph at joseph.404 or email [email protected].

“[The government is] not disclosing it and they’ve presumably been sent to a prison or sent somewhere by the U.S. government on a plane and have never been heard from since,” she added. “We have not heard from these people’s families, so I think perhaps even they don’t know.”

Brané added that it remains entirely unclear whether all of these people were actually on the flights or why they were on the manifests. If they were indeed on the flights, it is unknown where they currently are. That uncertainty, and the unwillingness of the U.S. government to provide any clarity about these people, is a major problem, she said.

While the stories of some of the people deported on these flights have garnered a lot of attention, such as Kilmar Abrego Garcia, U.S. authorities have refused to reveal the names of everyone on board. 

While the whereabouts and circumstances of most of these people remain unknown, Brané’s organization used publicly available data to try to better understand who they are. In some cases, Together and Free was able to identify a few details about specific people on the manifest. For example, one person on the manifest appears to have been arrested by local police in Texas in late December on drug possession charges and is listed in arrest records as being an “illegal alien.” Another person was arrested in Nashville in February on charges of driving without a license. For many other people listed, there is no easily discernible public data about who they are or why they appeared on the flight manifest.

Several other people are on the flight manifests and do not appear on the CBS News list, but their identities had already become public because their families have filed lawsuits or have been looking for them on social media. These include Abrego Garcia and Ricardo Prada Vásquez, a man whose family said he was “disappeared” because he did not appear on any official, publicly published lists. After the New York Times published an article about his disappearance, the Trump administration said he was at CECOT, and 404 Media was able to find his name on the March 15 flight manifests. 

In Venezuela, the family of another man who appears on the flight manifests but not on the CBS News list, Keider Alexander Flores Navas, has been protesting his disappearance and demanding answers. In a TikTok video posted in March, his mother Ana Navas explains that they suddenly stopped hearing from Keider before the March 15 flights. She said she eventually heard he was in federal detention. Then, she saw a photo of him in CECOT amongst a group of other prisoners: “The thing that worried me the most was he was not on any list. But this photo is from El Salvador. Lots of family members here recognize their sons [in official CECOT photos]. That’s my son,” she says, the camera panning to a circled image of Keider in CECOT.

In another TikTok video posted in June, the mother of 21-year-old Brandon Sigaran-Cruz explains that he had been “disappeared for three months” with no news of his whereabouts. Sigaran-Cruz also appears on the flight manifest but not the CBS News list. 

 The U.S. government previously acknowledged that, along with more than 200 Venezuelan citizens, it deported 23 Salvadorans to El Salvador on the three March 15 flights. There is no formal list of the Salvadorans who were on the flight, and none of them appeared on the CBS News list, which included only Venezuelan citizens. 

The United Nations’ Human Rights Office has also filed court petitions saying that it is investigating the “involuntary disappearances” of at least four Venezuelans who were sent to El Salvador on these flights. “Neither the Government of El Salvador nor the Government of the United States has published official information on the list of deported persons or their current place of detention,” the United Nations said in a “Report on Enforced or Involuntary Disappearances” it filed in court.

“There continues to be very little clarity as to the fate and whereabouts of the Venezuelans removed to El Salvador. To date, no official lists of the deported detainees have been published. Provision of further information by authorities is key, including providing families and their counsel with available information on the specific situation and whereabouts of their loved ones,” Elizabeth Throssell, a spokesperson for the UN Human Rights Office, told 404 Media in an email. “The UN Human Rights Office has been in contact with family members of over 100 Venezuelans believed to have been deported to El Salvador.”

404 Media asked the Department of Homeland Security (DHS) over multiple weeks if the agency had any legitimate security concerns with these names being published, or if it could tell us anything about these people. The agency never responded, despite responding to requests for comment for other 404 Media articles. GlobalX did not respond to a request for comment either.

“It is critical that we know who was on these March 15 flights,” Lee Gelernt, a lawyer at the American Civil Liberties Union (ACLU) and the lead counsel on the ACLU’s related case, told 404 Media. “These individuals were sent to a gulag-type prison without any due process, possibly for the remainder of their lives, yet the government has provided no meaningful information about them, much less the evidence against them. Transparency at a time like this is essential.”

In recent months, the U.S. government has said that the Salvadoran government has jurisdiction over the people detained in CECOT, while El Salvador told the United Nations that “the jurisdiction and legal responsibility for these persons lie exclusively with the competent foreign authorities [the United States],” leading to a situation where people are detained in a foreign prison but neither responsible party is willing to claim legal responsibility for them. A similar situation has happened in Florida at the “Alligator Alcatraz” camp, where people detained by the federal government are being held in a state-run facility, and experts have said it’s not clear who is in charge. Brané said that with the massive increase in ICE funding as part of Trump’s new law, we are likely to see more detention camps, more detainments, more deportation flights, and, likely, more people who aren’t publicly accounted for in any way.

“When you look at what ICE is doing now in terms of how they treat people, how they operate when they're given even a little bit of rope, it’s terrifying to think what the budget increase is going to do,” Brané said. “This is a taste of what we're going to see on a much larger scale."

You can read the list below. 404 Media has removed people listed on the flight manifests as “guards” (404 Media found at least one of these names matched someone who lists their employment online as a flight transport detention officer). Eight women reportedly deported to El Salvador were later returned. 404 Media is not publishing the names of women known to have returned to the U.S. The manifest also includes the names of several Salvadorans mentioned as being deported in a White House press release, court proceedings, and media reports. We have not included their names below because the administration has formally acknowledged that they were deported.

Manuel Quijada-Leon
Irvin Quintanilla-Garcia
Jose Ramirez-Iraheta
Josue Rivera-Portillo
Jorge Rodriguez Gomez
Mario Jeavanni Rojas
Edgar Leonel Sanchez Rosales
Brandon Sigaran-Cruz
Miguel Enriquez Saravia
Abraham Hernandez-Mania
Jean Morales-Loaiza
Nelson Alfaro-Orellana
Jhonnarty Pachecho-Chirinos
Cristian Alpe-Tepas
Jordyn Alexander Alvarez
Jose Alvarez Gonzalez
Wilfredo Avendano Carrizalez
Jose Gregorio Buenano Cantillo
Istmar Campos Mejia
Jose Chanta-Ochoa
Keider Alexander Flores Navas
Noe Florez-Valladares
Miguel Fuentes-Lopez
Roberto Interiano Uceda
Jose Lopez Cruz
Diego Maldonado-Fuentes
William Martinez-Ruano
Osmer Mejias-Ruiz
Iran Ochoa Suescun
David Orantez Gonzalez
Ariadny Araque-Cerrada
Elena Cuenca Palma
Maria Franco Pina
Mayerkis Guariman Gonzalez
Wilmary Linares-Marcano
Scarlet Mendoza Perez
Ofreilimar Peña Boraure
Edilianny Stephany Rivero Sierralta
Dioneli Sanz Aljorna
Anyeli Sequera Ramirez
Yanny Suarez Rodriguez
Karla Villasmil-Castellano

The 800-Year-Old Mystery of a Lost Medieval Legend Is Solved, Researchers Say

A major mystery about a long-lost legend that was all the rage in Medieval England but survives in only one known fragment has been solved, according to a study published on Tuesday in The Review of English Studies.

Roughly 800 years ago, a legend known today as the Song of Wade was a blockbuster hit for English audiences. Mentions of the heroic character showed up in the works of Geoffrey Chaucer, for example. But the tale vanished from the literature centuries later, puzzling generations of scholars who have tried to track down its origin and intent. 

Now, for the first time, researchers say they’ve deciphered its true meaning—which flies in the face of the existing interpretation. 

“It is one of these really interesting and very unusual situations where we have a legend that was widely known and hugely popular throughout the Middle Ages, and then very suddenly in the middle of the 16th century, in the High Renaissance, it's just completely lost,” said James Wade, fellow in English at Girton College, University of Cambridge, who co-authored the study.

In 1896, the Medieval scholar M.R. James made a breakthrough on this literary cold case when he discovered a fragment of the Song of Wade in the Humiliamini sermon, which is part of a compendium that dates back to the 12th century. James brought the text to his colleague Israel Gollancz, a philologist with expertise in early English literature, and together they worked on a translation.

It is “the only surviving fragment” of the Song of Wade, said co-author Seb Falk, fellow in history and philosophy of science also at Girton College. “Obviously it had been around already for a while by the time this was written because it's part of the culture. The person who writes this sermon clearly expects his listeners to understand it and to know what he's talking about.”

James and Gollancz “knew there was no surviving text and they understood what they were looking at,” added Wade. “This was big news. It made the papers in 1896.”

But while the 19th-century scholars recovered the sermon, their translation only deepened the mystery of the enigmatic text. For instance, references to “elves” and “sprites” in the translation suggest that the Song of Wade falls into a genre of fantastical epics about supernatural monsters. But when Chaucer references Wade in his works Troilus and Criseyde and The Merchant’s Tale, he places the character in a totally different tradition of chivalric romances, which are rich with metaphors but typically favor more grounded scenarios.

Chaucer’s mentions of Wade have perplexed scholars for centuries: in 1598, for instance, an early Chaucer editor named Thomas Speght wrote: “Concerning Wade and his [boat] called Guingelot, as also his strange exploits in the same, because the matter is long and fabulous, I passe it over.” In other words, Speght didn’t even try to decipher what Chaucer meant with his references to Wade.

This punt has become legendary in literary circles. “F. N. Robinson wrote in 1933 that Speght’s comment ‘has often been called the most exasperating note ever written on Chaucer’, and Richard Firth Green observed that Speght’s note ‘has caused generations of scholars to tear their hair out,’” Wade and Falk write in their new study.  

In 1936, the scholar Jack Bennett "supposed that there is ‘probably no better known crux in Chaucer than the tale of Wade,’” the pair added.

A few years ago, Wade and Falk set out to see if they could shed light on this famous and persistent riddle. Like so many good ideas, it began over a lunchtime conversation. From there, the team slowly and methodically worked through the sermon, scrutinizing each letter and rune.  

“Just trying to decipher the thing took quite a lot of work—transcribing it and then making an initial translation,” Falk said. “But I started realizing there's some really interesting material here from my point of view, as a historian of science, with lots of animals mentioned, both in Middle English and in Latin.”

As the pair worked through the text, they began to suspect that the scribe who originally copied the work may not have been very familiar with Middle English, leading to some transcription errors with certain runes. In particular, they found that the longstanding translations of “elves” and “sprites” were, in their view, more likely to be “wolves” and “sea snakes.” This transforms a key Song of Wade passage, “Some are elves and some are adders; some are sprites that dwell by waters,” to:

“Some are wolves and some are adders; some are sea-snakes that dwell by the water.”

It may seem like a subtle shift, but it is a major sea change for the interpretation of the text. The switch to animals, as opposed to supernatural beings, suggests that the preacher who wrote the sermon was using animals as metaphors for human vices and behaviors—a reading that provides a much better fit for the overall sermon, and at last explains why Chaucer viewed Wade in the tradition of chivalry.

“It became pretty clear to us that the scribe had probably made some kind of a mistake because he was used to writing Latin rather than English,” Falk said. “We're not here to say that people who read it differently previously were stupid to miss it, but I think by looking at it from the outside in, we got a different perspective.”

“It then radically changes the meaning of the passage from being about monsters to being about animals, and therefore, changes it from being a piece about mythical beasts to a piece about courtly romance,” he added.

In addition to relieving centuries-old headaches over the legend, Wade and Falk also speculate that the sermon was originally written by English poet and abbot Alexander Neckam (1157–1217), based on its style and context clues. 

For the researchers, the thrill of the discovery lies not only in decoding the fragment, but in recovering a missing piece of cultural memory. While their new translation and attribution to Neckam are both tentative and may be disputed by other scholars, the study still opens a window into a long-lost legend and shows how fresh eyes can uncover insights in even the most perplexing fragments.

“By putting a completely different slant on it and, we think, understanding it properly, we have come much closer to the true meaning of the Wade legend,” Falk said. “Now, obviously we've only got three lines of this presumably much longer poem, and therefore, we can't pretend that we understand the thing in full, but I think we can understand it much better than we ever have before.”

“It reminds us that as time moves on, there's always the possibility of generational amnesia, of forgetting things, of losing things, and that when you have a chance to get a little bit back of something that humanity has lost, or that culture has lost, it's a really exciting moment,” Wade concluded.

Steam Bends to Payment Processors on Porn Games

Steam, the dominant digital storefront for PC games operated by Valve, updated its guidelines to forbid “certain kinds of adult content” and blamed restrictions from payment processors and financial institutions. The update was initially spotted by SteamDB.info, a platform that tracks and publishes data about Steam, and reported by the Japanese gaming site Gamespark.

The update is yet another signal that payment processors are becoming more vigilant about which online platforms hosting adult content they’ll provide services to, and another clear sign that they are currently the ultimate arbiters of what kind of content can be made easily available online.

Steam’s policy change appears under the onboarding portion of its Steamworks documentation for developers and publishers. The 15th item on a list of “what you shouldn’t publish on Steam” now reads: “Content that may violate the rules and standards set forth by Steam’s payment processors and related card networks and banks, or internet network providers. In particular, certain kinds of adult only content.”

It’s not clear when exactly Valve updated this list, but an archive of this page from April shows that it only had 14 items then. Other items that were already on the list included “nude or sexually explicit images of real people” and “adult content that isn’t appropriately labeled and age-gated,” but Valve did not previously mention payment processors specifically. 

"We were recently notified that certain games on Steam may violate the rules and standards set forth by our payment processors and their related card networks and banks," Valve spokesperson Kaci Aitchison Boyle told me in an email. "As a result, we are retiring those games from being sold on the Steam Store, because loss of payment methods would prevent customers from being able to purchase other titles and game content on Steam. We are directly notifying developers of these games, and issuing app credits should they have another game they’d like to distribute on Steam in the future."

Valve did not respond to questions about where developers might find more details about payment processors’ rules and standards. 

SteamDB.info, which also tracks when games are added or removed from Steam, noted that many adult games have been removed from Steam in the last 24 hours. Sex games, many of which are of very low quality and sometimes include very extreme content, have been common on Steam for years. In April, I wrote about a “rape and incest” game called No Mercy, which the developers eventually voluntarily removed from Steam after pressure from users, media, and lawmakers in the UK. The majority of games I saw that were removed from Steam recently revolve around similar themes, but we don’t know whether they were removed by the developers or by Valve, or, if Valve removed them, whether it was because of the recent policy change. Games are removed from Steam every day for a variety of reasons, including expired licensing deals or developers no longer wanting to support a game.

However, Steam’s policy change comes at a time when we’ve seen increased pressure from payment processors around adult content. We recently reported that payment processors have forced two major AI model sharing platforms, Civitai and Tensor.Art, to remove certain adult content.

Update: This story has been updated with comment from Valve. 

Podcast: The AI Exodus Begins

We start this week with a series of articles from Emanuel about a crackdown in the AI industry. After the break, Sam tells us about the ‘Save Our Signs’ campaign which hopes to preserve the history of national parks. In the subscribers-only section, Jason rants about how AI will not save the media industry. 

Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version too. It will also be in the show notes in your podcast player.

The Real Future of AI Is Ordering Mid Chicken at Bojangles

Yesterday I ordered my lunch from an AI operating a drive-thru. It was fine. Banal. Boring even. A new experience that I think will become routine in the future.

The AI drive-thru operator isn’t cutting edge tech deployed in an upscale market to win over high value consumers. I live at the edge of a South Carolina city with a little more than 140,000 people. A booming metropolis with the best and the finest, it is not.

There are a lot of local fast food fried chicken joints here, and one of them is Bojangles. It’s mid. Better than KFC and not as good as Popeyes, Bojangles is fine if you’re hungry, but you’ll forget the meal as soon as it’s done and you’ll never yearn for it. Last year the restaurant said it would deploy an AI agent at its drive-thru windows. It’s called, I shit you not, Bo-Linda, and it’s made by the Israeli tech firm Hi-Auto.

According to the Bojangles website, “Bo-Linda™ can take guest orders 96+% of the time with no human intervention,” and “improve overall satisfaction by offloading order taking from team members and providing a consistent guest experience.”

When Bo-Linda finally arrived in South Carolina, I went to see what the fuss was about. It was crushingly dull. A preview of a time in the near future, I think, when the AI bubble retracts and the agents are common. It took my order with an efficiency that, I’ll be honest, is not typical of the typical fast food worker. The worst part was its constant attempts to up-sell me.

“Do you want to upgrade your drink to our new water-melon iced tea?” It asked.

“No thank you.”

“Would you like to add our new peach cobbler for $1.99?”

“No thank you.”

“May I get you anything else?”

“No, that’s it.”

“Would you like to round up for military scholarships?”

“No thank you.”

“You’re welcome. Thank you. Your total is $10.89.”

When 404 Media founder Joseph Cox watched the video of my interactions, he made fun of my “no thank yous.” What can I say? There’s an ingrained and often stifling politeness that’s bred into us in the American South. Even though I knew I was talking to a machine, I couldn’t not be nice to it.

My thought in the immediate aftermath was that the whole thing was painless. My order wasn’t complicated, but it was correct. The machine never stumbled over itself or asked for clarification. It knew what I wanted and the humans at the window gave it to me. A few conversations with friends and a quick scan of social media in the area show that other people have had much the same interactions with Bo-Linda.

The drive-thru AI, much like the chicken it sold me, is fine. Forgettable.

It was later, sitting at home and doing a little research for the story, that concerns popped up. OpenAI CEO Sam Altman has said that saying “please” and “thank you” to ChatGPT has cost the company tens of millions of dollars. How much water and energy had I burned being polite to Bo-Linda the chatbot?

Sometimes it feels like the answers to these questions don’t matter. We’re barreling forward into the AI future, whether we like it or not. Data centers are springing up across America and nuclear power plants are coming back online, so Bojangles can make a little more money and so people in the drive-thru can feel a little less friction before eating their meal.

This is how a new technology takes over, what it feels like right before it becomes ubiquitous. One day you wake up and the cameras are everywhere, able to recognize your face and chart your movements across the city you live in. One day you look up and everyone has their face buried in their phone. It happened by degrees, but so gradually you didn’t notice. There were signs along the way, dangers and warnings.

But mostly, it was fine, as boring and routine as ordering chicken at a drive-thru.

3D Printing Patterns Might Make Ghost Guns More Traceable Than We Thought

So-called 3D-printed ghost guns are untraceable firearms that can be assembled at home. But cutting-edge work from a forensic expert in California and researchers at the University of Oklahoma may soon show that investigators can trace a 3D-printed object to the specific printer that made it.

Weapons manufactured using 3D printers have been a subject of Biden-era legislation and recent Supreme Court scrutiny. It’s possible to download the blueprints for a firearm and build it in your home. There’s no serial number to track and no store to scrutinize your purchase. Luigi Mangione allegedly used a ghost gun to assassinate UnitedHealthcare CEO Brian Thompson.

The Hyperpersonalized AI Slop Silo Machine Is Here

For a while, I have said that the AI slop endgame, for social media companies, is creating a hyper-personalized feed full of highly specific content about anything one could possibly imagine. Because AI slop is so easy to make and social media algorithms are so personalized, Facebook, Instagram, TikTok, or YouTube can feed users anything the platforms perceive them to possibly want. This means that AI slop makers are exploring ever more niche areas of content.

Case in point: Facebook AI slop about the horrific and deadly Texas flood. Topical AI content about disasters, war, current events, and news stories is at this point so commonplace that it is now sadly barely notable, and AI-powered “misinformation” about horrible events is all over every social media feed I can think of. But as we document our descent into this hellhole, I thought some AI slop surfaced on Bluesky by Christina Stephens was particularly notable:

Saving the Lost Silent Zuckerberg Interview With the Amazing Power of AI

Yesterday, Silicon Valley trade publication The Information launched TITV, a live-streaming news program. For its first episode, the company’s founder Jessica Lessin interviewed Mark Zuckerberg about his recent attempts to hire AI executives away from other companies for huge sums of money.

It is impossible to say what actually happened in the interview, however, because there were audio problems with the livestream—the Zuckerberg segment was completely silent, giving us a minute and 50 seconds of video of Zuckerberg’s face moving with no sound coming out.

In recent weeks, however, Zuckerberg has really ramped up his rhetoric about “superintelligence,” throwing hundreds of millions of dollars at AI researchers in attempts to hire them at Meta, and beginning work on massive data centers around the country to power his AI. Zuckerberg and Meta have talked endlessly about AI as a game changer, while being unable or unwilling to keep AI-generated misinformation and slop off of Meta platforms.

Meanwhile, there has been increased discussion about the possible power of using AI for journalism, which we have written about extensively. I saw the silent interview with Zuckerberg as both an opportunity to check out the awesome abilities of Meta’s AI, and to test AI tools at a task some of them are supposedly good at—reading lips. If these technologies worked as well as their creators have promised, it might be possible to provide the world with information from The Lost Mark Zuckerberg Interview.

Here is how my experiment went.

Because Zuckerberg was obviously talking about Meta AI, I thought I would try his tool first. I took video from the interview and asked Meta AI to lip read it to tell me what they were saying:

“I’d love to help! However, I’m a text-based AI and don’t have capability to visually lip read videos,” Meta AI said. It then suggested that I “find a professional lip reader or transcription service that specializes in lip reading videos” or “use automated lip reading software.” 

I then tried to upload still images to see if Meta AI could parse them:

When I uploaded additional images, Meta AI was also not able to provide any information about what was being said.

I then went to ChatGPT, because Zuckerberg is reportedly offering pay packages of up to $300 million to OpenAI staffers to come work at Meta. I uploaded the 1:50 video and ChatGPT told me “the video processing took too long and timed out.” I then uploaded a 25 second clip and it told me “the system is still timing out while trying to extract frames.” I then asked it to do the first five seconds and it said “even with the shorter clip and smaller scope (first 5 seconds), the system timed out.” I then asked for it to extract one single frame, and it said “it looks like the system is currently unable to extract even a single frame from the video file.” ChatGPT then asked me to take a screenshot of Zuckerberg. I sent it this: 

And ChatGPT said “the person appears to be producing a sound like ‘f’ or ‘v’ (as in ‘video’ or ‘very’),” but that “possibly ‘m’ or ‘b,’ depending on the next motion.” I then shared the 10 frames around that single screenshot, and ChatGPT said “after closely analyzing the progression of lip shapes and facial motion,” the “probable lip-read phrase” was “This is version.” I then uploaded 10 more frames and it said the “full phrase so far (high confidence): ‘This version is just.’”

I then decided to try to extract every frame from the video and upload it to ChatGPT.

I went to a website called frame-extractor.com and cut the video into 3,000 frames. After it had processed 700 of them, I tried to upload them to ChatGPT and it did not work. I then decided I would go 10 frames at a time from the beginning of the clip. Even though I sent an entirely different portion of the video and told ChatGPT we were starting from a different part of the video, it still said that the beginning of the video said “this version is.” I continued uploading frames, 10 at a time. These frames included both Lessin and Zuckerberg, not just Zuckerberg.
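
As an aside, the frame-pulling step itself is easy to reproduce locally rather than through a website. Below is a minimal sketch using OpenCV; the file name, output folder, and the numbers in the comments are illustrative assumptions, not the exact settings used for this experiment.

import os

import cv2  # pip install opencv-python


def extract_frames(video_path: str, out_dir: str, step: int = 1) -> int:
    """Save every `step`-th frame of the clip as a JPEG and return how many were written."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)  # typically around 30 fps for a Zoom-style recording
    saved = 0
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of the video
            break
        if index % step == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{index:05d}.jpg"), frame)
            saved += 1
        index += 1
    cap.release()
    print(f"Saved {saved} frames from a clip reported at {fps:.0f} fps")
    return saved


# Hypothetical usage: a 1:50 clip at 30 fps yields roughly 3,300 frames,
# which can then be uploaded to a chatbot in batches of 10.
# extract_frames("zuckerberg_clip.mp4", "frames")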

ChatGPT slowly began to create a surely accurate transcript of the lost audio of this interview: “This version is just that it we built,” ChatGPT said. As I added more and more frames, it refined the answer: “This version is what we’re going to do,” it said. Finally, it seemed to make a breakthrough. “Is this version of LLaMA more powerful than the one we released last year?” the ChatGPT transcript said. It was not clear about who was speaking, however. ChatGPT said "her mouth movements," but then explained that the "speaker is the man on the left" (Lessin, not Zuckerberg, was speaking in these frames). 

I had uploaded 40 of a total of 3,000 frames. Zoom video is usually 30 fps, so those 40 frames cover roughly 1.3 seconds of footage (40 frames ÷ 30 fps). In that span, Lessin and/or Zuckerberg apparently said “Is this version of LLaMA more powerful than the one we released last year?” I then recorded myself saying this phrase at a normal speaking speed, and it took about four seconds. Just a data point.

Saving the Lost Silent Zuckerberg Interview With the Amazing Power of AI
Lipreadtest (video, 4.97 seconds)

I then got an error message from ChatGPT, and got rate-limited because I was uploading too much data. It told me that I needed to wait three hours to try again. 

Saving the Lost Silent Zuckerberg Interview With the Amazing Power of AI

Finally, I did what Meta AI told me to do, and tried a bespoke AI lip reading app. I found one called ReadTheirLips.com, which is powered by Symphonic Labs. This is a tool that people have been trying to use in recent months to figure out what Donald Trump and Jeffrey Epstein were saying to each other in silent b-roll news footage, without much success.

I paid $10 for three minutes’ worth of transcription and asked it to lip read using its “Multiface Detection.” After waiting 10 minutes, I got an error message that said “Transcription failed, no credits have been used, try again later.” I then asked it to focus only on Zuckerberg, and actually got some text. I separately asked it to focus on Lessin.

Here is a transcript of what the AI says they were talking about. It has not been edited for clarity and I have no idea which parts, if any, are accurate:

LESSIN: Thanks for joining us again, TV. We're happy to have you already this morning. News that you've spent even more money with your big announcement about your new supercomputers. We'll get to that, but to start, you've been in huge scale like I.

ZUCKERBERG: Happy TO BE HERE. We're GOING TO TALK A LITTLE BIT ABOUT META'S AI STRATEGY. It's BEEN BUSY, YOU KNOW? I THINK THE MOST EXCITING THING THIS YEAR IS THAT WE'RE STARTING TO SEE EARLY GLIMPSES OF SELF-IMPROVEMENT WITH THE MODELS, WHICH MEANS THAT DEVELOPING SUPERINTELLIGENCE IS NOW.

LESSIN: You HAVE BEEN ON A PLANE OF AI HIRING, WHY AND WHY NOW? 

ZUCKERBERG: Insight, and we just want to make sure that we really strengthen the effort as much as possible to go for it. Our mission with a lab is to deliver personal superintelligence to everyone in the world, so that way, you know, we can put that power in every individual's hand. I'm really excited about it.

LESSIN: I DON'T KNOW, I DON'T KNOW, I DON'T KNOW.

ZUCKERBERG: Than ONE OF THE OTHER LABS YOU'RE DOING, AND YOU KNOW MY VIEW IS THAT THIS IS GOING TO BE SOMETHING THAT IS THE MOST IMPORTANT TECHNOLOGY IN OUR LIVES. IT'S GOING TO UNDERPIN HOW WE DEVELOP EVERYTHING AND THE COMPANY, AND IT'S GOING TO AFFECT SOCIETY VERY WISELY. SO WE JUST WANT TO MAKE SURE WE GET THE BEST FOCUS.

LESSIN: Did YOU FEEL LIKE YOU WERE BEHIND WHAT WAS COMING OUT OF LAW BEFORE I'M NOT ADJUSTING.

ZUCKERBERG: On THIS FROM ENTREPRENEURS TO RESEARCHERS TO ENGINEERS WORKING ON THIS HIDDEN INFRASTRUCTURE, AND THEN OF COURSE WE WANT TO BACK IT UP WITH JUST AN ABSOLUTELY MASSIVE AMOUNT OF COMPUTER RESEARCH, WHICH WE CAN SUPPORT BECAUSE WE HAVE A VERY STRONG BUSINESS MODEL THAT THROWS OFF A LOT OF CAPITAL. LET'S TALK ABOUT.

LESSIN: Like THIS SUMMER, PARTICULARLY, YOU SWITCH GEARS A LITTLE BIT.

ZUCKERBERG: I THINK THE FIELD IS ACCELERATING, YOU KNOW, WE KEEP ON TRACK FOR WHERE WE WANT TO BE, AND THE FIELD KEEPS US MOVING FORWARD.

The video ends there, and it cuts back to the studio.

Update: The Information provided 404 Media with several clips (with audio) from Lessin's interview with Zuckerberg, as well as a real transcript of the interview. Here is the real segment of what was said. As you can see, the AI captured the gist of this portion of the interview, and actually did not do too badly:

Lessin: Mark, thanks for joining TITV. We're happy to have you here. Already this morning, [there’s] news that you've spent even more money with your big announcement about your new supercomputers. We'll get to that. But to start, you took a huge stake in ScaleAI. You have been on a blitz of AI hiring. Why, and why now?

Zuckerberg: Yeah, it's been busy. You know, I think the most exciting thing this year is that we're starting to see early glimpses of self-improvement with the models, which means that developing super intelligence is now in sight, and we just want to make sure that we really strengthen the effort as much as possible to go for it. Our mission with the lab is to deliver personal super intelligence to everyone in the world, so that way we can put that power in every individual's hand. And I'm really excited about it. It's a different thing than what the other labs are doing.

And my view is that this is going to be something that is the most important technology in our lives. It's going to underpin how we develop everything at the company, and it's going to affect society very widely. So we just want to make sure that we get the best folks to work on this, from entrepreneurs to researchers to engineers working on the data and infrastructure.

And then, of course, we want to back up with just an absolutely massive amount of compute which we can support, because we have a very strong business model that throws off a lot of capital.

Lessin: Did you feel like you were behind coming out of Llama 4? It seems like this summer, in particular, you switched gears a little bit.

Zuckerberg: I think the field is accelerating, you know, we keep on having goals for where we want to be. And then the field keeps on moving faster than we expect.

The rest of the interview is available at The Information.

Immigration Raid Tracking App ‘ICE Block’ Keeps Your Data Private, Researcher Finds

Immigration Raid Tracking App ‘ICE Block’ Keeps Your Data Private, Researcher Finds

ICE Block, an app that lets users warn others about the location of ICE officers, and which for a short while topped the App Store’s social media chart, does protect users’ privacy and doesn’t share their location with third parties, according to a recent analysis from a security researcher. ICE Block already claimed that it did not collect any data through the app; the analysis corroborates that claim.

“It’s not uploading your location at all, when you make a report that report isn’t associated with your device in any way, and there are no third party services that it talks to or sends data to,” Cooper Quintin, senior public interest technologist at the Electronic Frontier Foundation (EFF), who analyzed the ICE Block app, told 404 Media.

Hugging Face Is Hosting 5,000 Nonconsensual AI Models of Real People

Hugging Face Is Hosting 5,000 Nonconsensual AI Models of Real People

Hugging Face, a company with a multi-billion dollar valuation and one of the most commonly used platforms for sharing AI tools and resources, is hosting over 5,000 AI image generation models designed to recreate the likeness of real people. These models were all previously hosted on Civitai, an AI model sharing platform that 404 Media reporting has shown was used to create nonconsensual pornography, until Civitai banned them due to pressure from payment processors. 

Users downloaded the models from Civitai and reuploaded them to Hugging Face as part of a concerted community effort to archive the models after Civitai announced in May that it would ban them. In that announcement, Civitai said it would give the people who originally uploaded the models “a short period of time” before they were removed. Civitai users began organizing an archiving effort on Discord earlier in May, after Civitai indicated it had to make content policy changes due to pressure from payment processors, and the effort kicked into high gear when Civitai announced the new “real people” model policy. 

At the time of writing, the Discord channel has hundreds of members who are still finding models that have been removed from Civitai and reuploading them to Hugging Face. Some users have even shared a piece of software, also hosted on Hugging Face, that automatically uploads Civitai models to Hugging Face in batches. 

Hugging Face did not respond to multiple requests for comment. It also did not respond to specific questions about whether and how it plans to moderate these models, given that they were previously hosted on a platform primarily used for AI-generated pornography and that our reporting shows they were used to create nonconsensual pornography. 

I found the Civitai models of real people that were reuploaded to Hugging Face thanks to a paper I covered in which researchers scraped Civitai. The paper showed that the platform was primarily used for pornographic content, and that it deleted at least 50,000 AI models designed to recreate the likeness of real people once it changed its policy in May. The researchers, Laura Wagner and Eva Cetinic from the University of Zurich, provided me with a spreadsheet of all the deleted models, which included the name of each model (almost always the name of a female celebrity or lesser-known internet personality), a link to where it was previously hosted on Civitai, and the SHA256 hash Civitai uses to identify all the models hosted on its site. 

The people who are reuploading the Civitai models to Hugging Face are seemingly trying to hide the purpose of those models. On Hugging Face, these models have generic names and URLs like “LORA” or “Test model.” Users can’t tell that these models are used to generate the likeness of real people just by looking at their Hugging Face pages, nor would they be able to find them by searching for the names of celebrities on Hugging Face. In order to find them, users can go to a separate website the Civitai archivists created. There, they can enter the name of a Civitai model, the link where it used to be hosted on Civitai before it was deleted, or the model’s SHA256 hash. All of these will lead users to a page that explains what the model is, shows its name, and includes several images showing the kind of output it can generate. At the bottom of that page is a link to one or more Hugging Face “mirrors” where the model has been reuploaded. 

By using Wagner’s and Cetinic’s data and entering it into this Civitai archive site, I was able to find the Civitai models hosted on Hugging Face. 
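For anyone curious about the mechanics: because the SHA256 hash is what ties a reuploaded file back to its deleted Civitai listing, checking whether a given file is one of the deleted models comes down to hashing it and looking the result up against the hashes in the spreadsheet. Here is a minimal sketch of that check in Python; the filenames are hypothetical.

```python
# Minimal sketch: hash a downloaded model file and check it against a list of
# SHA256 hashes of deleted Civitai models. Filenames here are hypothetical.
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1024 * 1024) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# One hash per line, e.g. exported from the researchers' spreadsheet.
with open("deleted_civitai_hashes.txt") as f:
    deleted_hashes = {line.strip().lower() for line in f if line.strip()}

model_hash = sha256_of_file("some_model.safetensors")
if model_hash.lower() in deleted_hashes:
    print("This file matches a model deleted from Civitai.")
else:
    print("No match.")
```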

Hugging Face’s content policy bans “Unlawful, defamatory, fraudulent, or intentionally deceptive Content (e.g., disinformation, phishing, scams, inauthentic behavior),” as well as “Sexual Content used for harassment, bullying, or created without explicit consent.” Models that generate the likeness of real people don’t have to be used for unlawful or defamatory ends, and they only produce sexual content if people choose to use them that way. There’s nothing in Hugging Face’s content policy that explicitly forbids AI models that recreate the likeness of real people. 

However, the Hugging Face Ethics & Society group, which is “committed to operationalizing ethics at the cutting-edge of machine learning,” has identified six “high-level categories for describing ethical aspects of machine learning work,” one of which is that AI should be “Consentful.”

“Consentful technology supports the self-determination of people who use and are affected by these technologies,” the company explains. Examples of this, the company says, include “Avoiding extractive, chauvinist, ‘dark,’ and otherwise ‘unethical’ patterns of engagement.”

Other AI models that recreate the likeness of real people could conceivably not violate any of these principles. For example, two of the deleted Civitai models that were reuploaded to Hugging Face were designed to recreate the likeness of Vladimir Putin, which in theory people might use to mock or criticize the Russian president. However, the vast majority of the models are of female celebrities, which my reporting has shown are being used to create nonconsensual sexual content, and which were deleted en masse from Civitai because of pressure from payment processors who didn’t want to be associated with that type of media. 

a16z-Backed AI Site Civitai Is Mostly Porn, Despite Claiming Otherwise

a16z-Backed AI Site Civitai Is Mostly Porn, Despite Claiming Otherwise

In the two years that I’ve been reporting about Civitai, a platform for sharing AI image generation models that has been instrumental in the production of AI generated non-consensual porn, Civitai has consistently argued that the amount of adult content on the site has been overstated. But new research shows that, if anything, the amount of adult content on Civitai has been underestimated.

In their paper, “Perpetuating Misogyny with Generative AI: How Model Personalization Normalizes Gendered Harm,” researchers Laura Wagner and Eva Cetinic from the University of Zurich studied more than 40 million user-generated images on Civitai and over 230,000 models. They found “a disproportionate rise in not-safe-for-work (NSFW) content and a significant number of models intended to mimic real individuals” on the platform.

“What began as a promising creative breakthrough in TTI [text-to-image] generation and model personalization, has devolved into a pipeline for the large-scale production of sensational, biased, and abusive content. The open-source nature of TTI technologies, proclaimed as a democratizing force in generative AI, has also enabled the propagation of models that perpetuate hypersexualized imagery and nonconsensual deepfakes,” Wagner and Cetinic write in their paper. “Several indicators suggest a descent into a self-reinforcing feedback loop of platform decay. These include a dramatic increase in NSFW imagery, from 41% to 80% in two years, as well as the community’s normalization of deepfakes, misogynistic tropes, and other exploitative content.”

To visualize just how dominant adult content was on Civitai, check the chart below, which shows the distribution of images by “NSFW browsing levels” over time. These categories, which are inspired by the Motion Picture Association film rating system and are used by Civitai to tag images, show that adult content was always a significant portion of all images hosted on the site, but that the share of “overtly sexual, or disturbing” content grew as the site became more popular, and exploded starting in 2024. The chart is based on Civitai’s own numbers and categorization system, which the researchers scraped from the site. It likely undercounts the number of explicit images on the site because, as both the researchers and I have observed, not all adult content is tagged as such. 

a16z-Backed AI Site Civitai Is Mostly Porn, Despite Claiming Otherwise

In December 2023, Civitai CEO Justin Maier told VentureBeat that “less than 20% of the posted content is what we would consider ‘PG-13’ or above.” When I reached Maier for comment for this article, he told me that “The VentureBeat figure cited a December 2023 snapshot, when adult posts were a minority. The mix shifted in 2024 as many NSFW creators migrated from platforms that no longer allow that content.”

However, the data in the paper shows that by October of 2023, 56 percent of all images on the site were tagged as “NSFW” and were designated by Civitai as “PG-13” or above.

In May, Civitai announced it was banning all AI image generation models designed to recreate the likeness of real people because of pressure from payment processors. Since the authors of the paper were already tracking hundreds of thousands of models hosted on Civitai, they could easily see which models were removed, giving us a first clear look at how common those models were. 

Overall, they saw that more than 50,000 models designed to AI-generate the likeness of real people were removed because of the ban. These are models that Civitai itself tagged as “person of interest,” the tag it uses to indicate a model recreates the likeness of a real person, so the actual number of models depicting real people is likely higher. 

It’s hard to say whether the most popular AI models on Civitai were popular just because they were used to generate explicit images, because people could use models tagged as NSFW to generate non-nude images and vice versa. For example, according to the data collected by the researchers, the most popular AI image generation model on Civitai was EasyNegative, with almost 600,000 downloads. It’s not tagged or promoted as a model for generating pornography, but images that users created with it, which are shared on its Civitai model page, show it is commonly used that way. 

Other very popular models on Civitai are clearly designed to generate explicit images. The sixth most popular model with 360,000 downloads is Nudify XL: Better Bodies, which its creator says is for “nude female frontals.” A model called Realistic Vaginas - God Pussy 1 had 256,000 downloads. The POV Squatting Cowgirl LoRA model, which Civitai tagged as a “sex” model, had 189,000 downloads. 

a16z-Backed AI Site Civitai Is Mostly Porn, Despite Claiming Otherwise

The authors of the paper also conducted a deeper analysis of the 40,000 most downloaded models on Civitai. Among the 11,151 models for which they could extract textual training data, meaning text that indicates what kind of images the models were trained on, they found “specifically abusive terms”: 5.6 percent included the keywords “loli” (558 models) and/or “shota” (69 models), Japanese terms commonly used to refer to sexualized depictions of pre-pubescent girls and boys, and about 2.1 percent (189 models) included the keyword “rape.”

The data shows with clear numbers what we have long argued at 404 Media: adult content drives technological innovation and early adoption, and this has been especially true in the world of generative AI. Despite its protestations to the contrary, Civitai, one of the fastest-growing platforms in that industry and one backed by the influential Silicon Valley venture capital firm Andreessen Horowitz, grew because of explicit content, much of which was nonconsensual. 

“The rapid rise of NSFW content, the over-representation of young female subjects, and the prioritization of sensational content to drive engagement reflect an exploitative, even abusive dynamic,” the researchers wrote. “Additionally, structural discrimination embedded in today’s open-source TTI tools and models have the potential to cause significant downstream harm as they might become widely adopted and even integrated into future consumer applications.” 

Adult content driving innovation and early adoption doesn’t have to be harmful. As the researchers write, it’s the choices platforms like Civitai make that give us these outcomes. 

“The contingent nature of technology, shaped by online communities, platform operators, lawmakers, and society as a whole, also creates opportunities for intervention,” they write. “Model-sharing hubs and social media platforms both have the capacity to implement safeguards that can limit the spread of abusive practices such as deepfake creation and abusive imagery.”

Hackers Can Remotely Trigger the Brakes on American Trains and the Problem Has Been Ignored for Years

Hackers Can Remotely Trigger the Brakes on American Trains and the Problem Has Been Ignored for Years

Many trains in the U.S. are vulnerable to a hack that can remotely lock a train’s brakes, according to the U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the researcher who discovered the vulnerability. The railroad industry has known about the vulnerability for more than a decade but only recently began to fix it.

Independent researcher Neil Smith first discovered the vulnerability, which can be exploited over radio frequencies, in 2012. 

“All of the knowledge to generate the exploit already exists on the internet. AI could even build it for you,” Smith told 404 Media. “The physical aspect really only means that you could not exploit this over the internet from another country, you would need to be some physical distance from the train [so] that your signal is still received.”

Swedish Prime Minister Pulls AI Campaign Tool After It Was Used to Ask Hitler for Support

Swedish Prime Minister Pulls AI Campaign Tool After It Was Used to Ask Hitler for Support

The Moderate Party of Sweden has removed an AI tool from its website after people used it to generate videos of Prime Minister Ulf Kristersson asking Adolf Hitler for support. The tool allowed users to generate videos of Kristersson holding an AI-generated message in an attempt to promote the candidate ahead of the general election in Sweden next year.

Swedish television station TV4 used the tool to generate a video of Kristersson on a newspaper above the headline “Sweden needs Adolf Hitler” after noticing that the tool had no guardrails or filters.

In the video TV4 generated using the website, Kristersson makes his pitch over stock footage of old people embracing. A woman runs through a field, the camera focusing on flowers while the sun twinkles in the background. Cut to Kristersson. He turns a blue board around. “We need you, Adolf Hitler,” it says.

The Moderates removed the AI system from their website, but the videos of Kristersson asking Hitler to join the Moderates remain on social media and TV4’s website.

In an attempt to bolster the party’s ranks, the Moderates launched a website that allowed users to generate a custom video of Kristersson asking someone to join the party. The idea was probably to have party members plug in the names of friends and family members and share what appeared to be a personalized message from the PM asking for their support.

In the video, Kristersson stands in front of stairs, makes his pitch, and turns around a blue tablet that bears a personalized message to the viewer. The system apparently had no guardrails or filters and Swedish television station TV4 was able to plug in the names Adolf Hitler, Ugandan dictator Idi Amin, and Norwegian mass murderer Anders Breivik.

The Moderate Party did not return 404 Media’s request for a comment about the situation, but told TV4 it shut down the site as soon as it learned people were using it to generate messages with inappropriate names.

The Moderate Party’s AI-generated video was simple. The party filmed the PM holding a blue board it could easily overlay with input from a user and then used AI to generate the fake newspaper and a few other slides. Preventing people from typing in “Hitler” or “Anders Breivik” would have been as simple as maintaining a list of prohibited names, words, and phrases, something that every video game and online service does. Users are good at bypassing guardrails, but the Moderates’ AI tool appeared to have none.
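For a sense of how little code such a guardrail takes, here is a minimal sketch of a denylist check in Python. The list and matching logic are illustrative only; a real filter would need far more coverage, handling of misspellings, and human review.

```python
# Minimal sketch of the kind of denylist check the tool apparently lacked.
# The names here are illustrative; a real filter would need a much longer list,
# normalization for misspellings, and human review for edge cases.
import unicodedata

PROHIBITED_NAMES = {
    "adolf hitler",
    "idi amin",
    "anders breivik",
}

def normalize(text: str) -> str:
    # Strip accents, collapse whitespace, lowercase.
    text = unicodedata.normalize("NFKD", text)
    text = "".join(c for c in text if not unicodedata.combining(c))
    return " ".join(text.lower().split())

def is_allowed(name: str) -> bool:
    cleaned = normalize(name)
    return not any(bad in cleaned for bad in PROHIBITED_NAMES)

print(is_allowed("Grandma Greta"))   # True
print(is_allowed("Adolf  Hitler"))   # False
```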

Users making content you don’t want to be associated with is one of the oldest and best-known problems in AI. If you release a chatbot, generative photo system, or automated political greeting generator, someone will use it to reference the Nazis or make nonconsensual porn.

When Microsoft launched Tay in 2016, users turned it into a Hitler-loving white nationalist in a few hours. Eight years later, another Microsoft AI product had a loophole that let people make AI-generated nudes of Taylor Swift. Earlier this year, Instagram’s AI chatbots lied about being licensed therapists.

'Deportation Tok' Is Taking Off

'Deportation Tok' Is Taking Off

As immigration raids roll out across the U.S., those affected are processing the experience in the normal 2025 way—via vertical video. 

Across social media, people are uploading clips with uncanny-valley titles like “A normal day for me after being deported to Mexico” and “3 things I wish I knew before self-deporting from the US!” These posts have the familiar shape, voiceovers, and fonts of influencer content, but their dystopian subject matter reflects the whiplash of the current historical moment. 

While doomscrolling last week, I came across a clip that caught my eye. A man sits on the bottom bunk of a metal bed, staring down at the floor, with the caption “Empezando una nueva vida después de que me Deportaran a México” (“Starting a new life after being Deported to Mexico”).
