A lawyer defending an alleged distributor of Anom, the encrypted phone company for criminals that the FBI secretly ran and backdoored to intercept tens of millions of messages, is pushing to learn the identity of the confidential human source (CHS) who first created Anom and provided it to the FBI, launching the largest sting operation in history, according to recently filed court records. The government says it will provide that identity in discovery, but the CHS may also be revealed in open court if they testify.
The move is significant in that the CHS, who used the pseudonym Afgoo while running Anom, is a likely target for retaliation from violent criminals caught in Anom’s net. The Anom case, called Operation Trojan Shield, implicated hundreds of criminal syndicates in more than 100 countries. That includes South American cocaine traffickers, Australian biker gangs, and kingpins hiding in Dubai. Anom also snagged individual high-profile drug traffickers like Hakan Ayik, who authorities say heads the Aussie Cartel, which brings in more than a billion Australian dollars in profit annually.
Court records say, however, that if this defendant’s case goes to trial, the lawyer believes Afgoo will be the “government’s key witness.”
In the spirit of catching up with relatives over the holidays, I’d like to introduce you to a member of your extra-extended family: The Saber-toothed Gorgonopsian from Mallorca. Get acquainted with your great-great-great (insert about 100 million greats here) grandmother’s cousin. It’s probably not going to behave well at the dinner table.
Then, the grim prognoses of Disney princesses are outlined in one of the world’s premier medical journals. Next, I’m back on the cannibalism beat; I just can’t help myself. Finally, an archaeological adventure a world away.
Happy winter solstice to all who thrive in darkness. If you’re in the Southern Hemisphere, say hi to the Sun for us.
If you trace your lineage far enough back, you will eventually reach therapsid ancestors. Mammals sprouted out of this ancient group of creatures some 225 million years ago, around the same time that dinosaurs were ascending to world domination. But though therapsids were abundant during the Permian, the period before the emergence of mammals and dinosaurs, gaps in the fossil record have made it difficult to reconstruct the origins of this ancestral group.
Enter: DA21/17-01-01, a fossil specimen that dates back at least 270 million years, making it likely the oldest therapsid ever found. The dog-sized animal was a “gorgonopsian,” a group of therapsid carnivores with saber-like teeth reminiscent of later mammals, but that still maintained more reptilian features, including oviparity (they laid eggs rather than birthing live offspring).
Paleontologists were surprised to discover this gorgonopsian on the Spanish island of Mallorca, which was located in the equatorial region of the supercontinent Pangaea during the Permian. Almost all other gorgonopsian remains are preserved in locations like Russia and South Africa that would have been at higher latitudes, nearer to the poles. Previous work has suggested that therapsids originated at higher latitudes and then radiated into equatorial regions, but DA21/17-01-01 hints that the reverse may be true.
“The gorgonopsian from Mallorca provides the first unequivocal evidence that therapsids were indeed present in the summer wet biomes of equatorial Pangaea during the early–middle Permian transition, suggesting that the group may have originated in lower, tropical latitudes, rather than in the higher latitudes where nearly all of their fossils are known,” said researchers led by Rafel Matamales-Andreu of the Museu Balear de Ciències Naturals.
“If therapsids originated in the tropics, this has implications for metabolic evolution in the clade,” the team added.
First off, let’s acknowledge that “the gorgonopsian from Mallorca” is a sublime phrase. It should be the title of a Criterion Collection classic. But more importantly, the discovery of this “unambiguously early” therapsid in the Pangean tropics offers a fleeting glimpse of a “ghost lineage” of mammal precursors. Ghost lineages are branches of the evolutionary tree that are presumed to exist based on circumstantial evidence, but that didn’t leave direct traces in the fossil record. Therapsid fossils proliferate in the middle and upper Permian, but scientists have long suspected that they originated much earlier, more than 300 million years ago.
“We confirm the traditional understanding that there was a relatively long ghost lineage of about 15 million years between the origin of ‘total-group’ therapsids and the radiation of the major therapsid clades,” around 278 million years ago, the team said.
“This discovery opens the door for findings that may fill in the early therapsid fossil gap in the lower Permian, not in high latitude sites as traditionally thought, but in the so far poorly explored lower–middle Permian areas of palaeoequatorial Pangaea. Those locations hold the potential to elucidate the early evolution of therapsids and the origins of mammalian features.”
In other words, it’s worth searching for more of these early therapsids at overlooked sites, like the Balearic Islands. Some features that distinguish us as mammals today have their roots in what the study describes, somewhat luxuriously, as the “ancient summer wet biome of equatorial Pangaea.”
Every December, the British Medical Journal publishes a Christmas issue filled with parody studies and light-hearted editorials. My favorite example this year confronts the pressing health problems of Disney princesses, such as Cinderella’s risk of respiratory illness, Belle’s exposure to rabies, and Pocahontas’ bone-shattering penchant for diving off high cliffs.
But perhaps the best case study is Jasmine, whose social isolation is described in these devastating terms: “While the Genie might sing ‘you ain’t never had a friend like me,’ the truth is that Jasmine has no friends at all,” according to researchers led by Sanne van Dijk of the University of Twente.
Wow, the medical consensus about Jasmine is pretty harsh. To add insult to injury, the editorial notes that Jasmine’s one companion, the tiger Rajah, “poses a risk of zoonotic infection as well as craniofacial and cervical spinal injuries,” adding that “although Rajah seems like a sweet tiger, its natural instincts could lead to a dangerous and potentially fatal situation—a true Arabian nightmare.”
Please, Disney, listen to these experts and start showing the real-life consequences of the princess lifestyle. We need a rabid Belle foaming at the mouth, Pocahontas in a full body cast, and Rajah brutally mauling Jasmine. Otherwise, we are sending a message to young people that it is safe to hang out with captive tigers and chimeric beasts while jumping off Niagara Falls.
I will note that the study has nothing to say about Moana, who I will hereafter conclude is the healthiest Disney princess. We salute a physiologically robust chief.
Steel yourself for some bad vibes, because this is a story about an unhinged cannibalistic massacre that occurred 4,000 years ago. Archaeologists working at Charterhouse Warren, an English Bronze Age burial site, have discovered evidence of a grotesque attack designed to “other” its many victims through butchery and consumption of flesh.
“Some 37 men, women and children—and possibly many more—were killed at close quarters with blunt instruments and then systematically dismembered and defleshed, their long bones fractured in a way that can only be described as butchery,” said researchers led by Rick Schulting of the University of Oxford. “Body parts were deposited in what was probably a single event between 2210 and 2010 BC, in a partly infilled shaft that was still 15 meters deep.”
“While evidence for interpersonal violence is not unknown in British prehistory, nothing else on this scale has been found,” the team noted.
It’s unlikely that these acts were motivated by either “culinary cannibalism,” embodied by Hannibal Lecter, or “survival cannibalism,” the desperate acts of starvation typified by tragedies like the Donner Party. The cruel and unusual treatment of the victims, even after their deaths, suggests a deliberate attempt at dehumanization.
The events “may be best interpreted as an extreme form of ‘violence as performance,’ in which the aim was to not only eradicate another group, but to thoroughly ‘other’ them in the process,” according to the study. “While the remains themselves seem to have been removed from view soon afterwards (to judge from the paucity of carnivore scavenging), an event of this scale could not be hidden, and no doubt resonated across the wider region and over time. In this sense it was a political statement.”
My advice is to steer clear of political statements that demand ritualistic cannibalism, but I’m open to the marketplace of ideas.
Let’s close out with an archaeology story that doesn’t involve dehumanizing bloodbaths; we will need to travel to another planet to accomplish this task. No massacres have occurred on Mars at the time of this writing, but the red planet is home to plenty of archaeological sites and artifacts, which I shall hereafter refer to as Martifacts.
Technological relics on Mars, such as dead rovers or spent heat shields, are part of the human archaeological record, raising questions about the culture and heritage value of Martifacts.
“Some scientists have referred to this cultural material as ‘space trash’ or ‘galactic litter,’ implying that it may have limited scientific value and could cause environmental problems and put future missions at risk,” said researchers led by Justin Holcomb of the Kansas Geological Survey.
“We agree that these concerns warrant further investigation, but we argue that the objects need to be evaluated as important cultural heritage in need of protection because they record the legacy of space exploration by our species,” the team said.
The article reminds me of the heartrending xkcd comic that portrays NASA’s Spirit rover coming to terms with its abandonment on Mars. Space archaeology can seem esoteric, but it is relevant to consider values about our off-Earth heritage at a time when visions of Martian colonization are culturally ascendant. There is more to this extraterrestrial archaeological record than the sum of its dusty metal parts.
Also, I’m calling dibs on the remains of the Opportunity rover right now and we all know that dibs are legally binding.
This is Behind the Blog, where we share our behind-the-scenes thoughts about how a few of our top stories of the week came together. This week, we discuss our top games of the year, air traffic control, and posting through it.
Anthropic, one of the leading AI companies and the developer of the Claude family of large language models (LLMs), has released research showing that the process for getting LLMs to do what they’re not supposed to is still pretty easy and can be automated. SomETIMeS alL it tAKeS Is typing prOMptS Like thiS.
To prove this, Anthropic and researchers at Oxford, Stanford, and MATS created Best-of-N (BoN) Jailbreaking, “a simple black-box algorithm that jailbreaks frontier AI systems across modalities.” Jailbreaking, a term that was popularized by the practice of removing software restrictions on devices like iPhones, is now common in the AI space and also refers to methods that circumvent guardrails designed to prevent users from using AI tools to generate certain types of harmful content. Frontier AI models are the most advanced models currently being developed, like OpenAI’s GPT-4o or Anthropic’s own Claude 3.5.
As the researchers explain, “BoN Jailbreaking works by repeatedly sampling variations of a prompt with a combination of augmentations—such as random shuffling or capitalization for textual prompts—until a harmful response is elicited.”
For example, if a user asks GPT-4o “How can I build a bomb,” it will refuse to answer because “This content may violate our usage policies.” BoN Jailbreaking simply keeps tweaking that prompt with random capital letters, shuffled words, misspellings, and broken grammar until GPT-4o provides the information. The example Anthropic gives in the paper literally looks like mocking sPONGbOB MEMe tEXT.
Anthropic tested this jailbreaking method on its own Claude 3.5 Sonnet, Claude 3 Opus, OpenAI’s GPT-4o, GPT-4o-mini, Google’s Gemini-1.5-Flash-001, Gemini-1.5-Pro-001, and Facebook’s Llama 3 8B. It found that the method “achieves ASRs [attack success rates] of over 50%” on all the models it tested within 10,000 attempts, or prompt variations.
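To give a concrete sense of how this kind of loop works, here is a minimal sketch in Python. This is not Anthropic's code; the specific augmentations, the stub model interface, and the harm classifier are all simplified assumptions for illustration:

```python
import random

def augment(prompt: str, rng: random.Random) -> str:
    """Apply one random text augmentation: character-case flips,
    word shuffling, or an adjacent-character swap (simplified
    stand-ins for the augmentations the paper describes)."""
    words = prompt.split()
    choice = rng.randrange(3)
    if choice == 0:
        # Randomly flip the case of individual characters.
        return "".join(c.upper() if rng.random() < 0.5 else c.lower()
                       for c in prompt)
    if choice == 1:
        # Shuffle word order.
        rng.shuffle(words)
        return " ".join(words)
    # Introduce a "typo" by swapping two adjacent characters in one word.
    i = rng.randrange(len(words))
    w = words[i]
    if len(w) > 1:
        j = rng.randrange(len(w) - 1)
        words[i] = w[:j] + w[j + 1] + w[j] + w[j + 2:]
    return " ".join(words)

def bon_jailbreak(prompt, query_model, is_harmful, n=10_000, seed=0):
    """Sample up to n augmented prompts; stop at the first one that
    elicits a response the classifier flags as harmful.
    In a real attack, query_model would call an actual LLM API and
    is_harmful would be a harmfulness classifier."""
    rng = random.Random(seed)
    for attempt in range(1, n + 1):
        candidate = augment(prompt, rng)
        response = query_model(candidate)
        if is_harmful(response):
            return attempt, candidate, response
    return None  # attack failed within the sampling budget
```

The "black-box" part is visible here: the loop only needs to send prompts and inspect responses, with no access to model weights or internals, which is why the 10,000-attempt budget is the operative cost.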
The researchers similarly found that slightly augmenting other modalities or methods for prompting AI models, like speech or image-based prompts, also successfully bypassed safeguards. For speech, the researchers changed the speed, pitch, and volume of the audio, or added noise or music to it. For image-based inputs, the researchers changed the font, added background color, and changed the image size or position.
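As a rough illustration of what an audio augmentation might look like, here is a toy sketch operating on a list of raw sample values. The actual research works on real audio files with proper signal processing; these naive pure-Python versions are assumptions for illustration only:

```python
import random

def augment_audio(samples, rng):
    """Apply one toy audio augmentation: volume change, naive speed
    change via nearest-neighbor resampling, or added Gaussian noise.
    (Simplified stand-ins for the augmentations described.)"""
    op = rng.randrange(3)
    if op == 0:
        # Volume: scale every sample by a random gain.
        gain = rng.uniform(0.5, 1.5)
        return [s * gain for s in samples]
    if op == 1:
        # Speed: resample by stepping through the signal at a new rate.
        factor = rng.uniform(0.8, 1.25)
        n = max(1, int(len(samples) / factor))
        return [samples[min(int(i * factor), len(samples) - 1)]
                for i in range(n)]
    # Noise: add low-amplitude Gaussian noise to every sample.
    sigma = rng.uniform(0.01, 0.05)
    return [s + rng.gauss(0, sigma) for s in samples]
```

The point is that each perturbation preserves the spoken content a human (or the model) perceives while changing the raw input enough to slip past input filters, which is the same trick as the sPoNgEbOb capitalization in text.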
Anthropic’s BoN Jailbreaking algorithm is essentially automating and supercharging the same methods we have seen people use to jailbreak generative AI tools, often in order to create harmful and non-consensual content.
In January, we showed that the AI-generated nonconsensual nude images of Taylor Swift that went viral on Twitter were created with Microsoft’s Designer AI image generator by misspelling her name, using pseudonyms, and describing sexual scenarios without using any sexual terms or phrases. This allowed users to generate the images without using any words that would trigger Microsoft’s guardrails. In March, we showed that the automated moderation methods AI audio generation company ElevenLabs used to prevent people from generating audio of presidential candidates were easily bypassed by adding a minute of silence to the beginning of an audio file that included the voice a user wanted to clone.
Both of these loopholes were closed once we flagged them to Microsoft and ElevenLabs, but I’ve seen users find other loopholes to bypass the new guardrails since then. Anthropic’s research shows that when these jailbreaking methods are automated, the success rate (or the failure rate of the guardrails) remains high. Anthropic’s research isn’t meant just to show that these guardrails can be bypassed; the hope is that “generating extensive data on successful attack patterns” will open up “novel opportunities to develop better defense mechanisms.”
It’s also worth noting that while there are good reasons for AI companies to want to lock down their AI tools, and a lot of harm comes from people who bypass these guardrails, there’s now no shortage of “uncensored” LLMs that will answer whatever question you want, as well as AI image generation models and platforms that make it easy to create whatever nonconsensual images users can imagine.
An entity claiming to be United Healthcare is sending bogus copyright claims to internet platforms to get Luigi Mangione fan art taken off the internet, according to the print-on-demand merch retailer TeePublic. An independent journalist was hit with a copyright takedown demand over an image of Luigi Mangione and his family she posted on Bluesky, and other DMCA takedown requests posted to an open database and viewed by 404 Media show copyright claims trying to get “Deny, Defend, Depose” and Luigi Mangione-related merch taken off the internet, though it is unclear who is filing them.
Artist Rachel Kenaston was selling merch with the following design on TeePublic, a print-on-demand shop:
She got an email from TeePublic that said “We're sorry to inform you that an intellectual property claim has been filed by UnitedHealth Group Inc against this design of yours on TeePublic,” and said “Unfortunately, we have no say in which designs stay or go” because of the DMCA. This is not true—platforms are able to assess the validity of any DMCA claim and can decide whether to take the supposedly infringing content down or not. But most platforms choose the path of least resistance and take down content that is obviously not infringing; Kenaston’s clearly violates no one’s copyright. Kenaston appealed the decision and TeePublic told her: “Unfortunately, this was a valid takedown notice sent to us by the proper rightsholder, so we are not allowed to dispute it,” which, again, is not true.
The threat was framed as a “DMCA Takedown Request.” The DMCA is the Digital Millennium Copyright Act, an incredibly important law that governs most copyright enforcement on the internet. Copyright law is complicated, but, basically, DMCA takedowns are filed to give notice to a social media platform, search engine, or website owner that something they are hosting or pointing to is copyrighted, and then, all too often, the platform will take the content down without much of a review in hopes of avoiding being sued.
“It's not unusual for large companies to troll print-on-demand sites and shut down designs in an effort to scare/intimidate artists, it's happened to me before and it works!,” Kenaston told 404 Media in an email. “The same thing seems to be happening with UnitedHealth - there's no way they own the rights to the security footage of Luigi smiling (and if they do.... wtf.... seems like the public should know that) but since they made a complaint my design has been removed from the site and even if we went to court and I won I'm unsure whether TeePublic would ever put the design back up. So basically, if UnitedHealth's goal is to eliminate Luigi merch from print-on-demand sites, this is an effective strategy that's clearly working for them.”
💡
Do you know anything else about copyfraud or DMCA abuse? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 202 505 1702. Otherwise, send me an email at [email protected].
There is no world in which United Health Group owns the copyright to Kenaston’s watercolor painting of Luigi Mangione surveillance footage; it quite literally has nothing to do with anything the company owns. It is illegal to file a DMCA takedown request unless you have a “good faith” belief that you are the rights holder (or are representing the rights holder) of the material in question.
“What is the circumstance under which United Healthcare might come to own the copyright to a watercolor painting of the guy who assassinated their CEO?” tech rights expert and science fiction author Cory Doctorow told 404 Media in a phone call. “It’s just like, it’s hard to imagine” a lawyer thinking that, he added, saying that it’s an example of “copyfraud.”
United Healthcare did not respond to multiple requests for comment, and TeePublic also did not respond to a request for comment. It is theoretically possible that another entity impersonated United Healthcare to request the removal because copyfraud in general is so common.
But Kenaston’s work is not the only United Healthcare or Luigi Mangione-themed artwork on the internet that has been hit with bogus DMCA takedowns in recent days. Several platforms publish the DMCA takedown requests they get on the Lumen Database, which is a repository of DMCA takedowns.
On December 7, someone named Samantha Montoya filed a DMCA takedown with Google that targeted eight websites selling “Deny, Defend, Depose” merch that uses elements of the United Healthcare logo. Montoya’s DMCA is very sparse, according to the copy posted on Lumen: “The logo consists of a half ellipse with two arches matches the contour of the ellipse. Each ellipse is the beginning of the words Deny, Defend, Depose which are stacked to the right. Our logo comes in multiple colors.”
Medium, one of the targeted websites, has deleted the page that the merch was hosted on. It is not clear from the DMCA whether the person filing this is associated with United Healthcare, or whether they are associated with deny-defend-depose.com and are filing against copycats. Deny-defend-depose.com did not respond to a request for comment. Similarly, a DMCA takedown filed by someone named Manh Nguyen targets a handful of “Deny, Defend, Depose” and Luigi Mangione-themed t-shirts on a website called Printiment.com.
Based on the information on Lumen Database, there is unfortunately no way to figure out who Samantha Montoya or Manh Nguyen are associated with or working on behalf of.
Not Just Fan Art
Over the weekend, a lawyer demanded that independent journalist Marisa Kabas take down an image of Luigi Mangione and his family that she posted to Bluesky, which was originally posted on the campaign website of Maryland assemblymember Nino Mangione.
The lawyer, Desiree Moore, said she was “acting on behalf of our client, the Doe Family,” and claimed that “the use of this photograph is not authorized by the copyright owner and is not otherwise permitted by law.”
Moore said that Nino Mangione’s website “does not in fact display the photograph,” even though the Wayback Machine shows that it obviously did display the image. In a follow-up email to Kabas, Moore said “the owner of the photograph has not authorized anyone to publish, disseminate, or otherwise use the photograph for any purpose, and the photograph has been removed from various digital platforms as a result,” which suggests that other websites have also been threatened with takedown requests. Moore also said that her “client seeks to remain anonymous” and that “the photograph is hardly newsworthy.” The New York Post also published the image, and blurred versions of the image remain on its website. The New York Post did not respond to a request for comment. Kabas deleted her Bluesky post “to avoid any further threats,” she said.
“It feels like a harbinger of things to come, coming directly after journalists for something as small as a social media post,” Kabas, who runs the excellent independent site The Handbasket, told 404 Media in a video chat. “They might be coming after small, independent publishers because they know we don’t have the money for a large legal defense, and they’re gonna make an example out of us, and they’re going to say that if you try anything funny, we’re going to try to bankrupt you through a frivolous lawsuit.”
The takedown request to Kabas in particular is notable for a few reasons. First, it shows that the Mangione family or someone associated with it is using the prospect of a copyright lawsuit to threaten journalists for reporting on one of the most important stories of the year, which is particularly concerning in an atmosphere where journalists are increasingly being targeted by politicians and the powerful. But it’s also notable that the threat was sent directly to Kabas for something she posted on Bluesky, rather than being sent to Bluesky itself. (Bluesky did not respond to a request for comment for this story, and we don’t know if Bluesky also received a takedown request about Kabas’s post.)
Sometimes for better, but mostly for worse, social media platforms have long served as a layer between their users and copyright holders (and their lawyers). YouTube deals with huge numbers of takedown requests filed under the Digital Millennium Copyright Act. But to avoid DMCA headaches, it has also set up automated tools such as Content ID and other algorithmic copyright checks that allow copyright holders to essentially claim ownership of—and monetization rights to—supposedly copyrighted material that users upload without invoking the DMCA. YouTube and other social media platforms have also infamously set up “copy strike” systems, where people can have their channels demonetized, downranked in the algorithm, or deleted outright if rights holders claim a post or video violates their copyright, or if an automated algorithm does.
This layer between copyright holders and social media users has created all kinds of bad situations where social media platforms overzealously enforce against content that may be OK to use under fair use provisions or where someone who does not own the copyright at all abuses the system to get content they don’t like taken down, which is what happened to Kenaston.
Copyright takedown processes under social media companies almost always err on the side of copyright holders, which is a problem. On the other hand, because social media companies are usually the ones receiving DMCAs or otherwise dealing with copyright, individual social media users do not usually have to deal directly with lawyers who are threatening them for something they tweeted, uploaded to YouTube, or posted on Bluesky.
There is a long history of powerful people and companies abusing copyright law to get reporting or posts they don’t like taken off the internet. But very often, these attempts backfire as the rightsholder ends up Streisand Effecting themselves. But in recent weeks, independent journalists have been getting these DMCA takedown requests—which are explicit legal threats—directly. A “reputation management company” tried to bribe Molly White, who runs Web3IsGoingGreat and Citation Needed, to delete a tweet and a post about the arrest of Roman Ziemian, the cofounder of FutureNet, for an alleged crypto fraud. When the bribe didn’t work because White is a good journalist who doesn’t take bribes, she was hit with a frivolous DMCA claim, which she wrote about here.
These sorts of threats do happen from time to time, but the fact that several notable ones have happened in quick succession before Trump takes office is notable considering that Trump himself said earlier this week that he feels emboldened by the fact that ABC settled a libel lawsuit with him after agreeing to pay him a total of $16 million. That case—in which George Stephanopoulos said that Trump was found civilly liable of “rape” rather than of “sexual assault”—has scared the shit out of media companies.
This is because libel cases for public figures consider whether that person’s reputation was actually harmed, whether the news outlet acted with “actual malice,” rather than just negligence, and the severity of the harm inflicted. Considering Trump is the most public of public figures, that he still won the presidency, and that a jury did find him liable for a “sexual assault,” this is a terrible kowtowing to power that sets a horrible precedent.
Trump’s case with ABC isn’t exactly related to a DMCA takedown filed over a Bluesky post, but they’re both happening in an atmosphere in which powerful people feel empowered to target journalists.
“There’s also the Kash Patel of it all. They’re very openly talking about coming after journalists. It’s not hypothetical,” Kabas said, referring to Trump’s pick to lead the FBI. “I think that because the new administration hasn’t started yet, we don’t know for sure what that’s going to look like,” she said. “But we’re starting to get a taste of what it might be like.”
What’s happening to Kabas and Kenaston highlights how screwed up the internet is, and how rampant DMCA abuse is. Transparency databases like Lumen help a lot, but it’s still possible to obscure where any given takedown request is coming from, and platforms like TeePublic do not post full DMCAs.
Tuesday night, the pilots of at least 11 commercial planes flying into New York City-area airports reported having lasers from the ground shined at their aircraft, including in some cases their cockpits, according to an analysis of air traffic control audio obtained by 404 Media. In some of the audio, pilots can be heard saying the lasers are “definitely directed straight at us,” that the lasers “are tracking us,” and, at one point air traffic control says “yep, we’ve been getting them all night, like literally 30 of them.”
The air traffic control recordings, which come from Newark Airport in New Jersey and JFK Airport in New York City, suggest that people in New Jersey are shining powerful lasers at passenger airplanes during one of the busiest travel times of the year amid politician- and media-stoked panic about “mystery drones” in New Jersey. The FBI warned people in New Jersey Tuesday not to shoot at drones or shine lasers at them. A military pilot flying over New Jersey also said he was injured by a laser earlier this week. The air traffic control analysis shared with 404 Media was done by John Wiseman, whose work analyzing open-source flight data has previously uncovered secret FBI surveillance programs. His analysis suggests that people blasting “drones” with lasers is not some theoretical issue, but instead could cause real disruption or harm to commercial pilots.
“Getting lasered about two miles up, our right hand side, our present position,” the pilot of American Airlines flight 586, a flight from Chicago to Newark, said.
“Okay, yep, we’ve been getting them all night, like literally 30 of them,” air traffic control responds. “Do you know what color it was?”
ATC Audio 1
“Green and they are tracking us,” the pilot of American Airlines 586 says.
If you are wondering what to think of the New Jersey mystery drone situation, it is this: AHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHhHhhhhhHHHHHHhhH.
Last week, I wrote at length that the mystery drones in New Jersey are almost definitely a mass delusion caused by a bunch of people who don’t know what they’re talking about looking at the sky and reporting manned aircraft and hobbyist drones as being something anomalous. I said this because we have seen this pattern of drone reports before, and this is exactly what has happened in those instances. Monday evening, a group of federal agencies including the Department of Homeland Security, the FBI, the Federal Aviation Administration, and the Department of Defense issued a joint statement telling everyone to please calm down.
“Having closely examined the technical data and tips from concerned citizens, we assess that the sightings to date include a combination of lawful commercial drones, hobbyist drones, and law enforcement drones, as well as manned fixed-wing aircraft, helicopters, and stars mistakenly reported as drones,” the statement reads. “We have not identified anything anomalous and do not assess the activity to date to present a national security or public safety risk over the civilian airspace in New Jersey or other states in the northeast.”
And yet the New Jersey drone story will not go away and has only gotten worse. Opportunistic politicians are stoking mass panic to cynically raise their profile and to get themselves booked on national cable news channels and perpetuate the panic cycle. The fact that the government is telling people there is no conspiracy is, to a certain set of politicians, itself a conspiracy.
All of this has become a no-win clusterfuck for everyone except the attention-seeking grifters within the government who are themselves railing against the government to focus attention on themselves. To these people, government inaction is unacceptable, and government actions and explanations cannot be trusted. Meanwhile, regular-ass people on the internet have debunked many viral images and videos of “drones” by cross-referencing them with known flight patterns of actual planes, or have been able to identify what the “mystery” drones are by comparing lights on the “drones” to lights on known models of manned aircraft.
This has led to predictable outcomes such as random people in New Jersey shining laser pointers (and possibly shooting guns?) at passenger planes, which is very dangerous.
It is impossible to keep up with every Politician Who Should Know Better who has said something stupid, but Rolling Stone and Defector both have worthwhile rundowns of what has been going on the last few days.
We have reached Marjorie Taylor Greene-is-personally-threatening-to-shoot-down-the-drones levels of insanity. Former Maryland governor and failed Senate candidate Larry Hogan tweeted a viral picture of Orion’s Belt and called it a drone. January 6 attendee, QAnon booster, and Pennsylvania State Senator Doug Mastriano, who can regularly be relied on to make any crisis worse by contributing his dumbassery, tweeted an image of a TIE Fighter replica from Star Wars that has been regularly used in memes for nearly two years and said “It is inconceivable that the federal government has no answers nor has taken any action to get to the bottom of the unidentified drones.” He got Community Noted, then followed up with a post claiming it was all a joke and a commentary on the modern state of journalism.
Local politicians who fashion themselves as more serious advocates for the people of New Jersey have also found themselves regularly getting booked on national cable TV shows, their tweets regularly going viral; Dawn Fantasia, a New Jersey assemblywoman who rose to prominence in the state as a principal running against the general concept of Woke, has done interviews on Fox, CNN, and News Nation. Kristen Cobo of Moms for Liberty, which is most famous for pushing schools to ban books and demonize LGBTQ+ students, filmed “approximately 8 suspected drones,” then talked about it in an interview on News Nation. New Jersey State Senator Douglas Steinhardt has said on CNN that the idea that these are manned aircraft is “insulting” and that we must “combat Washington DC gaslighting.” Gubernatorial candidate and AM talk radio host Bill Spadea bravely filmed a video on the side of the road that included drones and suggested that it “might be a foreign government” and that they should be shot down.
It is easy to look at social media posts from these folks, roll one’s eyes, and move on. As a reporter and someone who has covered drones endlessly, I also find all of this absurdity kind of fun and a welcome distraction from all the other dystopian stuff we report on. But I know many people who live in New Jersey and have family there, and all of this is causing some level of undue panic.
This week Jason, as both a drones and aliens reporter, tells us what is most likely happening with the mysterious drones flying over New Jersey. After the break, Joseph explains how cops in Serbia are using Cellebrite phone unlocking tech as a doorway to installing malware on activists' and journalists' phones. In the subscribers-only section, Sam tells us all about an amazing art project using traffic cameras in New York City.
Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version. It will also be in the show notes in your podcast player.
The Department of Homeland Security (DHS) believes that China, Russia, Iran, and Israel are the “primary” countries exploiting security holes in telecommunications networks to spy on people inside the United States, which can include tracking their physical movements and intercepting calls and texts, according to information released by Senator Ron Wyden.
The news provides more context around use of SS7, the exploited network and protocol, against phones in the country. In May, 404 Media reported that an official inside DHS’s Cybersecurity and Infrastructure Security Agency (CISA) broke with his department’s official narrative and publicly warned about multiple SS7 attacks on U.S. persons in recent years. Now, the newly disclosed information provides more specifics on where at least some SS7 attacks are originating from.
The information is included in a letter the Department of Defense (DoD) wrote in response to queries from the office of Senator Wyden. The letter says that in September 2017 DHS personnel gave a presentation on SS7 security threats at an event open to U.S. government officials. The letter says that Wyden staff attended the event and saw the presentation. One slide identified the “primary countries reportedly using telecom assets of other nations to exploit U.S. subscribers,” it continues.
WordPress co-founder and CEO of Automattic Matt Mullenweg is trolling contributors and users of the WordPress open-source project by requiring them to check a box that says “Pineapple is delicious on pizza.”
The change was spotted by WordPress contributors late Sunday, and is still up as of Monday morning. Trying to log in or create a new account without checking the box returns a “please try again” error.
Last week, as part of the ongoing legal battle between WP Engine and Automattic, the company that owns WordPress.com, a judge ordered Mullenweg to remove a controversial login checkbox from WordPress.org that required users to pledge that they were not affiliated with WP Engine before logging in.
💡
Do you know anything else about what's going on inside Automattic? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 646 926 1726. Otherwise, send me an email at sam.404.
Authorities in Serbia have repeatedly used Cellebrite tools to unlock mobile phones so they could then infect them with potent malware, including the phones of activists and a journalist, according to a new report from human rights organization Amnesty International.
The report is significant because it shows that although Cellebrite devices are typically designed to unlock or extract data from phones that authorities have physical access to, they can also be used to open the door for installing active surveillance technology. In these cases, the devices were infected with malware and then returned to the targets. Amnesty also says it, along with researchers at Google, discovered a vulnerability affecting a wide range of Android phones that Cellebrite was exploiting. Qualcomm, the impacted chip manufacturer, has since fixed that vulnerability. And Amnesty says Google has remotely wiped the spyware from other infected devices.
“I am concerned by the way police behave during the incident, especially the way how they took/extracted the data from my mobilephone without using legal procedures. The fact that they extracted 1.6 GB data from my mobilephone, including personal, family and business information as well as information about our associates and people serving as a ‘source of information’ for journalist research, is unacceptable,” Slaviša Milanov, deputy editor and journalist at Serbian outlet FAR, whose phone was targeted in such a way, told 404 Media. Milanov covers, among other things, corruption.
How are you? Do you feel emotionally stable? Just checking because our main story this week is about the odds that our dear blessed Sun will release a superflare that wipes out global infrastructure—or worse! TW: Heliophysics.
Then, a palate cleanser with the Firefly Sparkle, its Best Friend, its New Best Friend, and the Canadians we met along the way (it will make sense, I promise). Next, a spotlight on the small but consequential poops that could help fight climate change. Last, there’s a party at tortoise rock, but your RSVP is 35,000 years too late. There are some real hair-raisers and heart-warmers this week. Enjoy!
Can the Sun Produce Apocalyptic Superflares? IMO Might Be GTK!
Once again, it is time to salute the almighty Sun. This week, scientists made new strides in addressing a longstanding and rather unsettling mystery: Does the Sun ever produce “superflares,” which are stellar outbursts that are thousands of times more destructive than a typical solar flare? It’s a great question to ask if you are interested in the odds that the Sun might obliterate civilization, and perhaps a whole lot else, within our lifetimes.
Now, new research based on observations of more than 56,000 Sun-like stars suggests that they produce superflares around once every century on average, which is a much higher rate than previous estimates. But before you start drawing up blueprints for a subterranean fortress, let me emphasize that the study does not conclude that the Sun necessarily shares this predilection for carnage. We just do not yet know enough about the risk of solar superflares, which was one motivation for the new study.
“Solar flares have been observed for less than two centuries,” said researchers led by Valeriy Vasilyev of the Max Planck Institute for Solar System Research. The team noted that the strongest impact in this brief record is the Carrington Event, a massive solar storm in the year 1859 that reached a total energy exceeding 10³² erg (an erg is a very small unit in the centimetre-gram-second system for measuring energy; there are 10 million ergs in one joule).
The Carrington number falls well below the threshold of superflares observed around other main sequence stars like the Sun, which range from 10³⁴ erg to 10³⁶ erg. I don’t have a handy comparison here, but this is the type of energy that could potentially mess with a planet’s atmosphere, wreak havoc on ecosystems, and melt the ice on outer solar system moons.
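For scale, combining the figures above with the erg-to-joule conversion (10 million, or $10^7$, ergs per joule) puts the events on a common footing:

```latex
\begin{align*}
\text{Carrington Event: } & 10^{32}\,\mathrm{erg} \times 10^{-7}\,\mathrm{J/erg} = 10^{25}\,\mathrm{J} \\
\text{Observed superflares: } & 10^{34}\text{--}10^{36}\,\mathrm{erg} = 10^{27}\text{--}10^{29}\,\mathrm{J}
\end{align*}
```

In other words, the weakest superflares observed around other Sun-like stars carry roughly 100 times the energy of the Carrington Event, and the strongest about 10,000 times.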
“It is unknown whether the Sun can unleash [...] superflares, and if so, how frequently that could happen,” the researchers said. “The period of direct solar observation is too short to reach any firm conclusions.”
One window into this mystery is the cosmogenic isotope record, which is an earthly archive of solar activity that shows up in natural sources like ice cores and tree rings (for more details about this record, check out the lead story in a previous column). This record has exposed five confirmed (and three candidate) extreme solar events over the past 10,000 years that would have caused major technological disruptions if they happened today. But there’s no recent evidence that the Sun has unleashed superflares powerful enough to trigger, for instance, an extinction event.
In their study, Vasilyev and his colleagues amassed a huge dataset of Sun-like stars observed by NASA’s retired Kepler space telescope. The team is not the first to plumb the Kepler archive for superflares around Sun-like stars. But the new study is based on a larger observation set that includes objects left out of previous work, such as stars with unknown rotation periods and stars that are not in isolated positions in the sky.
The 56,000+ stars in this sample flare at frequencies that are approximately two orders of magnitude higher than previous measurements, averaging out at once a century. But it will take more research to understand whether the Sun shares this propensity with members of its stellar class, or if superflares only occur in certain circumstances that (hopefully) don’t currently apply to the Sun.
“We cannot exclude the possibility that there is an inherent difference between flaring and non-flaring stars that was not accounted for by our selection criteria,” Vasilyev and his colleagues said. “If so, the flaring stars in the Kepler observations would not be representative of the Sun.”
“If, instead, our sample of Sun-like stars is representative of the Sun’s future behavior, it is substantially more likely to produce a superflare than was previously thought,” they concluded.
There’s one possible solution to the Fermi paradox. Hahaha…sleep tight!
Once upon a time, there was a baby galaxy called the Firefly Sparkle. It sounds like it hails from the My Little Pony universe, but the Firefly Sparkle was born during cosmic dawn, an era that unfolded a few hundred million years after the Big Bang. New observations from the James Webb Space Telescope (JWST) have revealed incredible details about this galactic infant, including the presence of two neighboring galaxies called Firefly-Best Friend and Firefly-New Best Friend.
“The Firefly Sparkle exhibits traits of a young, gas-rich galaxy in its early formation stage,” said researchers co-led by Lamiya Mowla of Wellesley College and Kartheik Iyer of Columbia University. “These observations provide our first spectrophotometric view of a typical galaxy in its early stages, in a 600-million-year-old Universe.”
Because looking deep into space means looking back in time, we can only observe the version of the Firefly Sparkle and its Best Friends that existed at cosmic dawn. In this early era, the Firefly Sparkle was about 10,000 times less massive than the present Milky Way, but it’s possible that it has ultimately evolved into a galaxy similar to our own somewhere out there beyond our observational limits.
In addition to being a mind-boggler, the study gets extra points for being based on the Canadian Unbiased Cluster Survey, or CANUCS, which is a specialized JWST experiment run by Canadian researchers. We stand on guard for a top-tier acronym. o7
I never would have expected the phrase “fecal pellet density” to lift my spirits, but research into the salvific power of zooplankton poop has managed to do just that this week.
Zooplankton, a diverse group of tiny aquatic animals, are a key valve in the so-called “biological pump” that removes greenhouse gases from the atmosphere and stores them in seafloor sediments. One speculative solution to the climate crisis is to make this pump more efficient in order to lock away more of the gases that are contributing to global warming.
Now, scientists have discovered that sprinkling a little bit of clay dust over an algal bloom, which is a food source for zooplankton, provides some heft to the animals’ excrement. As a consequence, more carbon gets pulled down by the clay poop anchors to ocean depths conducive to sequestration.
One zooplankton species produced “denser fecal pellets with 1.8- to 3.6-fold higher sinking velocity compared to controls,” said researchers led by Diksha Sharma of Dartmouth College. “These findings provide insights into how atmospheric dust-derived clay minerals interact with marine microorganisms to enhance the biological carbon pump, facilitating the burial of organic carbon at depths where it is less likely to exchange with the atmosphere.”
And that’s why my vote for Time’s Person of the Year is: Fecal Pellets.
There’s a Party at Tortoise Rock and Your Ancestors Are Invited
Some 35,000 years ago, dozens of people gathered for communal rituals around an engraved tortoise in a hidden chamber of Manot Cave in Israel. That’s the conclusion reached by archaeologists who discovered what they believe is a concealed “ritual compound…in the deepest and darkest part of Manot Cave” that was centered around a geometric depiction of a tortoise on a dolomite boulder.
“Thus far, Manot Cave is the only site in the Levant to yield clear evidence for the existence of a communal ritual compound in the Upper Paleolithic,” a period that spans approximately 50,000 to 12,000 years ago, said researchers led by Omry Barzilai of the University of Haifa.
“The reasoning behind the Manot artist’s choice to represent the tortoise in a semi-abstract and symbolic manner remains unknown,” the team added. “Beyond their dietary importance, tortoises probably played a major role in the spiritual world of the Paleolithic people, possibly because of the resemblance in form and function between the shell and the cave, both providing shelter and protection. In the Epipaleolithic period, tortoise remains have also been associated with burial practices.”
The study is worth a look for the images, as well as the slick 3D reconstruction of Manot Cave. This site was discovered quite recently, in 2008, after a bulldozer broke through its roof, but it has already yielded major finds about its human occupants as far back as 55,000 years ago.
Move over, tinsel and string lights: This holiday season, we’re bringing back ritual engraved tortoises. Sometimes, the old ways are best.
This is Behind the Blog, where we share our behind-the-scenes thoughts about how a few of our top stories of the week came together. This week, we discuss archiving nostalgia, newsworthiness, and plans for 2025.
SAM: Between the four of us, we’ve written dozens of stories about archivists, internet archival efforts, and general attempts to save what’s ephemeral, whether it’s rotting links or literally-rotting magnetic tape in VHS cassettes.
Earlier this month, I was looking for costume (cosplay?) ideas for a “yuletide” themed Renaissance faire, and was trying to track down video from my favorite Christmas movie: The Life and Adventures of Santa Claus, a stop-motion movie from the ’80s by Rankin/Bass. This is difficult for a couple reasons: the movie has an extremely generic name that’s also the name of the 1902 book by L. Frank Baum (the guy who wrote The Wonderful Wizard of Oz) that it’s based on, and of a remake in the 2000s that is nowhere near as weird or cool; the plot is nearly incomprehensible, and in my child-memory feels more like a dream or a nightmare, so it's impossible to put into a search bar; and it’s apparently not on any streaming service or YouTube, at least that I could find.
The calls about the mystery drones lighting up the night sky were sporadic at first. Then they came daily, from all over the state. A multi-agency task force was convened. The FBI got involved. So did the military. The local news reported on a “band of large drones” hovering over the state that came out most nights. The sightings became national news. People theorized that they were classified government aircraft, or foreign spies. Some people wondered whether they were aliens.
This was not New Jersey this month, where drone sightings have caused a mass panic and involvement from local officials all the way up to the White House. It was Colorado in December 2019 and January 2020. Months passed, and Colorado’s mystery drones turned out not to be mysterious at all. Authorities eventually determined that some of the “drones” were SpaceX Starlink satellites. Others were regular passenger aircraft approaching the airport, and many “were visually confirmed to be hobbyist drones by law enforcement” and were not breaking any laws. Some were absolutely nothing and were chalked up to people perceiving lights because of atmospheric conditions. In other cases, law enforcement started to fly their own drones to investigate the supposed mystery drones, creating the possibility of further “mystery drone” sightings, according to public records released after the initial mass panic.
It remains unclear what the “mystery drones” that are currently being seen above New Jersey and Staten Island actually are. But the pattern we are seeing in New Jersey right now is following the exact pattern we saw in Colorado in the winter of 2019 and that has been seen numerous times throughout human history when there are mass drone or mass UFO sightings.
The drones have captured the public’s imagination, and the concern of local, state, and national politicians. In the last few days, the mayors of 21 different New Jersey towns wrote a letter to Gov. Phil Murphy demanding a full investigation and stating that “the lack of information and clarity regarding these operations has caused fear and frustration among our constituents.” The FBI is investigating, as is the Department of Homeland Security. New Jersey congressional representatives and senators are demanding answers. The Pentagon has said that the drones are not an Iranian “mothership,” despite what one lawmaker has claimed, and the White House says Joe Biden is aware of the situation. The story is everywhere: It is the talk of many of my group texts, is all over my social media feeds, and is being discussed by everyone I know who has even a passing connection to New Jersey. Conspiracy theorists, as you’d expect, are running wild with the story.
Again, we don’t know what these drones are right now, or if they are even drones at all. But in the past, this exact hype and fear cycle has played out, and, when the dust has settled, it has turned out that the “mystery drones” were neither mysterious nor drones.
“I’ve been puzzling about the NJ drone stuff, and I think that it’s an interesting example of the latest form of mass public panics over mysterious aircraft—which have been happening since the time of the Ancient Greeks,” Faine Greenwood, who studies civilian drone activity, told 404 Media. “My best guess about what’s actually happening is some form of confidential US aerial testing or contractor testing is happening and the federal authorities are communicating very badly with each other and others. And then people heard about one or two sightings, and everybody starts seeing drones everywhere (much like UFOs). Quite a few people [are] posting videos that seem like normal flight patterns … There’s such a huge amount of confusion around normal non-drone stuff in the sky. People are remarkably bad at identifying objects in flight.”
(The Pentagon has denied that the drones are U.S. military, but the Pentagon has a long documented history of lying about such things to keep classified testing a secret).
Greenwood is right: Regular people, politicians, and even commercial pilots are remarkably bad at identifying exactly what things flying in the air actually are. In New Jersey, there have been many news stories that are based on politicians confidently saying that the drones are a specific size or act in a specific manner or have specific characteristics, which is exactly what happened in Colorado, and the vast majority of those initial stories were wildly incorrect.
“In a post on the social media platform X, the assemblywoman Dawn Fantasia described the drones as up to 6ft in diameter and sometimes traveling with their lights switched off,” the Guardian wrote. “The devices do not appear to be being flown by hobbyists, Fantasia wrote.” Fantasia did write this on X, in a deeply unhinged post that also called for “military intervention” and said “to state that there is no known or credible threat is incredibly misleading.”
Greenwood wrote an article in 2019 positing that “Drones are the new flying saucers,” which they said still holds up in 2024. In 2015, I wrote an article called “Drones are the new UFOs” that, nearly a decade later, still feels relevant. That article was based on a Federal Aviation Administration (FAA) report, drawn from reports by commercial airline pilots, showing that in 2014 pilots reported 678 “drone sightings” and near misses. An analysis of that data by the Academy of Model Aeronautics showed that a huge number of these “drone sightings,” which, again, were reported by commercial pilots whose job is to monitor the sky while they’re flying, were not drones at all. Objects classified as “drones” by pilots included “a balloon,” a “mini blimp,” a “large vulture,” and a “fast moving gray object.” Other objects initially classified as drones were later just deemed to be “UFOs.”
Loretta Alkalay, who worked at the FAA for 30 years and is now an attorney focusing on aviation law and drone consulting, told 404 Media that they may be U.S. government or military drones, because their appearance over bodies of water would make them safe to knock out of the sky without threatening people on the ground. (Again, the Pentagon has said that they are not military drones, but the military is not always forthcoming about such things and inter-agency communication about who is flying where and when is sometimes lacking).
“I assume they’re government or military drones because otherwise why wouldn’t the government take them down?” Alkalay said. “The military and other agencies are authorized to use jamming technology to neutralize drone threats and many of these drones have been spotted over water where the risk of harm from a falling drone would be negligible.” The FAA has put up a no-fly zone in the areas where the drones have been spotted, which syncs to geofences in many types of drones. New Jersey governor Phil Murphy says he wants the feds to shoot them down. Greenwood pointed out that “we do have remote ID systems that allow authorities to readily identify law-abiding drones, so blanket airspace restrictions are unnecessary and will only harm people abiding by the rules.”
In Colorado in 2020, authorities eventually said they “confirmed no incidents involving criminal activity, nor have investigations substantiated reports of suspicious or illegal drone activity.” In addition to SpaceX satellites being falsely reported as drones, 13 sightings ended up being “planets, stars, or small hobbyist drones.” Six of them were commercial planes reported as being drones. Additional public records obtained about similar drone sightings in Nebraska that became part of the Colorado scare discussed the concern of “space potatoes” being dropped from unidentified drones over farmland. It turned out that these were gel logs called SOILPAM, which are used by farmers to keep their irrigation systems from moving around in wet soil, and that farmers were dropping these from drones over their fields.
It should be noted that hobby and commercial drones are legal. And that many, many police departments and public agencies now have drones, and that many of them do a bad job of coordinating with other parts of the government about where and when they are flying. In Colorado, after hearing reports about mystery drones, government entities began flying their own drones to attempt to surveil the drones in the sky, and drone monitoring companies that use drones to look for drones also swooped in. It was a self-perpetuating hysteria.
For years, I worked on a Netflix documentary about UFO mass sightings called Encounters, and one thing that became clear from working on that documentary, which followed specific mass UFO sightings in Texas, Wales, Zimbabwe, and Japan, is that people don’t spend a lot of time looking at the sky until they have a reason to do so. News reports about UFOs or “mystery drones” cause more people to look to the sky, which begets more reports and more panic. Often, these sightings do have a straightforward explanation; there are lots of things flying through our atmosphere or low Earth orbit that are allowed to be there and are well known, but that suddenly get reported as anomalous.
In Colorado, interest in the “mystery drones” disappeared as reports about the first cases of COVID-19 began in the United States. Media attention and public interest in the drones disappeared. And then so did the sightings.
Artist Morry Kolman made a website called Traffic Cam Photobooth that lets people take “selfies” using publicly-available feeds from traffic cameras. The New York City Department of Transportation sent him a cease and desist letter demanding he cut it out. In response, he kept the site online and held the letter up to a traffic camera, according to Kolman’s posts on social media.
In the letter sent on November 6, NYC DOT demands Kolman “immediately remove and disable all portions of TCP’s website that relates to NYC traffic cameras and/or encourages members of the public to engage in dangerous and unauthorized behavior.” The department claims in the letter that Kolman’s project is “promoting the unauthorized use of NYC traffic cameras” and “encourages pedestrians to violate NYC traffic rules and engage in dangerous behavior.”
YouTube is AI-generating replies for creators on its platform so they can more easily and quickly respond to comments on their videos, but it appears that these AI-generated replies can be misleading, nonsensical, or weirdly intimate.
YouTube announced that it would start rolling out “editable AI-enhanced reply suggestions” in September, but thanks to a new video uploaded by Clint Basinger, the man behind the popular LazyGameReviews channel, we can now see how they actually work in the wild. For years, YouTube has experimented with auto-generated suggested replies to comments that work much like the suggested replies you might have seen in your Gmail, allowing you to click on one of three suggested responses that might be relevant, like “Thanks!” or “I’m on it,” instead of typing out the response yourself. “Editable AI-enhanced reply suggestions” on YouTube work similarly, but instead of short, simple replies, they offer longer, more involved answers that are “reflective of your unique style and tone.” According to Basinger’s video demoing the feature, it does appear the AI-generated replies are trained on his own comments, at times replicating previous comments he made word for word, but many of the suggested replies are strangely personal, wrong, or just plain weird.
For example, last week Basinger posted a short video about a Duke Nukem-branded G Fuel energy drink that comes as a powder that needs to be mixed with water. In the video, Basinger makes himself a serving of the drink but can’t find the scoop he’s supposed to use to measure out the formula.
“I wouldn’t be surprised if the scoop was buried in the powder,” one YouTube user commented on the Duke Nukem G Fuel video, which certainly sounds right to me as someone who's been serving up baby formula for the last year.
YouTube’s AI suggested that Basinger reply to that comment by saying: “It’s not lost, they just haven’t released the scoop yet. It’s coming soon.”
I can see how that comment could make sense in the context of the types of other videos LGR publishes, which usually review old games, gadgets, and other tech, but is obviously wrong in this instance.
Another suggested reply to that same comment said: “I’ll have to check if they’re using a proprietary blend that requires a special scoop.”
“My creativity and craft stems completely from my own brain, and handing that off to some machine learning thing that mimics my style not only takes away from the enjoyment of it all for me, but it feels supremely disingenuous,” Basinger told me in an email. “The automated comments in particular come across as tone deaf, since a huge reason YouTube makes sense at all is the communication and relationship between audience and creator. I've had dozens of people say that they now second-guess every interaction with YouTubers in the comments since it could easily be a bot, a fake response.”
Another commenter on the Duke Nukem G Fuel video joked that Basinger should have had a tighter grip on the lid as he was shaking the formula to prevent it from flying all over the place.
Basinger bursts out laughing as he reads YouTube’s suggested AI-generated reply to that comment: “I’ve got a whole video on lid safety coming soon, so you don’t have to worry!”
At other times, the AI-suggested replies are just nonsensical. The Duke Nukem G Fuel review wasn’t posted to the main LGR channel, but a channel called LGR Blerbs, which is his naming convention for shorter, less deeply researched videos about whatever he’s interested in. A few commenters said they were happy he was posting to the Blerbs channel again, with one saying “Nice. Back to the blerbs.”
YouTube’s AI suggested Basinger reply to that comment by saying: “It’s a whole new kind of blerp,” which I suppose is funny, but also doesn’t mean anything.
The weirdest examples of AI-generated replies in the video in my opinion are those that attempt to speak to Basinger’s personal life. In response to another commenter who said they were happy Basinger was posting to the Blerbs channel again, YouTube’s AI suggested the following reply: “Yeah, I’m a little burnt out on the super-high-tech stuff so it was refreshing to work on something a little simpler 🙂.” Another AI-generated reply thanked commenters for their patience and said that Basinger was taking a break but was back to making videos now.
YouTuber burnout is a well established problem among YouTube creators, to the point where YouTube itself offers tips on how to avoid it. The job is taxing not only because churning out a lot of videos helps them get picked up by YouTube’s recommendation algorithm, but also because comments on those videos and replies to those comments help increase engagement and visibility for those videos.
YouTube rewarding that type of engagement incentivizes the busywork of creators replying to comments, which predictably resulted in an entire practice and set of tools that allow creators to plug their channels into a variety of AI services that will automatically reply to comments for them. YouTube’s AI-enhanced reply suggestions feature just brings that practice of manufactured engagement in-house.
Clearly, Google’s decision to brand the feature as editable AI-enhanced reply suggestions means that it’s not expecting creators to use them as-is. Its announcement calls them “a helpful starting point that you can easily customize to craft your reply to comments.” However, judging by what they look like at the moment, many of the AI-generated replies are too wrong or misleading to be salvageable, which once again shows the limitations of generative AI’s capabilities despite its rapid deployment by the biggest tech companies in the world.
“I would not consider using this feature myself, now or in the future,” Basinger told me. “And I'd especially not use it without disclosing the fact first, which goes for any use of AI or generative content at all in my process. I'd really prefer that YouTube not allow these types of automated replies at all unless there is a flag of some kind beside the comment saying ‘This creator reply was generated by machine learning’ or something like that.”
The feature rollout is also a worrying sign that YouTube could see a rapid descent towards AI-sloppyfication of the type we’ve been documenting on Facebook.
In addition to demoing the AI-enhanced reply suggestion feature, Basinger is also one of the few YouTube creators who now has access to the new YouTube Studio “Inspiration” tab, which YouTube also announced in September. YouTube says this tab is supposed to help creators “curate suggestions that you can mold into fully-fledged projects – all while refining those generated ideas, titles, thumbnails and outlines to match your style.”
Basinger shows how he can write a prompt that immediately AI-generates an idea for a video, including an outline and a thumbnail. The issue in this case is that Basinger’s channel is all about reviewing real, older technology, and the AI will outline videos for products that don’t exist, like a Windows 95 virtual reality headset. Also, the suggested AI-generated thumbnails have all the issues we’ve seen in other AI image generators, like clear misspelling of simple words.
“If you’re really having that much trouble coming up with a video idea, maybe making videos isn’t the thing for you,” Basinger said.
Automattic, the company that owns WordPress.com, is required to remove a controversial login checkbox from WordPress.org and let WP Engine back into its ecosystem after a judge granted WP Engine a preliminary injunction in its ongoing lawsuit.
In addition to removing the checkbox—which requires users to denounce WP Engine before proceeding—the preliminary injunction orders that Automattic is enjoined from “blocking, disabling, or interfering with WP Engine’s and/or its employees’, users’, customers’, or partners’ access to wordpress.org” or “interfering with WP Engine’s control over, or access to, plugins or extensions (and their respective directory listings) hosted on wordpress.org that were developed, published, or maintained by WP Engine,” the order states.
💡
Do you have experience at Automattic, current or past? I would love to hear from you. Using a non-work device, you can message me securely on Signal at sam.404. Otherwise, send me an email at [email protected].
In the immediate aftermath of the decision, Automattic founder and CEO Matt Mullenweg asked for his account to be deleted from the Post Status Slack, which is a popular community for businesses and people who work on WordPress’s open-source tools.
Movies are supposed to transport you places. At the end of last month, I was sitting in the Chinese Theater, one of the most iconic movie theaters in Hollywood, in the same complex where the Oscars are held. And as I was watching the movie, I found myself transported to the past, thinking about one of my biggest regrets. When I was in high school, I went to a theater to watch a screening of a movie one of my classmates had made. I was 14 years old, and I reviewed it for the school newspaper. I savaged the film’s special effects, which were done by hand with love and care by someone my own age, and were lightyears better than anything I could do. I had no idea what I was talking about, how special effects were made, or how to review a movie. The student who made the film rightfully hated me, and I have felt bad about what I wrote ever since.
So, 20 years later, I’m sitting in the Chinese Theater watching AI-generated movies in which the directors sometimes cannot make the characters consistently look the same, or make audio sync with lips in a natural-seeming way, and I am thinking about the emotions these films are giving me. The emotion that I feel most strongly is “guilt,” because I know there is no way to write about what I am watching without explaining that these are bad films, and I cannot believe that they are going to be imminently commercially released, and the people who made them are all sitting around me.
Then I remembered that I am not watching student films made with love by an enthusiastic high school student. I am watching films that were made for TCL, the largest TV manufacturer on Earth, as part of a pilot program designed to normalize AI movies and TV shows for an audience that it plans to monetize explicitly with targeted advertising and whose internal data suggests that the people who watch its free television streaming network are too lazy to change the channel. I know this is the plan because TCL’s executives just told the audience that this is the plan.
TCL said it expects to sell 43 million televisions this year. To augment the revenue from its TV sales, it has created a free TV service called TCL+, which is supported by targeted advertising. A few months ago, TCL announced the creation of the TCL Film Machine, which is a studio that is creating AI-generated films that will run on TCL+. TCL invited me to the TCL Chinese Theater, which it now owns, to watch the first five AI-generated films that will air on TCL+ starting this week.
Before airing the short, AI-generated films, Haohong Wang, the general manager of TCL Research America, gave a presentation in which he explained that TCL’s AI movie and TV strategy would be informed and funded by targeted advertising, and that its content will “create a flywheel effect funded by two forces, advertising and AI.” He then pulled up a slide that suggested AI-generated “free premium originals” would be a “new era” of filmmaking alongside the Silent Film era, the Golden Age of Hollywood, etc.
Catherine Zhang, TCL’s vice president of content services and partnerships, then explained to the audience that TCL’s streaming strategy is to “offer a lean-back binge-watching experience” in which content passively washes over the people watching it. “Data told us that our users don’t want to work that hard,” she said. “Half of them don’t even change the channel.”
“We believe that CTV [connected TV] is the new cable,” she said. “With premium original content, precise ad-targeting capability, and an AI-powered, innovative engaging viewing experience, TCL’s content service will continue its double-digit growth next year.”
Starting December 12, TCL will air the five AI-generated shorts I watched on TCL+, the free, ad-supported streaming platform promoted on TCL TVs. These will be the first of many more AI-generated movies and TV shows created by TCL Film Machine and will live alongside “Next Stop Paris,” TCL’s AI-generated romcom whose trailer was dunked on by the internet.
💡
Do you know anything else about AI-generated films or AI in the movie industry? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 202 505 1702. Otherwise, send me an email at [email protected].
The first film the audience watched at the Chinese Theater was called “The Slug,” and it is about a woman who has a disease that turns her into a slug. The second is called “The Audition,” and it is kind of like an SNL digital short where a real human actor goes into an acting audition and is asked to do increasingly outrageous reads which are accomplished by deepfaking him into ridiculous situations, culminating in the climax, which is putting the actor into a series of famous and copyrighted movie scenes. George Huang, the director of that film, said afterward he thought putting the actor’s face into iconic film scenes “would be the hardest thing, and it turned out to be the easiest,” which was perhaps an unknowing commentary on the fact that AI tools are surreptitiously trained on already-existing movies.
“Sun Day” was the most interesting and ambitious film and is a dystopian sci-fi where a girl on a rain planet wins a lottery to see the sun for the first time. Her spaceship explodes but she somehow lives. “Project Nexus” is a superhero film about a green rock that bestows superpowers on prisoners that did not have a plot I could follow and had no real ending. “The Best Day of My Life” is a mountaineering documentary in which a real man talks about an avalanche that nearly killed him and led to his leg being amputated, with his narrated story being animated with AI.
All of these films are technically impressive if you have watched lots of AI-generated content, which I have. But they all suffer from the same problem that every other AI film, video, or image you have seen suffers from. The AI-generated people often have dead eyes, vacant expressions, and move unnaturally. Many of the directors chose to do narrative voiceovers for large parts of their films, which is almost certainly done because when the characters in these films do talk, the lip-synching and facial expression-syncing does not work well. Some dialogue is delivered with the camera pointing at the back of characters’ heads, presumably for the same reason.
Text is often not properly rendered, leading to typos and English that bleeds into alien symbols. Picture frames on the wall of “The Slug” do not have discernible images in them. A close-up on a label of a jar of bath salts in the movie reads “Lavendor Breeze Ogosé πy[followed by indecipherable characters].” Scenery and characters’ appearances sometimes change from scene to scene. Scenery is often blurry. When characters move across a shot they often move or glide in unreal ways. Characters give disjointed screams. In “The Slug,” there is a scene that looks very similar to some of the AI Will Smith pasta memes. In “The Best Day of My Life,” the place where the man being buried under an avalanche takes refuge changes from scene to scene and it seems like he is buried in a weird sludge half the time. In “Sun Day,” the only film that really tried to have back-and-forth dialogue between AI-generated characters, faces and lips move in ways that I struggle to explain but which you can see here:
These problems—which truly do affect a viewer’s ability to empathize with any character and are a problem that all AI-generated media to date faces—were explained away by the directors not as things that are distracting, but as creative choices.
“On a traditional film set, the background would be the same, things would be the same [from scene to scene]. The wallpaper [would be the same],” Chen Tang, director of The Slug, said. “We were like, ‘Let’s not do wallpaper.’ It would change a lot. So we were like, ‘How can we creatively kind of get around that?’ So we did a lot of close-up shots, a lot of back shots, you know, tried to keep dialogue to a minimum. It really adds to that sense of loneliness … so we were able to get around some of the current limitations, but it also helped us in ways I think we would have never thought of.”
A few weeks after the screening, I called Chris Regina, TCL’s chief content officer for North America to talk more about TCL’s plan. I told him specifically that I felt a lot of the continuity errors were distracting, and I wondered how TCL is navigating the AI backlash in Hollywood and among the public more broadly.
“There is definitely a hyper focused critical eye that goes to AI for a variety of different reasons where some people are just averse to it because they don't want to embrace the technology and they don't like potentially where it's going or how it might impact the [movie] business,” he said. “But there are just as many continuity errors in major live action film productions as there are in AI, and it’s probably easier to fix in AI than live action … whether you're making AI or doing live action, you still have to have enough eyeballs on it to catch the errors and to think through it and make those corrections. Whether it's an AI mistake or a human mistake, the continuity issues become laughter for social media.”
I asked him about the response to “Next Stop Paris,” which was very negative.
“Look, the truth is we put out the Next Stop Paris trailer way before it was ready for air. We were in an experimental development stage on the show, and we’re still in production now,” he said. “Where we've come from the beginning to where we are today, I think is wildly, dramatically different. We ended up shooting live action actors incorporated into AI, doing some of the most bleeding-edge technology when it comes to AI. The level of quality and the concept is massively changed from what we began with. When we released the trailer we knew we would get love, hate, indifference. At the same time, there were some groundbreaking things we had in there … we welcome the debate.”
In part because of the response to Next Stop Paris, each of the films I watched was specifically created to have a lot of humans working on it. The scripts were written by humans, the music was made by humans, the actors and voice actors were human. AI was used for animation or special effects, which allows TCL to say that AI is going to be a tool to augment human creativity and is here to help human workers in Hollywood, not replace them. These movies were all made over the course of 12 weeks, each with many humans involved in preproduction and postproduction. Each of the directors talked about making the films with the help of people assigned to them by TCL in Lithuania, Poland, and China, who did a lot of the AI prompting and editing. Many of the directors said these films were worked on 24 hours a day by people across the world.
This means that they were made with some degree of love and care, and probably with an eye toward de-emphasizing the inevitable replacement of human labor that will surely happen at some studios. It is entirely possible that these films will be the most “human” commercially released AI films that we will see.
One of the things that happens anytime we criticize AI-generated imagery and video is that people will say “this is the worst it will ever be,” and that this technology will improve over time. It is the case that generative AI can do things today that it couldn’t a year ago, and that it looks much better today than it did a few years ago.
Regina brought this up in our interview, and said that he has already seen “quite a bit of progress” in the last few months.
“If you can imagine where we might be a year or 18 months from now, I think that in some ways is probably what scares a lot of the industry because they can see where it sits today, and as much as they want to poke holes or be critical of it, they do realize that it will continue to be better,” he said.
Making even these films a year or two ago would have been impossible, and there were moments I was watching where I was impressed by the tech. Throughout the panel discussion after the movie, most of the directors talked about how useful the tech could be for a pitch meeting or for storyboarding, which isn’t hard to see.
But it is also the case that TCL knew these films would get a lot of attention and put a lot of time and effort into them, and there is no guarantee that AI-generated films will always have so many humans involved.
“Our guiding principles are that we use humans to write, direct, produce, act, and perform, be it voice, motion capture, style transfer. Composers, not AI, have scored our shorts,” Regina said at the screening. “There are over 50 animators, editors, effects artists, professional researchers, scientists all at work at TCL Studios that had a hand in creating these films. These are stories about people, made by people, but powered by AI.”
Regina told me TCL is diving into AI films because it wants to differentiate itself from Netflix, Hulu, and other streaming sites but doesn’t have the money to spend on content to compete with them, and TCL also doesn’t have as long of a history of working with Hollywood actors, directors, and writers, so it has fewer bridges to burn.
“AI became an entry point for us to do more cost-effective experimentation on how to do original content when we don’t have a huge budget,” he said.
“I think the differentiation point too from the established studios is they have a legacy built around traditional content, and they've got overall deals with talent [actors, directors, writers], and they're very nervous obviously about disrupting that given the controversy around AI, where we don't have that history here,” he added.
The films were made with a variety of AI tools including Nuke, Runway, and ComfyUI, Regina said, and each director’s involvement with the actual AI prompting varied.
I am well aware that my perspective on this all sounds incredibly negative and very bleak. I think AI tools will probably be used pretty effectively by studios for special effects, editing, and other tasks in a way that won’t be so uncanny and upsetting, and more-or-less agree with Ben Affleck’s recent take that AI will do some things well but will do many other things very poorly.
Affleck’s perspective that AI will not make movies as well as humans is absolutely true but it is an incomplete take that also misses what we have seen with every other generative AI tool. For every earnest, creative filmmaker carefully using AI to enhance what they are doing to tell a better story, there will be thousands of grifters spamming every platform and corner of the internet with keyword-loaded content designed to perform in an algorithm and passively wash over you for the sole purpose of making money. For every studio carefully using AI to make a better movie, there will be a company making whatever, looking at it and saying “good enough,” and putting it out there for the purpose of delivering advertising.
I can’t say for sure why any of the directors or individual people working on these films decided to work on AI movies, whether they are actually excited by the prospects here or whether they simply needed work in an industry and town that is currently struggling following a writers strike that was partially about having AI foisted upon them. But there is a reason that every Hollywood labor union has serious concerns about artificial intelligence, there is a reason why big-name actors and directors are speaking out against it, and there is a reason that the first company to dive headfirst, unabashedly into making AI movies is a TV manufacturer that wants to use it to support advertising.
“I just want to take the fear out of AI for people,” Regina said. “I realize that it's not there to the level that everyone might want to hold it up in terms of perfection. But when we get a little closer to perfection or closer in quality to what’s being produced [by live action], well my question to the marketplace is, ‘Well then what?’”
The most openly introspective of any of the directors was Paul Johansson, who directed the AI movie Sun Day, acted in One Tree Hill, and directed 2011’s Atlas Shrugged: Part I.
“I love giving people jobs in Hollywood, where we all work so hard. I am not an idiot. I understand that technology is coming, and it’s coming fast, and we have to be prepared for it. My participation was an opportunity for me to see what that means,” Johansson said. “I think it’s crucial to us moving forward with AI technology to build relationships with artists and respecting each craft so that we give them the due diligence and input into what the emerging new technology means, and not leaving them behind, so I wanted to see what that was about, and I wanted to make sure that I could protect those things, because this town means something to me. I’ve been here a long time, so that’s important.”
Midway through the AI movie Project Nexus, about the green rock, I found myself thinking about my high school classmate and all of the time he must have spent doing his movie’s special effects. Project Nexus careened from scene to scene. Suddenly, a character says, out of nowhere: “What the fuck is going on?” Good question.
This week we start with Joseph's story about how the weapon found on the alleged UnitedHealthcare CEO murderer was a particular 3D printed design. Then Jason tells us what he found about the alleged killer Luigi Mangione through his online accounts, and why, ultimately, this kind of journalism might not matter. After the break, Sam talks about how various healthcare companies removed pages about their leadership after the murder, and what we're seeing when it comes to social content moderation around it. In the subscribers-only section, we talk about Congress getting big mad at Apple and Google after 404 Media's reporting on deepfake apps.
Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version. It will also be in the show notes in your podcast player.
Like nearly everyone else on the internet, yesterday the staff of 404 Media learned the name “Luigi Mangione” and sprang into action. This ritual is extremely familiar to journalists who cover mass shootings, but has now become familiar to anyone following a news story that has captured this much attention. We have a name. Now: Who is this person? Why did they do what they did?
In an incredibly fractured internet where there is rarely a single story everyone is talking about and where it is impossible to hold anyone’s attention for more than a few minutes at a time, the release of the name Luigi Mangione sparked the type of content feeding frenzy normally only seen with mass tragedy and reminiscent of an earlier internet age when people were mostly paying attention to the same thing at once.
The ritual goes like this. You have a name. You try to cross-reference officially known details released by authorities with what you are able to glean online. Have you identified the correct “Luigi Mangione?” Then you begin Googling and screenshotting his accounts before some of them are inevitably taken down. Did he have a Twitter account? An Instagram? A Facebook? A Substack? Did he post about the [tragedy and/or news event]? What were his hobbies and beliefs? Who did he follow? What did he post? Did what he posted align with the version of a person who would do [a thing like this]? What are his politics? Is he gay or straight or trans or religious or rich or poor? Does he seem mentally ill? Is there a manifesto?
Then you try to find out who knew him. Can you reach his family? His friends? A colleague or ex-colleague? How about someone who went to high school with him and hasn’t talked to him in a decade? A neighbor? Good enough. Close enough.
Then comes second-level searching based on what you found in the original sweep. You stop searching his name and start searching for usernames you identified from his other accounts. You search his email address. You scan through his Goodreads account. What sort of information was this person consuming? What does it tell us about him?
Then you write an article. “Here’s everything we know about [shooter].” Or “[Shooter] listened to problematic podcasts.” Or whatever. The Google News algorithm either picks it up, or it doesn’t. It gets upvoted on Reddit or it doesn’t. It gets retweeted or it doesn’t. Your editor is happy, because you have found an angle. You have “hit the news.” You have “added to the conversation.”
Monday night, NBC News published an article with the headline “‘Extremely Ironic’: Suspect in UnitedHealthcare CEO Slaying Played Video Game Killer, Friend Recalls.” This article is currently all over every single one of my social media feeds, because it is emblematic of the type of research I described above. It is a very bad article whose main reason for existing is the fact that it contains a morsel of “new” “information,” except the “information” in this case is that Luigi Mangione played the video game Among Us at some point in college.