It’s that time again! We’re planning our latest FOIA Forum, a live, interactive session of an hour or more where Joseph and Jason will teach you how to pry records from government agencies through public records requests. We’re planning this for Thursday, January 23 at 1 PM Eastern. Add it to your calendar!
So, what’s the FOIA Forum? We'll share our screen and show you specifically how we file FOIA requests. We take questions from the chat and incorporate those into our FOIAs in real-time. We’ll also check on some requests we filed last time. This time we're particularly focusing on how to use FOIA in the new Trump administration. We'll talk all about local, state, and federal agencies; tricks for getting the records you want; requesting things you might not have thought of; and how to appeal when the federal government tries to withhold those records.
If this will be your first FOIA Forum, don’t worry, we will do a quick primer on how to file requests (although if you do want to watch our previous FOIA Forums, the video archive is here). We really love talking directly to our community about something we are obsessed with (getting documents from governments) and showing other people how to do it too.
Paid subscribers can already find the link to join the livestream below. We'll also send out a reminder a day or so before. Not a subscriber yet? Sign up now here in time to join.
We've got a bunch of FOIAs that we need to file and are keen to hear from you all on what you want to see more of. Most of all, we want to teach you how to make your own too. Please consider coming along!
Meta is deleting links to Pixelfed, a decentralized Instagram competitor. On Facebook, the company is labeling links to Pixelfed.social as “spam” and deleting them immediately.
Pixelfed is an open-source, community-funded, decentralized image sharing platform that runs on ActivityPub, the same technology that supports Mastodon and other federated services. Pixelfed.social, the largest Pixelfed server, launched in 2018 but has gained renewed attention over the last week.
Bluesky user AJ Sadauskas originally posted that links to Pixelfed were being deleted by Meta; 404 Media then also tried to post a link to Pixelfed on Facebook. It was immediately deleted.
Pixelfed has experienced a surge in user signups in recent days, after Meta announced that it would loosen its rules to allow users to call LGBTQ+ people “mentally ill” amid a host of other changes that shift the company overtly to the right. Meta and Instagram have also leaned heavily into AI-generated content. Pixelfed announced earlier Monday that it is launching an iOS app later this week.
Pixelfed said Sunday it is “seeing unprecedented levels of traffic to pixelfed.social.”
Over the weekend, Daniel Supernault, the creator of Pixelfed, published a “declaration of fundamental rights and principles for ethical digital platforms, ensuring privacy, dignity, and fairness in online spaces.” The open source charter, which has been adopted by Pixelfed and can be adopted by other platforms, contains sections titled “right to privacy,” “freedom from surveillance,” “safeguards against hate speech,” “strong protections for vulnerable communities,” and “data portability and user agency.”
“Pixelfed is a lot of things, but one thing it is not, is an opportunity for VC or others to ruin the vibe. I've turned down VC funding and will not inject advertising of any form into the project,” Supernault wrote on Mastodon. “Pixelfed is for the people, period.”
Mikey Shulman, the CEO and founder of the AI music generator company Suno AI, thinks people don’t enjoy making music.
“We didn’t just want to build a company that makes the current crop of creators 10 percent faster or makes it 10 percent easier to make music. If you want to impact the way a billion people experience music you have to build something for a billion people,” Shulman said on the 20VC podcast. “And so that is first and foremost giving everybody the joys of creating music and this is a huge departure from how it is now. It’s not really enjoyable to make music now […] It takes a lot of time, it takes a lot of practice, you need to get really good at an instrument or really good at a piece of production software. I think the majority of people don’t enjoy the majority of the time they spend making music.”
Suno AI works like other popular generative AI tools, allowing users to generate music by writing text prompts describing the kind of music they want to hear. Also like many other generative AI tools, Suno was trained on heaps of copyrighted music fed into its training dataset without consent, a practice for which Suno is currently being sued by the recording industry.
In the interview, Shulman says he’s disappointed that the recording industry is suing his company because he believes Suno and other similar AI music generators will ultimately allow more people to make and enjoy music, which will only grow the audience and industry, benefiting everyone. That may end up being true, and could be compared to the history of electronic music, digital production tools, or any other technology that allowed more people to make more music.
However, the notion that “the majority of people don’t enjoy the majority of the time they spend making music” betrays a fundamental misunderstanding of music, why people make art, become artists, and the basic human practice of skill building and mastery.
Music is a form of creative expression that’s as old as humanity itself and exists in every culture. Babies will “make music” by clapping their hands and smashing blocks together long before they can talk, and they don’t find that frustrating.
It’s true that becoming very good at making music takes time. Picking up a guitar for the first time does not immediately produce the joy of perfectly executing a sick guitar solo. You have to start from zero, maybe learn some theory, and build the muscle memory and calluses on your fingers. Some people enjoy this slow process of getting a little better over time and become musicians. Some people don’t and instead spend their time becoming good at blogging, carpentry, programming, cutting hair, etc.
The interviewer, Harry Stebbings, interjects while Shulman is saying that making music isn’t enjoyable, and compares it to running, another obviously challenging pursuit that many people enjoy getting better at over time.
“Most people drop out of that pursuit because it’s hard, and so I think that the people you know that run, this is a highly biased selection of the population that fell in love with it,” Shulman said.
It’s funny and frustrating that Shulman can’t (or pretends he can’t) connect the dots and understand that the process of learning and challenging yourself is part of what makes music inherently appealing. During the interview, he repeatedly says that Suno can grow the music industry to be as big as the video game industry by making it more accessible. This, of course, ignores the fact that video games are designed to be challenging, that the most popular games in the world are incredibly competitive and difficult to master, and that most video games are essentially the process of slowly getting better at a difficult task.
This is not a surprising position for the CEO of a generative AI company to take. It is very possible that generative AI will become a more popular way of producing images, music, and text in the future. We report on how those AI-generated outputs are flooding the internet already, though in most cases that output is derided as “slop” because it’s low quality and annoying to users who find it increasingly difficult to find valuable, human-made content on the internet. Pretending that typing a text prompt into Suno makes one a musician inflates the worth of that output and the company.
“Every single person at Suno has an incredible deep love and respect for music,” Shulman said later in the interview.
Ali Riley, a professional soccer player for the Angel City Football Club, lost her home in Los Angeles’s Palisades Fire. The last image she saw of her house standing was an Amazon package delivery confirmation photo, sent after the neighborhood’s mandatory evacuation order.
Friday morning, Riley posted a screenshot of an Amazon delivery confirmation photo. The photo showed an Amazon box on a bench in front of a glass door.
“Last photo we have of the house standing is from this #amazon delivery made after the mandatory evacuation orders,” Riley wrote in the post. Riley’s home in the Pacific Palisades was included in the first evacuation order issued on January 7, about two hours after the fire started burning. “Bewildering! Sincerely hope this driver is ok.”
Amazon drivers have continued delivering packages in some areas of Los Angeles affected by ongoing wildfires, according to numerous posts by drivers on social media and corroborated by the company’s website.
Since Tuesday, uncontrolled fires in the northern parts of Los Angeles have burned down over 12,000 buildings, and thousands of people have lost their homes.
Amazon closed the DLX5 warehouse in Glendale on Wednesday, the day after the fires broke out. But Amazon’s distributed delivery system has led to some confusion. Amazon uses a network of “Delivery Service Partners,” which are nominally independent businesses that hire delivery drivers. Amazon also delivers packages in Los Angeles with a system called Flex, which functions sort of like DoorDash or Uber in that drivers use their personal vehicles to deliver packages.
An Amazon Flex driver posted that they had been instructed to deliver close to the fires on Thursday. The screenshot of their route map showed a road in Westgate Heights, in an area that is now under an evacuation warning and is immediately next to an area under a mandatory evacuation order. A photo they shared taken in their warehouse parking lot showed a massive plume of orange smoke. They said in a comment that they had refused to deliver the packages.
While some drivers told 404 Media or posted on driver subreddits and Discords that their routes had been canceled, some said they were given delivery routes close to fires or in areas that were eventually evacuated.
Multiple drivers wrote that the DLX5 warehouse in Glendale, for example, had closed on Wednesday. “I was still scheduled to work on the 8th,” one driver wrote to 404 Media in an online chat. “I didn’t hear much from management until 30 minutes before our clock in time, that the station had closed due to the fires.”
The driver posted a photo of a brown smoke-darkened sky above the parking lot of their warehouse.
Another Flex driver posted a screenshot of a delivery cancellation notice they got from VAX5, a warehouse in LA’s Boyle Heights neighborhood.
“The block you’re scheduled for on 09 January 2025 at 3:30 am at VAX5 has been canceled. Please don’t come to the delivery station. This cancellation is due to circumstances beyond your control. Your standing won’t be impacted and you will still be paid for the block.”
Multiple drivers on the Amazon delivery subreddit, r/AmazonDSPDrivers, have written that despite nearby fires and evacuation zones, their work days have gone on as normal over the last week.
“I deliver east in LA county and today was just another day on the job,” one user wrote in a comment on a post asking how drivers in the state were dealing with the fires. “Not really that bad out here tho[ugh], but one of our delivery areas is close to level 2 evacuation warning.”
Another driver wrote, “We cover the Burbank/Glendale area, still working. A lot of businesses are closed. Some unprecedented traffic. We were just given N95 masks for mild ashes falling.” Glendale sits just west of the Eaton fire, which is the second most destructive fire in the state.
A third driver in Santa Monica, about 20 minutes away from the Palisades, wrote last Wednesday that their workload had been reduced because of the fires. They posted a screenshot of a route with 192 packages. “I honestly thought they’d send us home since we deliver close to the fires but no they just gave us masks to wear,” the driver wrote.
Delivering in wildfire conditions can be dangerous even if you aren’t close to the source of the fire itself. In 2023, New York City was enveloped in smoke from Canadian wildfires, and the city’s air quality was categorized as “hazardous.” Delivery drivers at the time said they had spent their whole workday coughing. As of Sunday, Los Angeles’ air quality was “poor.”
The driver subreddits are also full of people discussing whether they would get paid for canceled routes, and screenshots of drivers talking to Amazon support. In many cases, Amazon appears to be paying drivers for routes canceled because of the fires.
Amazon spokesperson Montana MacLachlan told 404 Media in a statement that the company was supplying drivers with N95 masks and was monitoring the air quality in the area.
“If [the air quality index] is over a certain threshold for extended timeframes as defined by Cal OSHA, we have mechanisms in place to reduce time on the road for drivers,” MacLachlan said. “If it’s still deemed safe to be on the road, we suggest DSPs [delivery service partners] advise their drivers to keep vehicle windows closed and to run the A/C on high with air recirculating, out of an abundance of caution.”
In a blog post written two days after the fires began burning, the company wrote that its customers would likely experience delays due to the “temporary closing of some Amazon facilities,” and that it would fulfill their orders “when it’s safe to do so from outside the affected region…Our top priority is ensuring the safety of our employees and partners.”
MacLachlan said Amazon had instructed drivers not to make deliveries in mandatory evacuation zones. “Safety is our utmost priority and drivers are encouraged and instructed to never make deliveries if they feel unsafe, and they will never be penalized for it,” MacLachlan said. She also said the company was investigating Riley’s post about the Amazon package.
“We’re looking into the details of this delivery,” MacLachlan said. “However, drivers have been instructed to not deliver in evacuation zones, or areas closed to public access. And if a driver arrives at a delivery location and the conditions are not safe to make a delivery, they are not expected to do so, and the driver’s performance will not be impacted.”
A hacker compromised an administrative account on the website for popular game Path of Exile 2, which allowed them to reset the passwords on dozens of players’ accounts, according to comments from developer Grinding Gear Games (GGG) made during a podcast on Sunday. This access would have given the hacker the ability to steal powerful and rare items from those players, with some players spending hundreds of hours grinding for valuable in-game currency.
The news comes after a wave of Path of Exile 2 players complained on the game’s forums and social media about being hacked and their inventories emptied. The comments also show how the hacker compromised the account shortly before the game’s launch, seemingly lying in wait for players to build up their stashes of items before pulling off their heist.
“We totally fucked up here,” Path of Exile 2 game director Jonathan Rogers said during a podcast recording with action roleplaying game (ARPG) content creators GhazzyTV and Darth Microtransaction.
Rogers said the hack started with the compromise of a Steam account. That Steam account was linked to an administrative account on Path of Exile 2’s website, he said. This gave the hacker the ability to do things like reset players’ passwords, meaning they could then log into the game as those players. “Effectively what they had access to was the same stuff that customer service had access to,” Rogers said.
Ordinarily, whenever a member of Path of Exile 2’s support staff makes a change, that event is added to a list for potential later auditing. But when it came to resetting passwords, a bug meant that change was saved as a “note” and not an event, Rogers said. The hacker was then able to delete the note saying a password had been changed, in an apparent attempt to cover their tracks. Because of this, it wasn’t immediately obvious to GGG what was happening with these account compromises, Rogers said.
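The flaw Rogers describes can be sketched in a few lines. Everything below, including the class and method names, is a hypothetical illustration of the pattern, not GGG's actual code: a password reset recorded as a deletable "note" leaves no trace once the attacker removes it, while an append-only audit event would survive.

```python
class SupportLog:
    """Toy model of the flaw: support 'notes' are deletable,
    while audit 'events' are append-only."""

    def __init__(self):
        self.notes = {}    # mutable, deletable records, keyed by note id
        self.events = []   # append-only audit trail

    def reset_password_buggy(self, account: str) -> int:
        # The bug: the reset is recorded only as a deletable note.
        note_id = len(self.notes)
        self.notes[note_id] = f"password reset: {account}"
        return note_id

    def reset_password_fixed(self, account: str) -> None:
        # The fix: record the reset as an immutable audit event.
        self.events.append(f"password reset: {account}")

    def delete_note(self, note_id: int) -> None:
        del self.notes[note_id]


log = SupportLog()
nid = log.reset_password_buggy("victim")
log.delete_note(nid)                 # attacker erases the only trace
log.reset_password_fixed("victim")   # this record cannot be removed
```

Under the buggy path, the log ends up empty; under the fixed path, the event remains for auditors to find.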
💡
Do you know anything else about this hack? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +44 20 8133 5190. Otherwise, send me an email at [email protected].
“66 notes were deleted, so that would imply that 66 accounts were compromised,” Rogers said, though he added the caveat that GGG only keeps logs for 30 days. Interestingly, the compromise “was all prelaunch of POE2,” Rogers said, meaning that the hacker gained access before the game was even available to the public.
Path of Exile 2 has been in the news lately amid mounting evidence that Elon Musk, who presents himself as a high-level ARPG player, has likely been cheating in the game.
Path of Exile 2 launched in early access in November and has remained one of the most popular games on Steam, reaching between 250,000 and 290,000 players over the past week, according to data on Steam player count site SteamDB. In it, players command a variety of classes like Sorceress, Warrior, or Witch, and trawl through dungeons looking for ever more powerful loot, much in the style of other ARPGs like Diablo.
Path of Exile 2 differs slightly in that it has a much stronger emphasis on trading with other players, which is essential for making a character stronger unless players deliberately avoid trading for the added challenge. This trade is facilitated by the official Path of Exile 2 website.
Players typically trade items for a rare consumable called a Divine Orb which can further improve their gear, making it the de facto currency of the Path of Exile 2 economy. A side effect is that many websites exist where people can pay real money for Divine Orbs, which they then use to trade for gear on the Path of Exile 2 trade site.
Multiple Path of Exile 2 players have recently complained of hackers breaking into their accounts and emptying their stashes of Divine Orbs. “My exalted and divine all gone,” one person wrote on the Path of Exile 2 forum in December, with Exalted Orbs being another in-game consumable.
“We totally fucked up here.”
In other cases hackers have stolen gear from players. In one case, a hacker stole a particular ring. That player then found what they said was the exact same ring being sold on the Path of Exile 2 trade website, according to a post on the Path of Exile 2 subreddit. That post has since been deleted by the subreddit moderators, and moderators have also deleted similar posts on the official forum. In some cases these posts named what victims believed was the hacker’s account.
Rogers said GGG is immediately adding two-factor authentication to all of its support accounts. “You can bet on that,” he said. Rogers said he also wants to introduce two-factor authentication for player accounts, but that comes with the additional complexity of implementing ways for players to recover their account when they inevitably lose that second factor, such as a backup code or phone number.
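For context, the second factor Rogers describes is most commonly a time-based one-time password (TOTP, standardized in RFC 6238). The whole mechanism fits in a few lines of standard-library Python; this is a generic illustration of the standard, not GGG's implementation:

```python
import hashlib
import hmac
import struct
import time


def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the big-endian counter, then dynamic truncation (RFC 4226)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def totp(secret: bytes, at=None, step: int = 30) -> str:
    # Time-based counter: number of 30-second steps since the Unix epoch (RFC 6238)
    counter = int((time.time() if at is None else at) // step)
    return hotp(secret, counter)
```

The account-recovery complexity Rogers mentions comes from everything outside this function: backup codes, phone numbers, and support flows for users who lose the device holding the secret.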
GGG did not immediately respond to a request for comment.
This week, it’s time to demand a new planet. Don’t we deserve it? Haven’t we been good? Fortunately, we may be on the cusp of finally discovering whether the solar system has, indeed, been hiding a massive world up its sleeve. Can you imagine the fight over naming this world, if it actually is discovered? I’m already exhausted. Let’s just skip the fuss and call it Becky.
Then, we’ll hang around the outer system for a while to check in on Pluto and Charon. How did they meet? Violently, it turns out! Next, scientists confirm that saber teeth are extremely efficient at converting living things into dead things. Last, meet Punk and Emo, founding members of the mollusc underground. It’s a week of deep space and deep time; enjoy the ride.
For nearly a decade, scientists have speculated that an undiscovered giant planet lurks in the distant reaches of the solar system. The existence of this unconfirmed “Planet X” or “Planet Nine” could explain strange observations of objects far beyond Neptune, known as trans-Neptunian objects (TNOs).
These TNOs appear to be being gravitationally influenced by some unknown entity, though there is a lot of debate about the origin of the anomalies—or whether they are “real” at all. Planet X is one popular hypothesis, but scientists have also speculated that the anomalies could point to an expansive disk of smaller objects, or even a primordial black hole. The effects may also just be a temporary coincidence that does not require the invocation of some hidden hulking entity.
To help constrain these possibilities, scientists have presented new predictions about Planet X, assuming it exists, in part by expanding the sample of TNOs from 11 objects to 51. The results suggest that a hypothetical Planet X would be about 4.4 times as massive as Earth, and occupy an orbit about 300 times farther from the Sun than Earth.
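For a sense of how distant that orbit is, Kepler's third law (for a small body orbiting the Sun, the period in years squared equals the semi-major axis in AU cubed) turns the estimate into an orbital period. A quick back-of-the-envelope sketch:

```python
# Kepler's third law for a body orbiting the Sun: P [years]^2 = a [AU]^3,
# so P = a ** 1.5. The 300 AU figure comes from the study's estimate.
a_au = 300  # estimated orbital distance, ~300x the Earth-Sun distance
period_years = a_au ** 1.5
print(f"One Planet X year ≈ {period_years:,.0f} Earth years")  # ≈ 5,196
```

In other words, a single orbit of this hypothetical planet would take longer than all of recorded human history.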
Most importantly, the study’s projected orbit places Planet X right into the sights of Vera C. Rubin Observatory’s Legacy Survey of Space and Time (LSST), a major new astronomical facility in Chile. LSST is expected to begin operating later this year, and it will be especially adept at illuminating the “here be space dragons” parts of our solar system map.
“Nearly all of the parameter space for the unseen planet proposed here falls within LSST’s field of view and detection limits, so if such a planet exists, it is likely to be discovered early on in the survey,” said researchers led by Amir Siraj of Princeton University. “LSST will simultaneously reveal whether the observed clustering of distant TNOs…is real, an observational selection effect, or a statistical fluke, given the large number of expected TNO discoveries.”
In other words, we may genuinely be on the cusp of adding a new planet to our solar family—or, perhaps, learning that Planet X was just an astronomical mirage. LSST is poised to answer the riddle, one way or another.
In addition to the exciting prospect, the new study offers other tantalizing predictions. The team found that the planet’s projected orbit is probably aligned with the plane of the solar system, a result that contrasts with past studies that predicted the planet would orbit at an angle. The angle of the orbit has implications for the origins of the planet; a world aligned to the plane of the solar system is more likely to be a homegrown member of our solar family, whereas a planet with a more inclined orbit could have been gravitationally captured by the Sun after making an interstellar journey from its native star system.
Look, we’re living through an overwhelming time of climate disasters, political strife, and obscene inequities. I really think we deserve a new planet, as a treat. I’ll even take a primordial black hole, if that’s what’s on offer. Given that LSST is not set to start running until the back-end of 2025, it will probably be at least a year before the existence of a planet is confirmed or refuted. But if anyone starts a betting market on this long-sought mystery, put me down for Planet X.
Speaking of TNOs, let’s talk about the most famous of them all: Pluto. This far-flung world was the OG Planet Nine before it was officially downgraded to a dwarf planet in 2006, a decision that ignited an astronomical culture war. But though Pluto and its moon Charon aren’t big enough to count as planets, they are giants for TNOs; indeed, the Pluto-Charon system is the largest binary in the known TNO population. (Pluto is about two thirds the size of Earth’s Moon, and Charon is about half the size of Pluto.)
Scientists have long suspected that the system formed in the wake of a collision between two icy bodies billions of years ago, but the dynamics behind this event have defied easy explanation.
Now, scientists have developed a new formation model for this system that they call the “kiss-and-capture” regime. In this scenario, the two parent bodies of Pluto-Charon collided and then kind of just merged together for about 10 to 15 hours, before separating into the distinct bodies we see today.
“Kiss-and-capture leaves the bodies mostly intact; however, it does result in the resurfacing of Charon and a large portion of Pluto,” said researchers led by Adeene Denton of the University of Arizona. The scenario provides “a new foundation for the accumulation of geological features observed today, including Charon’s widespread fracture network and Pluto’s ancient ridge–trough system, which reflects early and widespread extension.”
Call me old-fashioned, but I prefer when kisses keep the bodies mostly intact. Given that Pluto has a giant heart-shaped region on its surface, this binary is really shaping up to be the most romantically coded system in the solar system.
You don’t need anyone to tell you that saber teeth are rad. They are deadly weapons that grow out of skulls. The allure is self-evident. But just in case you wanted empirical proof to back it up, scientists have now demonstrated that “extreme saber teeth” are functionally optimal for killing bites, which explains why they have independently evolved at least five times in mammals and mammal ancestors (including gorgonopsians).
To assess the advantages of saber teeth versus other canine morphologies, researchers examined 95 teeth from carnivorous mammals, including 25 from saber-toothed animals like Smilodon, Homotherium, and Thylacosmilus. The team concluded that saber teeth “optimize puncture performance at the expense of breakage resistance,” meaning that these dental daggers evolved to deliver swift death.
Predatory scenarios for saber-toothed animals “favor a killing bite through penetration causing tissue damage and blood loss over the suffocation through clamp-and-hold bite of conical-toothed pantherine felids,” such as snow leopards, said researchers led by Tahlia Pollock of the University of Bristol.
The most recent saber-toothed cat, Smilodon, went extinct only 10,000 years ago, so our ancestors would have encountered it. In fact, saber-toothed cats may have occasionally preyed on humans. But those iconic canines are no longer spilling blood and severing arteries out there in the wild anywhere, suggesting that “the niche(s) they once occupied do not exist in the modern context,” according to the study.
It’s bittersweet to live in an era devoid of saber teeth. While I wouldn’t want to see these fatal fangs up close, the world is undoubtedly duller without them.
A nice bonus of discovering a new species is that you typically get to name it. Scientists have been having fun with this responsibility for decades, which is why we have spiders called Hotwheels sisyphus, fungus called Spongiforma squarepantsii, and wasps called Aha ha.
Now, scientists have continued this tradition with two new mollusc species identified from fossils dating back 430 million years. Everyone, meet Punk (Punk ferox) and Emo (Emo vorticaudum).
Punk is named for the “fancied resemblance of the spicule array to the spiked hairstyles associated with the punk rock movement” paired with ferox (Latin) meaning “wild, bold, defiant,” said researchers led by Mark Sutton of Imperial College London.
Emo is named “after the emo musical genre related to punk rock, whose exponents canonically bear long ‘bangs’ or fringes” which is reminiscent of the fossilized mollusc’s exoskeleton, the team added. In addition, Emo’s “anterior valves” resemble “studded clothing.”
There you have it: mohawks, devilocks, studs, and other punk culture mainstays were pioneered by rabble-rousing molluscs all the way back in the Silurian period, long before animals ever walked—let alone crowd-surfed—on land.
Now all we need is to discover a new species of screeching weasel to really round out the punk biological kingdom.
Meta deleted nonbinary and trans themes for its Messenger app this week, around the same time that the company announced it would change its rules to allow users to declare that LGBTQ+ people are “mentally ill,” 404 Media has learned.
Meta’s Messenger app allows users to change the color scheme and design of their chat windows with different themes. For example, there is currently a “Squid Game” theme, a “Minecraft” theme, a “Basketball” theme, and a “Love” theme, among many others.
These themes regularly change, but for the last few years they have featured a “trans” theme and a “nonbinary” theme, which had color schemes that matched the trans pride flag and the non-binary pride flag. Meta did not respond to a request for comment about why the company removed these themes, but the change comes right as Mark Zuckerberg’s company is publicly and loudly shifting rightward to more closely align itself with the views of the incoming Donald Trump administration. 404 Media reported Thursday that many employees are protesting the anti-LGBTQ+ changes and that “it’s total chaos internally at Meta right now” because of the changes.
💡
Do you work at Meta? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 202 505 1702.
“This June and beyond, we want people to #ConnectWithPride because when we show up as the most authentic version of ourselves, we can truly connect with people,” the post announcing the trans theme originally said. “Starting today, in support of the LGBTQ+ community and allies, Messenger is launching new expression features and celebrating the artists and creators who not only developed them, but inspire us each and every day.”
This is Behind the Blog, where we share our behind-the-scenes thoughts about how a few of our top stories of the week came together. This week, we discuss weird fake furniture and shared reality (or the lack thereof).
SAM: It has been such a busy week I forgot it’s Friday until I woke up this morning, so I’ll be brief with a quick couple of thoughts about Google’s AI Overview, which I wrote about serving bizarre results that suggested people pass a Magic Wand vibrator to a child like a talking stick for counseling purposes.
Emanuel found the thread on Reddit where someone posted a screenshot of the result they got with the search term “magic wand pregnancy.” He was getting the same result as that person; I assume both he and OP were doing a lot of Googling about pregnancy related topics, as Emanuel’s wife just had a kid and this person was clearly looking for answers about what sorts of buzz buzz were safe to throw down with while pregnant. When I searched with the same term, I didn’t get that answer, and couldn’t replicate it. I could get the other one mentioned in the story—searching “what is a magic wand” showed me an AI Overview result about magicians’ “small sticks” (ouch) — but not the pregnancy one. I assumed, and mentioned in the story, that this is because of the aforementioned Googling about babies; Google personalizes search based on activity, and parents are a valuable market for advertisers.
Some of the world’s most popular apps are likely being co-opted by rogue members of the advertising industry to harvest sensitive location data on a massive scale, with that data ending up with a location data company whose subsidiary has previously sold global location data to US law enforcement.
The thousands of apps, included in hacked files from location data company Gravy Analytics, include everything from games like Candy Crush to dating apps like Tinder, to pregnancy tracking and religious prayer apps across both Android and iOS. Because much of the collection is occurring through the advertising ecosystem—not code developed by the app creators themselves—this data collection is likely happening both without users’ and even app developers’ knowledge.
“For the first time publicly, we seem to have proof that one of the largest data brokers selling to both commercial and government clients, appears to be acquiring their data from the online advertising ‘bid stream,’” rather than code embedded into the apps themselves, Zach Edwards, senior threat analyst at cybersecurity firm Silent Push, and who has followed the location data industry closely, tells 404 Media after reviewing some of the data.
The data provides a rare glimpse inside the world of real-time bidding (RTB). Historically, location data firms paid app developers to include bundles of code that collected the location data of their users. Many companies have turned instead to sourcing location information through the advertising ecosystem, where companies bid to place ads inside apps. But a side effect is that data brokers can listen in on that process, and harvest the location of people’s mobile phones.
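The bid-stream mechanism described above can be sketched in a few lines. The field names below follow the public OpenRTB spec (`device.geo`, `device.ifa`); the request itself, the values, and the app bundle are invented for illustration, and real bid requests carry far more fields:

```python
import json

# Illustrative OpenRTB-style bid request. Field names follow the public
# OpenRTB spec; all values here are invented for demonstration.
bid_request = json.dumps({
    "id": "abc123",
    "app": {"bundle": "com.example.weather"},
    "device": {
        "ifa": "38400000-8cf0-11bd-b23e-10b96e40000d",  # advertising identifier
        "geo": {"lat": 40.758, "lon": -73.9855, "type": 1},  # GPS-derived location
    },
})

def harvest_location(raw: str):
    """What a participant listening on the bid stream can read without
    ever winning (or even placing) a bid: the ad ID and the location."""
    req = json.loads(raw)
    device = req.get("device", {})
    geo = device.get("geo", {})
    return device.get("ifa"), geo.get("lat"), geo.get("lon")

print(harvest_location(bid_request))
```

A broker receiving millions of requests like this per day can simply log (ad ID, latitude, longitude, timestamp) tuples and reconstruct movement histories, which is the "listening in" the reporting describes.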
“This is a nightmare scenario for privacy because not only does this data breach contain data scraped from the RTB systems, but there's some company out there acting like a global honey badger, doing whatever it pleases with every piece of data that comes its way,” Edwards adds.
Meta employees are furious with the company’s newly announced content moderation changes that will allow users to say that LGBTQ+ people have “mental illness,” according to internal conversations obtained by 404 Media and interviews with five current employees. The changes were part of a larger shift Mark Zuckerberg announced Monday to do far less content moderation on Meta platforms.
“I am LGBT and Mentally Ill,” one post by an employee on an internal Meta platform called Workplace reads. “Just to let you know that I’ll be taking time out to look after my mental health.”
On Monday, Mark Zuckerberg announced that the company would be getting “back to our roots around free expression” to allow “more speech and fewer mistakes.” The company said “we’re getting rid of a number of restrictions on topics like immigration, gender identity, and gender that are the subject of frequent political discourse and debate.” A review of Meta’s official content moderation policies shows, specifically, that some of the only substantive changes to the policy were made to specifically allow for “allegations of mental illness or abnormality when based on gender or sexual orientation.” It has long been known that being LGBTQ+ is not a sign of “mental illness,” and the false idea that sexuality or gender identification is a mental illness has long been used to stigmatize and discriminate against LGBTQ+ people.
There’s a video going viral this week of the Hollywood sign in Los Angeles with a wildfire raging behind it, letters glowing in the blaze. It’s a powerful scene, up there with the burning McDonald’s sign in imagery that’s come out of this week’s devastating fires spreading across LA.
I've seen several people sharing this same video with shades of shock and heartbreak. But it’s AI-generated. According to a Community Note on one of the posts, the Hollywood sign livestreams showed the sign was fine when the video was posted; as of writing, the feeds are down, but Hollywoodsign.org, a website that runs a live webcam pointed at the sign, told fact-checking site Snopes "Griffith Park is temporarily closed as a safety precaution, but the Sign itself is not affected and is secure — and the cameras will slowly but surely come back up."
Another viral image of the Hollywood sign burning is also AI:
Then there’s X poster Kevin Dalton’s image, which he later admitted was made with X’s Grok generative AI tool “for now,” showing what I can only assume he imagines as “antifa” in all black descending on a burned-out neighborhood to loot it. “The remains of Pacific Palisades will get picked clean tonight,” he wrote. (Dalton’s been making AI paint him little fantasy pictures of Trump firing California governor Gavin Newsom, so this is a big week for him.)
People are also obsessively generating Grok images of Newsom fiddling in front of fires or saving goldfish (???).
The very real footage and images coming out of Southern California this week are so surreal they’re hard to believe, with miles of iconic coastline, whole neighborhoods, and massive swaths of the Pacific Palisades and LA’s east side turned to ash (and still burning as of writing).
Interestingly, a lot of this week’s news cycle has turned to blaming AI and its energy usage as contributing to climate change. But others are not wasting an opportunity for boosterism. In a stunning show of credulity, British-owned digital newspaper The Express ran a story with the headline “Five dead in LA fires as residents think AI tech could have prevented disaster” based on a quote from one evacuating 24-year-old they found who took the opportunity in front of a reporter to breathlessly shill for AI, as an AI industry worker himself. “[Los Angeles’] fire and police departments don’t invest in technology [sic] hopefully more people build AI robotics solutions for monitoring or help. Instead a lot of people in ai are building military solutions. Aka putting a gun on top of a robot dog,” Chevy Chase Canyon resident William Lee told The Express. “Robotics operated fire response systems. It costs $6-18k for AI humanoid robots. LAFD salary is approx $100k/yr… 3,500 firefighters. We can slowly integrate robotics to put less lives at risk, but also for assistance."
That guy was so close to saying something prescient it’s painful: Robot dogs are a stupid waste of taxpayer money, and not a hypothetical one, as LA approved $278,000 for a surveillance robot dog toy for the LAPD in 2023. But the Los Angeles Fire Department’s budget was cut by nearly $17.6 million this fiscal year, even as the city gave more money to the police department’s already massive budget: the LAPD received $2.14 billion for the 2025-26 fiscal year, an 8.1% increase.
“Humanoid robots” as an absurd proposition aside, I don’t want to write off all forms of new technology as useless in natural disasters. Machine learning and machine vision technology seem to show promise in helping detect, track, or prevent wildfires: Last week, University of California San Diego’s ALERTCalifornia camera network alerted fire officials to an anomaly spotted on video, and firefighters were reportedly able to contain the blaze to less than a quarter acre. But companies taking investment to “solve” wildfires are also profiting off of a crisis that’s only getting worse, with no promise that their solutions will improve the situation.
Overwhelmingly, AI is being crammed down the public’s throats as a tool for generating some of the dumbest bullshit imaginable. That includes misinformation like we’ve seen with these fires, but also bottomless ugliness, laughably terrible bots, sexual abuse and violence. And it’s sold to us as both our inevitable savior and the next world-ending existential crisis by people with billions earned on the theft of human creativity, and billions more yet to gain.
AI might help solve tough problems related to climate change and things like wildfires, water scarcity, and energy consumption. But in the meantime, data centers are projected to guzzle 6.6 billion cubic meters of water by 2027, in service of churning out sloppy, morbid fantasies about tragedies within tragedies.
In 2020, after walking by refrigerated trailers full of the bodies of people who died during the first wave of the COVID-19 pandemic one too many times, my fiancé and I decided that it would maybe be a good idea to get out of New York City for a while. Together with our dog, we spent months driving across the country and eventually made it to Los Angeles, where we intended to stay for two weeks. We arrived just in time for the worst COVID spike since the one we had just experienced in New York. It turned out we couldn’t and didn’t want to leave. Our two week stay has become five years.
While debating whether we were going to move to Los Angeles full time, my partner and I joked that we had to choose between the “fire coast” and the “water coast.” New York City had been getting pummeled by a series of tropical storms and downpours, and vast swaths of California were fighting some of the most devastating wildfires it had ever seen. We settled on the fire coast, mostly to try something new.
We have been very lucky, and very privileged. Our apartment is in Venice Beach, which is probably not going to burn down. This time, we will not lose our lives, our things, our memories. We had the money and the ability to evacuate from Los Angeles on Wednesday morning after it became clear to us that we should not stay. What is happening is a massive tragedy for the city of Los Angeles, the families who have lost their homes, businesses and schools.
I am writing this to try to understand my place in a truly horrifying event, and to try to understand how we are all supposed to process the ongoing slow- and fast-moving climate change-fueled disasters that we have all experienced, are experiencing, and will definitely experience in the future. My group chats and Instagram stories are full of my friends saying that they are fine, followed by stories and messages explaining that actually, they are not fine. Stories that start with “we’re safe, thank you for asking” have almost uniformly been followed with “circumstances have changed, we have evacuated Los Angeles.” Almost all of my friends in the city have now left their homes to go somewhere safer; some people I know have lost their homes.
I knew when I moved to Los Angeles that we would to some extent experience fires and earthquakes. I live in a “tsunami hazard zone.” I also know that there is no place that is safe from climate change and climate-fueled disaster, as we saw last year when parts of North Carolina that were considered to be “safer” from climate change were devastated by Hurricane Helene.
We are living in The Cool Zone, and, while I love my life, am very lucky, and have been less directly affected by COVID, political violence, war, and natural disasters than many people, I am starting to understand that maybe this is all taking a toll. Firefighters and people who have lost their homes are experiencing true hell. What I am experiencing is something more like the constant mundanity of dystopia that surrounds the direct horror but is decidedly also bad.
I knew it would be windy earlier this week because I check the surf forecast every day on an app called Surfline, which has cameras and weather monitoring up and down nearly every coast in the world. The Santa Ana winds—a powerful wind phenomenon I learned about only after moving to California—would be offshore, meaning they would blow from the land out to sea. This is somewhat rare in Los Angeles and also makes for very good, barreling waves. I was excited.
I had a busy day Tuesday and learned about the fire because the Surfline cameras near the fire were down. In fact, you can see what it looked like as the fires overtook the camera at Sunset Point here:
The camera livestream was replaced with a note saying “this camera is offline due to infrastructure issues caused by local wildfires.” The surf forecast did not mention anything about a fire.
I walked out to the beach and could see the mountains on fire, the smoke plumes blowing both out to sea and right over me. The ocean was indeed firing—meaning the waves were good—and lots of people were surfing. A few people were milling around the beach taking photos and videos of the fire like I was. By the time the sun started setting, there were huge crowds of people watching the fire. It was around this time that I realized I was having trouble breathing, my eyes were watering, and my throat was scratchy. My family locked ourselves into our bedroom with an air purifier running. Last week, we realized that we desperately needed to replace the filter, but we did not. A friend told us the air was better near them, so we went to their house for dinner.
While we were having dinner, the size of the fire doubled, and a second one broke out. Our phones blared emergency alerts. We downloaded Watch Duty, which is a nonprofit wildfire monitoring app. Most of the wildfire-monitoring cameras in the Pacific Palisades had been knocked offline; the ones in Santa Monica pointing towards the Palisades showed a raging fire.
Every few minutes the app sent us push notifications that the fire was rapidly expanding, that firefighters were overwhelmed, that evacuation orders had expanded and were beginning to creep toward our neighborhood. I opened Instagram and learned that Malibu’s Reel Inn, one of our favorite restaurants, had burned to the ground.
Apple Intelligence began summarizing all of the notifications I was getting from my various apps. “Multiple wildfires in Los Angeles, causing destruction and injuries,” from the neighborhood watch app Citizen, which I have only because of an article I did about the last time there was a fire in Pacific Palisades. Apple Intelligence’s summary of a group chat I’m in: “Saddened by situation; Instagram shared.” From a friend: "Wants to chat about existential questions." A summary from the LA Times: “Over 1,000 structures burned in LA Count wildfires; firefighter were overwhelmed.” From Nextdoor: “Restaurants destroyed.”
Earlier on Tuesday, I texted my mom “yes we are fine, it is very far away from us. It is many miles from us. We have an air purifier. It’s fine.” I began to tell people who asked that the problem for us was "just" the oppressive smoke, and the fact that we could not breathe. By the time we were going to bed, it became increasingly clear that it was not necessarily fine, and that it might be best if we left. I opened Bluesky and saw an image of a Cybertruck sitting in front of a burnt out mansion. A few posts later, I saw the same image but a Parental Advisory sticker had been photoshopped onto it. I clicked over to X and saw that people were spamming AI generated images of the fire.
We began wondering if we should drive toward cleaner air. We went home and tried to sleep. I woke up every hour because I was having trouble breathing. As the sun was supposed to be rising in the morning, it became clear that it was being hidden by thick clouds of smoke.
Within minutes of waking up, we knew that we should leave. That we would be leaving. I opened Airbnb and booked something. We do not have a “Go Bag,” but we did have time to pack. I aimlessly wandered around my apartment throwing things into bags and boxes, packing things that I did not need and leaving things that I should have brought. In the closet, I pushed aside our boxes of COVID tests to get to our box of N-95 masks. I packed a whole microphone rig because I need to record a podcast Friday.
I emailed the 404 Media customers who bought merch and told them it would be delayed because I had to leave my home and could not mail them. I canceled meetings and calls with sources who I wanted to talk to.
Our next-door neighbor texted us, saying that she would actually be able to make it to a meeting next week with our landlord about a shared beef we’re having with them. Originally she thought she would have to work during the time the meeting was scheduled. She works at a school in the Palisades. Her school burned down. So had her sister’s house. I saw my neighbor right before we left. I told her I would be back on Friday. I had a flashback to my last day in the VICE office in March 2020, when they sent us home for COVID. I told everyone I would see them in a week or two. Some of those people I never saw again.
A friend texted me to tell me that the place we had been on a beautiful hike a few weeks ago was on fire: “sad and glad we went,” he said. A friend in Richmond, Virginia texted to ask if I was OK. I told him yes but that it was very scary. I asked him how he was doing. He responded, “We had a bad ice storm this week and that caused a power outage at water treatment that then caused server crashes and electrical equipment to get flooded. The whole city has been without water since Monday.” He told me he was supposed to come to Los Angeles for work this weekend. He was canceling his flight.
A group chat asked me if I was OK. I told them that I did not want to be dramatic but that we were having a hard time but were ultimately safe. I explained some of what we had been doing and why. The chat responded saying that “it’s insane how you start this by saying it sounds more dramatic than it is, only to then describe multiple horrors. I am mostly just glad you are safe.”
We got in the car. We started driving. I watched a driverless Waymo navigate streets in which the traffic lights were out because the power was out. My fiancé took two work meetings on the road, tethered to her phone, our dog sitting on her lap. We stopped at a fast food drive through.
Once we were out of Los Angeles, I stopped at a Best Buy to get an air purifier. On my phone, I searched the reviews for the one they had on sale. I picked one out. The employee tried to sell me an extended warranty plan. I said no thank you, got back in the car, and kept driving away from the fire. I do not know when we will be able to go back.
When Google’s AI Overview search feature launched in May, it was generally regarded as a mess. It told people to eat glue and rocks, libeled newsworthy figures, plagiarized journalists’ work, and was so bad Google added a feature to turn it off entirely. In the nearly 10 months since AI Overview launched, it’s still getting things wrong, like telling people that vibrators can be used for children’s behavioral therapy.
As discovered by Reddit user clist186, searching for “magic wand pregnancy” returns a bizarre answer about creativity with children as a “fun and engaging” activity alongside an image of a Magic Wand vibrator, one of the most popular and universally recognized sex toys in the world:
“The Magic Wand tool is a creative way for parents to identify behavioral changes they want to make, including those related to pregnancy. It can be used to make assessment fun and engaging, especially for long-time WIC clients. Here's how the Magic Wand tool works: Parents describe what parenting challenges they would change by ‘waving a magic wand’. The responses of both parents and older children can be used to start discussions. The Magic Wand tool can be purchased online or at a local store.”
Making this even weirder, I don’t get the parenting-related AI Overview result when I search that term, but 404's Emanuel Maiberg (famously, a parent himself) does.
AI Overview, which is powered by Google’s Gemini model, provides links to where the information comes from as part of its results; in this case, it’s summarizing a document from the New Hampshire Department of Health and Human Services about a thought exercise where a therapist passes clients a “magic wand” that helps them imagine an ideal scenario. This isn’t referring to passing a Hitachi to a child, but the AI doesn’t know that.
In another result for the search term "what is a magic wand," AI Overview pairs a photo of a Magic Wand sex toy with a description of a magician’s trick:
“A magic wand is a small stick used by magicians to perform tricks and make magic happen. It's often short, black, and has a white tip. Magicians use magic wands as part of their misdirection and to make things happen like growing, vanishing, moving, or displaying a will of their own. For example, a classic magic trick involves making a bouquet of flowers appear from the wand's tip.”
This Overview result does eventually get around to describing the Magic Wand in question, though: “Magic wand may also refer to a brand of massager: Hitachi Magic Wand. A massaging device that was first listed for business use in 1968 and became available to the public in the 1970s. It's designed to relieve pain and tension, soothe sore muscles and nerves, and aid in rehabilitation after sports injuries.”
In a company blog about AI Overview, Google said the feature uses “multi-step reasoning capabilities” to “help with increasingly complex questions.” The search term “magic wand pregnancy” isn’t particularly complex; most people with basic reading comprehension skills would probably put it together that the term is looking for answers about using one of the world’s most popular sex toys while pregnant. But Gemini took the weirdest route possible instead, pulling from an obscure document about a talk therapy technique that happened to contain the phrases “magic wand” and “pregnancy.”
Searching with a full, natural language query — “can you use a magic wand while pregnant” — returns a more nuanced AI-generated response that considers the searcher might have several different kinds of wands in mind:
Last year, Reddit signed a $60 million per year contract with Google in exchange for licensing users’ content to train Google’s AI models. Adult content, including sex education and conversations about sex toys, is still allowed on Reddit, one of the few platforms that hasn’t banned sex entirely, and pregnancy-related questions are massively popular there. (There are dozens of questions specifically about using vibrators while pregnant.) It makes sense that phrasing the search term as a question rather than a set of keywords would turn up a better response from the AI: it’s what the AI was trained on.
Thankfully, most people searching “magic wand pregnancy” are probably able to use their human brains to deduce that they shouldn’t use a vibrator as a talking stick in group therapy with kids. But it’s yet another example of AI being shoved into every product and tool that adds more work, friction, and confusion to the experience of being online, instead of less — as tech companies constantly promise.
Google did not immediately respond to a request for comment.
In early December I got the kind of tip we’ve been getting a lot over the past year. A reader had noticed a post from someone on Reddit complaining about a very graphic sexual ad appearing in their Instagram Reels. I’ve seen a lot of ads for scams or shady dating sites recently, and some of them were pretty suggestive, to put it mildly, but the ad the person on Reddit complained about was straight up a close-up image of a vagina.
The reader who tipped 404 Media did exactly what I would have done, which is look up the advertiser in Facebook’s Ad Library, and found that the same advertiser was running around 800 ads across all of Meta’s platforms in November, the vast majority of which are just different close-up images of vaginas. When clicked, the ad takes users to a variety of sites for "confidential dating” or “hot dates” in your area. Facebook started to remove some of these ads on December 13, but at the time of writing, most of them were still undetected by its moderators according to the Ad Library.
Like I said, we get a lot of tips like this these days. We get so many, in fact, that we don’t write stories about them unless there’s something novel about them or something our readers need to know. Facebook taking money to put explicit porn in its ads, despite it being a clear violation of its own policies, is not new, but it is definitely a new low for the company and a clear indicator of Facebook’s “fuck it” approach to content moderation, and to moderation of its ads specifically.
We're back! And holy moly what a start to the year. We just published a bunch of stories. First, Jason talks about blowback inside Meta to its new board member, and Meta's subsequent censoring of those views. We also chat about those mad Meta AI profiles. After the break, Sam explains why Pornhub is blocked in most of the U.S. south. In the subscribers-only section, Joseph talks about why the government is planning to name one of its most important (and at risk) witnesses.
Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version too. It will also be in the show notes in your podcast player.
Some Motorola automated license plate reader surveillance cameras are live-streaming video and car data to the unsecured internet where anyone can watch and scrape them, a security researcher has found. In a proof-of-concept, a privacy advocate then developed a tool that automatically scans the exposed footage for license plates, and dumps that information into a spreadsheet, allowing someone to track the movements of others in real time.
Matt Brown of Brown Fine Security made a series of YouTube videos showing vulnerabilities in a Motorola Reaper HD ALPR that he bought on eBay. As we have reported previously, these ALPRs are deployed all over the United States by cities and police departments. Brown initially found that it is possible to view the video and data that these cameras are collecting if you join the private networks that they are operating on. But then he found that many of them are misconfigured to stream to the open internet rather than a private network.
“My initial videos were showing that if you’re on the same network, you can access the video stream without authentication,” Brown told 404 Media in a video chat. “But then I asked the question: What if somebody misconfigured this and instead of it being on a private network, some of these found their way onto the public internet?”
In his most recent video, Brown shows that many of these cameras are indeed misconfigured to stream both the video and the data they are collecting to the open internet, and that their IP addresses can be found using the Internet of Things search engine Censys. The streams can be watched without any sort of login.
In many cases, they are streaming color video as well as infrared black-and-white video of the streets they are surveilling, and are broadcasting that data, including license plate information, onto the internet in real time.
Will Freeman, the creator of DeFlock, an open-source map of ALPRs in the United States, said that people in the DeFlock community have found many ALPRs that are streaming to the open internet. Freeman built a proof of concept script that takes data from unencrypted Motorola ALPR streams, decodes that data, and adds timestamped information about specific car movements into a spreadsheet. A spreadsheet he sent me shows a car’s make, model, color, and license plate number associated with the specific time it drove past an unencrypted ALPR near Chicago. So far, roughly 170 unencrypted ALPR streams have been found.
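Freeman's actual script isn't public, but the general shape of such a proof of concept is easy to sketch. Everything here (the JSON wire format, the field names, the camera ID) is a hypothetical stand-in, not the real Motorola stream format:

```python
import csv
import io
import json

# Hypothetical detection records as they might arrive from an unencrypted
# ALPR stream. The real wire format and these field names are assumptions.
raw_detections = [
    '{"ts": "2025-01-08T14:03:11Z", "plate": "ABC1234", "make": "Toyota", "model": "Camry", "color": "gray"}',
    '{"ts": "2025-01-08T14:05:42Z", "plate": "XYZ9876", "make": "Ford", "model": "F-150", "color": "red"}',
]

def detections_to_csv(lines, camera_id="cam-001"):
    """Decode one camera's detection records and flatten them into CSV rows,
    pairing each plate with the camera that saw it and the timestamp."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["camera", "timestamp", "plate", "make", "model", "color"])
    for line in lines:
        d = json.loads(line)
        writer.writerow([camera_id, d["ts"], d["plate"], d["make"], d["model"], d["color"]])
    return buf.getvalue()

print(detections_to_csv(raw_detections))
```

Run against a handful of cameras at once, rows like these are exactly the "timestamped information about specific car movements" described above: sort the combined output by plate and you have a movement log.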
“Let’s say 10 of them are in a city at strategic locations. If you connect to all 10 of them, you’d be able to track regular movements of people,” Freeman said.
Freeman told 404 Media that this is more evidence that the proliferation of ALPRs around the United States and the world represents a significant privacy risk; he has been a strong advocate against the widespread adoption of ALPRs.
“I’ve always thought these things were concerning, but this just goes to show that law enforcement agencies and the companies that provide ALPRs are no different than any other data company and can’t be trusted with this information,” Freeman told 404 Media. “So when a police department says there’s nothing to worry about unless you’re a criminal, there definitely is. Here’s evidence of a ton of cameras operated by law enforcement freely streaming sensitive data they’re collecting on us. My hometown is mostly Motorola [ALPRs], so someone could simply write a script that maps vehicles to times and precise locations.”
A Motorola Solutions spokesperson told 404 Media that the company is working on a firmware update that “will introduce additional security hardening.”
“Motorola Solutions designs, develops and deploys our products to prioritize data security and protect the confidentiality, integrity and availability of data,” the spokesperson said. “The ReaperHD camera is a legacy device, sales of which were discontinued in June 2022. Findings in the recent YouTube videos do not pose a risk to customers using their devices in accordance with our recommended configurations. Some customer-modified network configurations potentially exposed certain IP addresses. We are working directly with these customers to restore their system configurations consistent with our recommendations and industry best practices. Our next firmware update will introduce additional security hardening.”
Brown said that, although not all Motorola ALPRs are streaming to the internet, the security problems he found are deeply concerning and it’s not likely that ALPR security is something that’s going to suddenly be fixed.
“Let’s say the police or Motorola were like ‘Oh crap, we shouldn’t have put those on the public internet.’ They can clean that up,” he said. “But you still have a super vulnerable device that if you gain access to their network you can see the data. When you deploy the technology into the field, attacks always get easier, they don’t get harder.”
Meta’s HR team is deleting internal employee criticism of new board member, UFC president and CEO Dana White, at the same time that CEO Mark Zuckerberg announced to the world that Meta will “get back to our roots around free expression,” 404 Media has learned. Some employee posts questioning why criticism of White is being deleted are also being deleted.
Monday, Zuckerberg made a post on a platform for Meta employees called Workplace announcing that Meta is adding Dana White, John Elkann, and Charlie Songhurst to the company’s board of directors (Zuckerberg’s post on Workplace was identical to his public announcement). Employee response to this was mixed, according to screenshots of the thread obtained by 404 Media. Some posted positive or joking comments: “Major W,” one employee posted. “We hire Connor [McGregor] next for after work sparring?,” another said. “Joe Rogan may be next,” a third said. A fourth simply said “LOL.”
💡
Do you work at Meta? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 202 505 1702.
But other employees criticized the decision and raised the point that there is video of White slapping his wife in a nightclub; White was not arrested and was not suspended from UFC for the domestic violence incident. McGregor, one of the most famous UFC fighters of all time, was held liable for sexual assault and was ordered by a civil court to pay $260,000 to a woman who accused him of raping her in 2018. McGregor is appealing the decision.
“Kind of disheartening to see people in the comments celebrating a man who is on video assaulting his wife and another who was recently convicted of rape,” one employee commented, referring to White and McGregor. “I can kind of excuse individuals for being unaware, but Meta surely did their due diligence on White and concluded that what he did is fine. I feel like I’m on another planet,” another employee commented. “We have completely lost the plot,” a third said.
Several posts critical of White were deleted by Meta’s “Internal Community Relations team” as violating a set of rules called the “Community Engagement Expectations,” which govern internal employee communications. In the thread, the Internal Community Relations team member explained why they were deleting content: “I’m posting a comment here with a reminder about the CEE, as multiple comments have been flagged by the community for review. It’s important that we maintain a respectful work environment where people can do their best work. We need to keep in mind that the CEE applies to how we communicate with and about members of our community—including members of our Board. Insulting, criticizing, or antagonizing our colleagues or Board members is not aligned with the CEE.” In 2022, Meta banned employees from discussing “very disruptive” topics.
Hackers claim to have compromised Gravy Analytics, the parent company of Venntel, which has sold masses of smartphone location data to the U.S. government. The hackers said they have stolen a massive amount of data, including customer lists, information on the broader industry, and even location data harvested from smartphones that shows people’s precise movements, and they are threatening to publish the data publicly.
The news is a crystallizing moment for the location data industry. For years, companies have harvested location information from smartphones, either through ordinary apps or the advertising ecosystem, and then built products based on that data or sold it to others. In many cases, those customers include the U.S. government, with arms of the military, DHS, the IRS, and FBI using it for various purposes. But collecting that data presents an attractive target to hackers.
“A location data broker like Gravy Analytics getting hacked is the nightmare scenario all privacy advocates have feared and warned about. The potential harms for individuals is haunting, and if all the bulk location data of Americans ends up being sold on underground markets, this will create countless deanonymization risks and tracking concerns for high risk individuals and organizations,” Zach Edwards, senior threat analyst at cybersecurity firm Silent Push, and who has followed the location data industry closely, told 404 Media. “This may be the first major breach of a bulk location data provider, but it won't be the last.”
People are using the popular AI video generator Runway to make real videos of murder look like they came from one of the animated Minions movies and upload them to social media platforms where they gain thousands of views before the platforms can detect and remove them. This AI editing method appears to make it harder for major platforms to moderate against infamously graphic videos which previously could only be found on the darkest corners of the internet.
The practice, which people have come to call “Minion Gore” or “Minion AI videos,” started gaining popularity in mid-December, and while 404 Media has seen social media platforms remove many of these videos, at the time of writing we’ve seen examples of extremely violent Minion Gore videos hosted on YouTube, TikTok, Instagram, and X, which were undetected until we contacted these platforms for comment.
Specifically, by comparing the Minion Gore edits to the original videos, I was able to verify that TikTok was hosting a Minionfied video of Ronnie McNutt, who livestreamed his suicide on Facebook in 2020, shooting himself in the head. Instagram is still hosting a Minionfied clip from the 2019 Christchurch mosque shooting in New Zealand, in which a man livestreamed himself killing 51 people. I’ve also seen other Minion Gore videos whose source materials I couldn’t locate, but which appear to include other public execution videos, war footage from the frontlines in Ukraine, and workplace accidents on construction sites.
The vast majority of these videos, including the Minion Gore videos of the Christchurch shooting and McNutt’s suicide, include a Runway watermark in the bottom right corner, indicating they were created on its platform. The videos appear to use the company’s Gen-3 “video to video” tool, which allows users to upload a video they can then modify with generative AI. I tested the free version of Runway’s video to video tool and was able to Minionify a video I uploaded to the platform by writing a text prompt asking Runway to “make the clip look like one of the Minions animated movies.”
Runway did not respond to a request for comment.
💡
Do you know anything else about these videos? I would love to hear from you. Using a non-work device, you can message me securely on Signal at emanuel.404. Otherwise, send me an email at [email protected].
I’ve seen several examples of TikTok removing Minion Gore videos before I reached out to the company for comment. For example, all the violent TikTok videos included in the Know Your Meme article about Minion Gore have already been removed. As the same Know Your Meme article notes, however, an early instance of the Minion Gore video of McNutt’s suicide gained over 250,000 views in just 10 days. I’ve also found another version of the same video reuploaded to TikTok in mid-December which wasn’t removed until I reached out to TikTok for comment on Tuesday.
TikTok told me it removes any content that violates its Community Guidelines, regardless of whether it was altered with AI. This, TikTok said, includes its policies prohibiting "hateful content as well as gory, gruesome, disturbing, or extremely violent content." TikTok also said that it has been proactively taking action to remove harmful AI-generated content that violates its policies, that it is continuously updating its detection rules for AI-generated content as the technology evolves, and that when made aware of a synthetic video clip that is spreading online and violates its policies, it creates detection rules to automatically catch and take action on similar versions of that content.
Major internet platforms create “hashes,” unique strings of letters and numbers that act as fingerprints for videos based on what they look like, for known videos that violate their policies. This allows platforms to automatically detect and remove those videos or prevent them from being uploaded in the first place. TikTok did not answer specific questions about whether Minion Gore edits of known violating videos would bypass this kind of automated moderation. In 2020, Sam and I showed that this type of automated moderation can be bypassed with even simple edits of hashed, violating videos.
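To illustrate why small edits can slip past this kind of matching while heavy transformations break it entirely, here is a toy sketch of perceptual (“average”) hashing, the general idea behind fingerprinting systems like PhotoDNA. This is a simplified illustration operating on an 8x8 grid of made-up grayscale values, not any platform’s actual algorithm.

```python
# Toy perceptual ("average") hash: each bit records whether a pixel is
# brighter than the frame's mean. Near-duplicate frames produce nearly
# identical fingerprints; heavily transformed frames do not.

def average_hash(pixels):
    """Compute a 64-bit fingerprint from an 8x8 grayscale frame."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits between two fingerprints."""
    return bin(h1 ^ h2).count("1")

# A known "frame" (synthetic gradient) and a lightly edited copy of it.
original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
edited = [row[:] for row in original]
edited[0][0] += 10  # small perturbation, like a minor crop or re-encode

h_orig = average_hash(original)
h_edit = average_hash(edited)

# Small edits barely move the hash, so a distance threshold catches
# near-duplicates...
assert hamming_distance(h_orig, h_edit) <= 2

# ...but a heavy transformation (like an AI re-render of the whole
# frame) produces a very different image, and the distance blows up.
transformed = [[255 - p for p in row] for row in original]
assert hamming_distance(h_orig, average_hash(transformed)) > 16
```

The core weakness is visible in the last assertion: once every pixel of a frame is regenerated, the fingerprint shares almost nothing with the original, so the hash database no longer matches.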
“In most cases, current hashing/fingerprinting are unable to reliably detect these variants,” Hany Farid, a professor at UC Berkeley and one of the world’s leading experts on digitally manipulated images and a developer of PhotoDNA, one of the most commonly used image identification and content filtering technologies, told me in an email. “Starting with the original violative content, it would be possible for the platforms to create these minion variations, hash/fingerprint them and add those signatures to the database. The efficacy of this approach would depend on the robustness of the hash algorithm and the ability to closely mimic the content being produced by others. And, of course, this would be a bit of a whack-a-mole problem as creators will replace minions with other cartoon characters.”
This, in fact, is already happening. I’ve seen a video of ISIS executions and the McNutt suicide posted to Twitter, which was also modified with Runway, but instead of turning the people in the video into Minions they were turned into Santa Claus. There are also several different Minion Gore videos of the same violent content, so in theory a hash of one version will not result in the automatic removal of another. Because Runway seemingly is not preventing people from using its tools to edit infamously violent videos, this creates a situation in which people can easily create infinite, slightly different versions of those videos and upload them across the internet.
YouTube acknowledged our request for comment but did not provide one in time for publication. Instagram and X did not respond to a request for comment.
Instagram has begun testing a feature in which Meta’s AI will automatically generate images of users in various situations and put them into that user’s feed. One Redditor posted over the weekend that they were scrolling through Instagram and were presented with an AI-generated slideshow of themselves standing in front of “an endless maze of mirrors,” for example.
“Used Meta AI to edit a selfie, now Instagram is using my face on ads targeted at me,” the person posted. The user was shown a slideshow of AI-generated images in which an AI version of himself is standing in front of an endless “mirror maze.” “Imagined for you: Mirror maze,” the location of the post reads.
“Imagine yourself reflecting on life in an endless maze of mirrors where you’re the main focus,” the caption of the AI images says. The Reddit user told 404 Media that at one point he had uploaded selfies of himself into Instagram’s “Imagine” feature, which is Meta’s AI image generation feature.
💡
Do you work at Meta? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 202 505 1702.
People on Reddit initially did not even believe that these were real, with people posting things like "it's a fake story," "I doubt that this is true," "this is a straight up lie lol," and "why would they do this?" The Redditor has repeatedly had to explain that, yes, this did happen. "I don’t really have a reason to fake this, I posted screenshots on another thread," he said. 404 Media sent the link to the Reddit post directly to Meta, which confirmed that it is real, but not an "ad."