Some of the world’s most popular apps are likely being co-opted by rogue members of the advertising industry to harvest sensitive location data on a massive scale, with that data ending up with a location data company whose subsidiary has previously sold global location data to US law enforcement.
The thousands of apps included in hacked files from location data company Gravy Analytics span everything from games like Candy Crush to dating apps like Tinder, to pregnancy tracking and religious prayer apps, across both Android and iOS. Because much of the collection is occurring through the advertising ecosystem—not code developed by the app creators themselves—this data collection is likely happening without the knowledge of users or even the app developers themselves.
“For the first time publicly, we seem to have proof that one of the largest data brokers selling to both commercial and government clients, appears to be acquiring their data from the online advertising ‘bid stream,’” rather than code embedded into the apps themselves, Zach Edwards, senior threat analyst at cybersecurity firm Silent Push who has followed the location data industry closely, tells 404 Media after reviewing some of the data.
The data provides a rare glimpse inside the world of real-time bidding (RTB). Historically, location data firms paid app developers to include bundles of code that collected the location data of their users. Many companies have turned instead to sourcing location information through the advertising ecosystem, where companies bid to place ads inside apps. But a side effect is that data brokers can listen in on that process and harvest the location of people's mobile phones.
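To make that concrete, here is a minimal, hypothetical sketch of what a listener on the bid stream could extract from a single bid request. The field names (app.bundle, device.ifa, device.geo) come from the public OpenRTB 2.x spec; the request itself and the code are illustrative only, not any broker's actual pipeline.

```python
# Illustrative only: parsing one OpenRTB-style bid request for location data.
# Field names follow the public OpenRTB 2.x spec; the request is made up.
import json

bid_request = json.loads("""
{
  "id": "example-request",
  "app": {"bundle": "com.example.game"},
  "device": {
    "ifa": "38400000-8cf0-11bd-b23e-10b96e40000d",
    "geo": {"lat": 34.0522, "lon": -118.2437, "type": 1}
  }
}
""")

device = bid_request.get("device", {})
geo = device.get("geo", {})
record = {
    "app": bid_request.get("app", {}).get("bundle"),  # which app served the ad slot
    "ad_id": device.get("ifa"),  # advertising ID, stable enough to track a device
    "lat": geo.get("lat"),       # geo.type 1 means GPS-derived, per the spec
    "lon": geo.get("lon"),
}
print(record)

# Every bid request a bidder receives can yield a row like this, whether or
# not they win (or ever intend to win) the auction. That is the "side effect."
```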
“This is a nightmare scenario for privacy because not only does this data breach contain data scraped from the RTB systems, but there's some company out there acting like a global honey badger, doing whatever it pleases with every piece of data that comes its way,” Edwards adds.
Meta employees are furious with the company’s newly announced content moderation changes that will allow users to say that LGBTQ+ people have “mental illness,” according to internal conversations obtained by 404 Media and interviews with five current employees. The changes were part of a larger shift Mark Zuckerberg announced Monday to do far less content moderation on Meta platforms.
“I am LGBT and Mentally Ill,” one post by an employee on an internal Meta platform called Workplace reads. “Just to let you know that I’ll be taking time out to look after my mental health.”
On Monday, Mark Zuckerberg announced that the company would be getting “back to our roots around free expression” to allow “more speech and fewer mistakes.” The company said “we’re getting rid of a number of restrictions on topics like immigration, gender identity, and gender that are the subject of frequent political discourse and debate.” A review of Meta’s official content moderation policies shows, specifically, that some of the only substantive changes to the policy were made to specifically allow for “allegations of mental illness or abnormality when based on gender or sexual orientation.” It has long been known that being LGBTQ+ is not a sign of “mental illness,” and the false idea that sexuality or gender identification is a mental illness has long been used to stigmatize and discriminate against LGBTQ+ people.
There’s a video going viral this week of the Hollywood sign in Los Angeles with a wildfire raging behind it, letters glowing in the blaze. It’s a powerful scene, up there with the burning McDonald’s sign in imagery that’s come out of this week’s devastating fires spreading across LA.
I've seen several people sharing this same video with shades of shock and heartbreak. But it’s AI-generated. When it was posted, a look at the Hollywood sign livestreams showed the sign was fine, according to a Community Note on one of the posts. As of writing, the feeds are down, but Hollywoodsign.org, a website that runs a live webcam pointed at the sign, told fact-checking site Snopes "Griffith Park is temporarily closed as a safety precaution, but the Sign itself is not affected and is secure — and the cameras will slowly but surely come back up."
Another viral image of the Hollywood sign burning is also AI:
Then there’s X poster Kevin Dalton’s image, which he later admitted was made with X’s Grok generative AI tool “for now,” showing what I can only assume he imagines as “antifa” in all black descending on a burned-out neighborhood to loot it. “The remains of Pacific Palisades will get picked clean tonight,” he wrote. (Dalton’s been making AI paint him little fantasy pictures of Trump firing California governor Gavin Newsom, so this is a big week for him.)
People are also obsessively generating Grok images of Newsom fiddling in front of fires or saving goldfish (???).
The very real footage and images coming out of Southern California this week are so surreal they’re hard to believe, with entire miles of iconic coastline, whole neighborhoods, and massive swaths of the Pacific Palisades and LA’s east side turned to ash (and still burning as of writing).
Interestingly, a lot of this week’s news cycle has turned to blaming AI and its energy usage as contributing to climate change. But others are not wasting an opportunity for boosterism. In a stunning show of credulity, British-owned digital newspaper The Express ran a story with the headline “Five dead in LA fires as residents think AI tech could have prevented disaster” based on a quote from one evacuating 24-year-old they found who took the opportunity in front of a reporter to breathlessly shill for AI, as an AI industry worker himself. “[Los Angeles’] fire and police departments don’t invest in technology [sic] hopefully more people build AI robotics solutions for monitoring or help. Instead a lot of people in ai are building military solutions. Aka putting a gun on top of a robot dog,” Chevy Chase Canyon resident William Lee told The Express. “Robotics operated fire response systems. It costs $6-18k for AI humanoid robots. LAFD salary is approx $100k/yr… 3,500 firefighters. We can slowly integrate robotics to put less lives at risk, but also for assistance."
That guy was so close to saying something prescient it’s painful: Robot dogs are a stupid waste of taxpayer money, and not a hypothetical one, as LA approved $278,000 for a surveillance robot dog toy for the LAPD in 2023. But the Los Angeles Fire Department’s budget was cut by nearly $17.6 million this fiscal year, while the city gave even more money to the police department’s already massive budget: the LAPD received a $2.14 billion budget for the 2025-26 fiscal year, an 8.1% increase.
“Humanoid robots” as an absurd proposition aside, I don’t want to write off all forms of new technology as useless in natural disasters. Machine learning and machine vision technology seem to show promise in helping detect, track, or prevent wildfires: Last week, University of California San Diego’s ALERTCalifornia camera network alerted fire officials to an anomaly spotted on video, and firefighters were reportedly able to contain the blaze to less than a quarter acre. But companies taking investment to “solve” wildfires are also profiting off of a crisis that’s only getting worse, with no promise that their solutions will improve the situation.
Overwhelmingly, AI is being crammed down the public’s throats as a tool for generating some of the dumbest bullshit imaginable. That includes misinformation like we’ve seen with these fires, but also bottomless ugliness, laughably terrible bots, sexual abuse and violence. And it’s sold to us as both our inevitable savior and the next world-ending existential crisis by people with billions earned on the theft of human creativity, and billions more yet to gain.
AI might help solve tough problems related to climate change and things like wildfires, water scarcity, and energy consumption. But in the meantime, data centers are projected to guzzle 6.6 billion cubic meters of water by 2027, in service of churning out sloppy, morbid fantasies about tragedies within tragedies.
In 2020, after walking by refrigerated trailers full of the bodies of people who died during the first wave of the COVID-19 pandemic one too many times, my fiancée and I decided that it would maybe be a good idea to get out of New York City for a while. Together with our dog, we spent months driving across the country and eventually made it to Los Angeles, where we intended to stay for two weeks. We arrived just in time for the worst COVID spike since the one we had just experienced in New York. It turned out we couldn’t and didn’t want to leave. Our two-week stay has become five years.
While debating whether we were going to move to Los Angeles full time, my partner and I joked that we had to choose between the “fire coast” and the “water coast.” New York City had been getting pummeled by a series of tropical storms and downpours, and vast swaths of California were burning in some of the most devastating wildfires the state had ever seen. We settled on the fire coast, mostly to try something new.
We have been very lucky, and very privileged. Our apartment is in Venice Beach, which is probably not going to burn down. This time, we will not lose our lives, our things, our memories. We had the money and the ability to evacuate from Los Angeles on Wednesday morning after it became clear to us that we should not stay. What is happening is a massive tragedy for the city of Los Angeles, the families who have lost their homes, businesses and schools.
I am writing this to try to understand my place in a truly horrifying event, and to try to understand how we are all supposed to process the ongoing slow- and fast-moving climate change-fueled disasters that we have all experienced, are experiencing, and will definitely experience in the future. My group chats and Instagram stories are full of my friends saying that they are fine, followed by stories and messages explaining that actually, they are not fine. Stories that start with “we’re safe, thank you for asking” have almost uniformly been followed with “circumstances have changed, we have evacuated Los Angeles.” Almost all of my friends in the city have now left their homes to go somewhere safer; some people I know have lost their homes.
I knew when I moved to Los Angeles that we would to some extent experience fires and earthquakes. I live in a “tsunami hazard zone.” I also know that there is no place that is safe from climate change and climate-fueled disaster, as we saw last year when parts of North Carolina that were considered to be “safer” from climate change were devastated by Hurricane Helene.
We are living in The Cool Zone, and, while I love my life, am very lucky, and have been less directly affected by COVID, political violence, war, and natural disasters than many people, I am starting to understand that maybe this is all taking a toll. Firefighters and people who have lost their homes are experiencing true hell. What I am experiencing is something more like the constant mundanity of dystopia that surrounds the direct horror but is decidedly also bad.
I knew it would be windy earlier this week because I check the surf forecast every day on an app called Surfline, which has cameras and weather monitoring up and down nearly every coast in the world. The Santa Ana winds—a powerful wind phenomenon I learned about only after moving to California—would be offshore, meaning they would blow from the land out to sea. This is somewhat rare in Los Angeles and also makes for very good, barreling waves. I was excited.
I had a busy day Tuesday and learned about the fire because the Surfline cameras near the fire were down. In fact, you can see what it looked like as the fires overtook the camera at Sunset Point here:
The camera livestream was replaced with a note saying “this camera is offline due to infrastructure issues caused by local wildfires.” The surf forecast did not mention anything about a fire.
I walked out to the beach and could see the mountains on fire, the smoke plumes blowing both out to sea and right over me. The ocean was indeed firing—meaning the waves were good—and lots of people were surfing. A few people were milling around the beach taking photos and videos of the fire like I was. By the time the sun started setting, there were huge crowds of people watching the fire. It was around this time that I realized I was having trouble breathing, my eyes were watering, and my throat was scratchy. My family locked ourselves into our bedroom with an air purifier running. Last week, we realized that we desperately needed to replace the filter, but we did not. A friend told us the air was better near them, so we went to their house for dinner.
While we were having dinner, the size of the fire doubled, and a second one broke out. Our phones blared emergency alerts. We downloaded Watch Duty, which is a nonprofit wildfire monitoring app. Most of the wildfire-monitoring cameras in the Pacific Palisades had been knocked offline; the ones in Santa Monica pointing towards the Palisades showed a raging fire.
Every few minutes the app sent us push notifications that the fire was rapidly expanding, that firefighters were overwhelmed, that evacuation orders had expanded and were beginning to creep toward our neighborhood. I opened Instagram and learned that Malibu’s Reel Inn, one of our favorite restaurants, had burned to the ground.
Apple Intelligence began summarizing all of the notifications I was getting from my various apps. “Multiple wildfires in Los Angeles, causing destruction and injuries,” from the neighborhood watch app Citizen, which I have only because of an article I did about the last time there was a fire in Pacific Palisades. Apple Intelligence’s summary of a group chat I’m in: “Saddened by situation; Instagram shared.” From a friend: "Wants to chat about existential questions." A summary from the LA Times: “Over 1,000 structures burned in LA County wildfires; firefighters were overwhelmed.” From Nextdoor: “Restaurants destroyed.”
Earlier on Tuesday, I texted my mom “yes we are fine, it is very far away from us. It is many miles from us. We have an air purifier. It’s fine.” I began to tell people who asked that the problem for us was "just" the oppressive smoke, and the fact that we could not breathe. By the time we were going to bed, it became increasingly clear that it was not necessarily fine, and that it might be best if we left. I opened Bluesky and saw an image of a Cybertruck sitting in front of a burnt out mansion. A few posts later, I saw the same image but a Parental Advisory sticker had been photoshopped onto it. I clicked over to X and saw that people were spamming AI generated images of the fire.
We began wondering if we should drive toward cleaner air. We went home and tried to sleep. I woke up every hour because I was having trouble breathing. When the sun was supposed to rise in the morning, it became clear that it was hidden behind thick clouds of smoke.
Within minutes of waking up, we knew that we should leave. That we would be leaving. I opened Airbnb and booked something. We do not have a “Go Bag,” but we did have time to pack. I aimlessly wandered around my apartment throwing things into bags and boxes, packing things that I did not need and leaving things that I should have brought. In the closet, I pushed aside our boxes of COVID tests to get to our box of N-95 masks. I packed a whole microphone rig because I need to record a podcast Friday.
I emailed the 404 Media customers who bought merch and told them it would be delayed because I had to leave my home and could not mail them. I canceled meetings and calls with sources who I wanted to talk to.
Our next-door neighbor texted us, saying that she would actually be able to make it to a meeting next week with our landlord about a shared beef we’re having with them. Originally she thought she would have to work during the time the meeting was scheduled. She works at a school in the Palisades. Her school burned down. So did her sister’s house. I saw my neighbor right before we left. I told her I would be back on Friday. I had a flashback to my last day in the VICE office in March 2020, when they sent us home for COVID. I told everyone I would see them in a week or two. Some of those people I never saw again.
A friend texted me to tell me that the place we had been on a beautiful hike a few weeks ago was on fire: “sad and glad we went,” he said. A friend in Richmond, Virginia texted to ask if I was OK. I told him yes but that it was very scary. I asked him how he was doing. He responded, “We had a bad ice storm this week and that caused a power outage at water treatment that then caused server crashes and electrical equipment to get flooded. The whole city has been without water since Monday.” He told me he was supposed to come to Los Angeles for work this weekend. He was canceling his flight.
A group chat asked me if I was OK. I told them that I did not want to be dramatic but that we were having a hard time but were ultimately safe. I explained some of what we had been doing and why. The chat responded saying that “it’s insane how you start this by saying it sounds more dramatic than it is, only to then describe multiple horrors. I am mostly just glad you are safe.”
We got in the car. We started driving. I watched a driverless Waymo navigate streets in which the traffic lights were out because the power was out. My fiancée took two work meetings on the road, tethered to her phone, our dog sitting on her lap. We stopped at a fast food drive through.
Once we were out of Los Angeles, I stopped at a Best Buy to get an air purifier. On my phone, I searched the reviews for the one they had on sale. I picked one out. The employee tried to sell me an extended warranty plan. I said no thank you, got back in the car, and kept driving away from the fire. I do not know when we will be able to go back.
When Google’s AI Overview search feature launched in May, it was generally regarded as a mess. It told people to eat glue and rocks, libeled newsworthy figures, plagiarized journalists’ work, and was so bad Google added a feature to turn it off entirely. In the nearly 10 months since AI Overview launched, it’s still getting things wrong, like telling people that vibrators can be used for children’s behavioral therapy.
As discovered by Reddit user clist186, searching for “magic wand pregnancy” returns a bizarre answer about creativity with children as a “fun and engaging” activity alongside an image of a Magic Wand vibrator, one of the most popular and universally recognized sex toys in the world:
“The Magic Wand tool is a creative way for parents to identify behavioral changes they want to make, including those related to pregnancy. It can be used to make assessment fun and engaging, especially for long-time WIC clients. Here's how the Magic Wand tool works: Parents describe what parenting challenges they would change by ‘waving a magic wand’. The responses of both parents and older children can be used to start discussions. The Magic Wand tool can be purchased online or at a local store.”
Making this even weirder, I don’t get the parenting-related AI Overview result when I search that term, but 404's Emanuel Maiberg (famously, a parent himself) does.
AI Overview, which is powered by Google’s Gemini model, provides links to where the information comes from as part of its results; in this case, it’s summarizing a document from the New Hampshire Department of Health and Human Services about a thought exercise where a therapist passes clients a “magic wand” that helps them imagine an ideal scenario. This isn’t referring to passing a Hitachi to a child, but the AI doesn’t know that.
In another result for the search term "what is a magic wand," AI Overview pairs a photo of a Magic Wand sex toy with a description of a magician’s trick:
“A magic wand is a small stick used by magicians to perform tricks and make magic happen. It's often short, black, and has a white tip. Magicians use magic wands as part of their misdirection and to make things happen like growing, vanishing, moving, or displaying a will of their own. For example, a classic magic trick involves making a bouquet of flowers appear from the wand's tip.”
This Overview result does eventually get around to describing the Magic Wand in question, though: “Magic wand may also refer to a brand of massager: Hitachi Magic Wand. A massaging device that was first listed for business use in 1968 and became available to the public in the 1970s. It's designed to relieve pain and tension, soothe sore muscles and nerves, and aid in rehabilitation after sports injuries.”
In a company blog about AI Overview, Google said the feature uses “multi-step reasoning capabilities” to “help with increasingly complex questions.” The search term “magic wand pregnancy” isn’t particularly complex; most people with basic reading comprehension skills would probably put it together that the term is looking for answers about using one of the world’s most popular sex toys while pregnant. But Gemini took the weirdest route possible instead, pulling from an obscure document about a talk therapy technique that happened to contain the phrases “magic wand” and “pregnancy.”
Searching with a full, natural language query — “can you use a magic wand while pregnant” — returns a more nuanced AI-generated response that considers the searcher might have several different kinds of wands in mind:
Last year, Reddit signed a $60 million per year contract with Google in exchange for licensing users’ content to train Google’s AI models. Adult content, including sex education and conversations about sex toys, is still allowed on Reddit, one of the few platforms that hasn’t banned sex entirely, and pregnancy-related questions are massively popular there. (There are dozens of questions specifically about using vibrators while pregnant.) It makes sense that phrasing the search term as a question rather than a set of keywords would turn up a better response from the AI: it’s what the AI was trained on.
Thankfully, most people searching “magic wand pregnancy” are probably able to use their human brains to deduce that they shouldn’t use a vibrator as a talking stick in group therapy with kids. But it’s yet another example of AI being shoved into every product and tool that adds more work, friction, and confusion to the experience of being online, instead of less — as tech companies constantly promise.
Google did not immediately respond to a request for comment.
In early December I got the kind of tip we’ve been getting a lot over the past year. A reader had noticed a post from someone on Reddit complaining about a very graphic sexual ad appearing in their Instagram Reels. I’ve seen a lot of ads for scams or shady dating sites recently, and some of them were pretty suggestive, to put it mildly, but the ad the person on Reddit complained about was straight-up a close-up image of a vagina.
The reader who tipped 404 Media did exactly what I would have done, which is look up the advertiser in Facebook’s Ad Library, and found that the same advertiser was running around 800 ads across all of Meta’s platforms in November, the vast majority of which were just different close-up images of vaginas. When clicked, the ads take users to a variety of sites for "confidential dating” or “hot dates” in your area. Facebook started to remove some of these ads on December 13, but at the time of writing, most of them were still undetected by its moderators according to the Ad Library.
Like I said, we get a lot of tips like this these days. We get so many, in fact, that we don’t write stories about them unless there’s something novel or that our readers need to know about them. Facebook taking money to put explicit porn in its ads despite it being a clear violation of its own policies is not new, but it is definitely a new low for the company and a clear indicator of Facebook’s “fuck it” approach to content moderation, and moderation of its ads specifically.
We're back! And holy moly what a start to the year. We just published a bunch of stories. First, Jason talks about blowback inside Meta to its new board member, and Meta's subsequent censoring of those views. We also chat about those mad Meta AI profiles. After the break, Sam explains why Pornhub is blocked in most of the U.S. south. In the subscribers-only section, Joseph talks about why the government is planning to name one of its most important (and at risk) witnesses.
Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version too. It will also be in the show notes in your podcast player.
Some Motorola automated license plate reader surveillance cameras are live-streaming video and car data to the unsecured internet where anyone can watch and scrape them, a security researcher has found. In a proof-of-concept, a privacy advocate then developed a tool that automatically scans the exposed footage for license plates, and dumps that information into a spreadsheet, allowing someone to track the movements of others in real time.
Matt Brown of Brown Fine Security made a series of YouTube videos showing vulnerabilities in a Motorola Reaper HD ALPR that he bought on eBay. As we have reported previously, these ALPRs are deployed all over the United States by cities and police departments. Brown initially found that it is possible to view the video and data that these cameras are collecting if you join the private networks that they are operating on. But then he found that many of them are misconfigured to stream to the open internet rather than a private network.
“My initial videos were showing that if you’re on the same network, you can access the video stream without authentication,” Brown told 404 Media in a video chat. “But then I asked the question: What if somebody misconfigured this and instead of it being on a private network, some of these found their way onto the public internet?”
In his most recent video, Brown shows that many of these cameras are indeed misconfigured to stream both video and the data they are collecting to the open internet, and that their IP addresses can be found using the Internet of Things search engine Censys. The streams can be watched without any sort of login.
In many cases, they are streaming color video as well as infrared black-and-white video of the streets they are surveilling, and are broadcasting that data, including license plate information, onto the internet in real time.
Will Freeman, the creator of DeFlock, an open-source map of ALPRs in the United States, said that people in the DeFlock community have found many ALPRs that are streaming to the open internet. Freeman built a proof-of-concept script that takes data from unencrypted Motorola ALPR streams, decodes that data, and adds timestamped information about specific car movements into a spreadsheet. A spreadsheet he sent me shows a car’s make, model, color, and license plate number associated with the specific time that it drove past an unencrypted ALPR near Chicago. So far, roughly 170 unencrypted ALPR streams have been found.
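Freeman's actual script isn't public, but a minimal sketch of the same idea might look like the following. The endpoint address and JSON field names here are assumptions for illustration; real Motorola streams use their own undocumented format.

```python
# Illustrative sketch only: the URL and field names are hypothetical; real
# Motorola ALPR streams use their own (undocumented) format.
import csv
import json
from urllib.request import urlopen

STREAM_URL = "http://203.0.113.7/alpr/events"  # hypothetical exposed camera

with urlopen(STREAM_URL) as stream, open("plates.csv", "a", newline="") as out:
    writer = csv.writer(out)
    for line in stream:  # assume the camera emits one JSON event per line
        event = json.loads(line)
        writer.writerow([
            event.get("timestamp"),  # when the car passed the camera
            event.get("plate"),      # plate text as read by the ALPR
            event.get("make"),
            event.get("model"),
            event.get("color"),
        ])
# Pointed at several cameras in one city, rows like these are enough to
# reconstruct a person's regular movements, which is Freeman's point.
```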
“Let’s say 10 of them are in a city at strategic locations. If you connect to all 10 of them, you’d be able to track regular movements of people,” Freeman said.
Freeman, who has been a strong advocate against the widespread adoption of ALPRs, told 404 Media that the exposed streams are further evidence that the proliferation of ALPRs around the United States and the world represents a significant privacy risk.
“I’ve always thought these things were concerning, but this just goes to show that law enforcement agencies and the companies that provide ALPRs are no different than any other data company and can’t be trusted with this information,” Freeman told 404 Media. “So when a police department says there’s nothing to worry about unless you’re a criminal, there definitely is. Here’s evidence of a ton of cameras operated by law enforcement freely streaming sensitive data they’re collecting on us. My hometown is mostly Motorola [ALPRs], so someone could simply write a script that maps vehicles to times and precise locations.”
A Motorola Solutions spokesperson told 404 Media that the company is working on a firmware update that “will introduce additional security hardening.”
“Motorola Solutions designs, develops and deploys our products to prioritize data security and protect the confidentiality, integrity and availability of data,” the spokesperson said. “The ReaperHD camera is a legacy device, sales of which were discontinued in June 2022. Findings in the recent YouTube videos do not pose a risk to customers using their devices in accordance with our recommended configurations. Some customer-modified network configurations potentially exposed certain IP addresses. We are working directly with these customers to restore their system configurations consistent with our recommendations and industry best practices. Our next firmware update will introduce additional security hardening.”
Brown said that, although not all Motorola ALPRs are streaming to the internet, the security problems he found are deeply concerning and it’s not likely that ALPR security is something that’s going to suddenly be fixed.
“Let’s say the police or Motorola were like ‘Oh crap, we shouldn’t have put those on the public internet.’ They can clean that up,” he said. “But you still have a super vulnerable device that if you gain access to their network you can see the data. When you deploy the technology into the field, attacks always get easier, they don’t get harder.”
Meta’s HR team is deleting internal employee criticism of new board member, UFC president and CEO Dana White, at the same time that CEO Mark Zuckerberg announced to the world that Meta will “get back to our roots around free expression,” 404 Media has learned. Some employee posts questioning why criticism of White is being deleted are also being deleted.
Monday, Zuckerberg made a post on a platform for Meta employees called Workplace announcing that Meta is adding Dana White, John Elkann, and Charlie Songhurst to the company’s board of directors (Zuckerberg’s post on Workplace was identical to his public announcement). Employee response to this was mixed, according to screenshots of the thread obtained by 404 Media. Some posted positive or joking comments: “Major W,” one employee posted. “We hire Connor [McGregor] next for after work sparring?,” another said. “Joe Rogan may be next,” a third said. A fourth simply said “LOL.”
Do you work at Meta? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 202 505 1702.
But other employees criticized the decision and raised the point that there is video of White slapping his wife in a nightclub; White was not arrested and was not suspended from UFC for the domestic violence incident. McGregor, one of the most famous UFC fighters of all time, was held liable for sexual assault and was ordered by a civil court to pay $260,000 to a woman who accused him of raping her in 2018. McGregor is appealing the decision.
“Kind of disheartening to see people in the comments celebrating a man who is on video assaulting his wife and another who was recently convicted of rape,” one employee commented, referring to White and McGregor. “I can kind of excuse individuals for being unaware, but Meta surely did their due diligence on White and concluded that what he did is fine. I feel like I’m on another planet,” another employee commented. “We have completely lost the plot,” a third said.
Several posts critical of White were deleted by Meta’s “Internal Community Relations team” as violating a set of rules called the “Community Engagement Expectations,” which govern internal employee communications. In the thread, the Internal Community Relations team member explained why they were deleting content: “I’m posting a comment here with a reminder about the CEE, as multiple comments have been flagged by the community for review. It’s important that we maintain a respectful work environment where people can do their best work. We need to keep in mind that the CEE applies to how we communicate with and about members of our community—including members of our Board. Insulting, criticizing, or antagonizing our colleagues or Board members is not aligned with the CEE.” In 2022, Meta banned employees from discussing “very disruptive” topics.
Hackers claim to have compromised Gravy Analytics, the parent company of Venntel which has sold masses of smartphone location data to the U.S. government. The hackers said they have stolen a massive amount of data, including customer lists, information on the broader industry, and even location data harvested from smartphones which show people’s precise movements, and they are threatening to publish the data publicly.
The news is a crystallizing moment for the location data industry. For years, companies have harvested location information from smartphones, either through ordinary apps or the advertising ecosystem, and then built products based on that data or sold it to others. In many cases, those customers include the U.S. government, with arms of the military, DHS, the IRS, and FBI using it for various purposes. But collecting that data presents an attractive target to hackers.
“A location data broker like Gravy Analytics getting hacked is the nightmare scenario all privacy advocates have feared and warned about. The potential harms for individuals is haunting, and if all the bulk location data of Americans ends up being sold on underground markets, this will create countless deanonymization risks and tracking concerns for high risk individuals and organizations,” Zach Edwards, senior threat analyst at cybersecurity firm Silent Push who has followed the location data industry closely, told 404 Media. “This may be the first major breach of a bulk location data provider, but it won't be the last.”
People are using the popular AI video generator Runway to make real videos of murder look like they came from one of the animated Minions movies and upload them to social media platforms where they gain thousands of views before the platforms can detect and remove them. This AI editing method appears to make it harder for major platforms to moderate against infamously graphic videos which previously could only be found on the darkest corners of the internet.
The practice, which people have come to call “Minion Gore” or “Minion AI videos,” started gaining popularity in mid-December, and while 404 Media has seen social media platforms remove many of these videos, at the time of writing we’ve seen examples of extremely violent Minion Gore videos hosted on YouTube, TikTok, Instagram, and X, which were undetected until we contacted these platforms for comment.
Specifically, by comparing the Minion Gore edits to the original videos, I was able to verify that TikTok was hosting a Minionfied video of Ronnie McNutt, who livestreamed his suicide on Facebook in 2020, shooting himself in the head. Instagram is still hosting a Minionfied clip from the 2019 Christchurch mosque shooting in New Zealand, in which a man livestreamed himself killing 51 people. I’ve also seen other Minion Gore videos I couldn’t locate the source materials for, but that appear to include other public execution videos, war footage from the frontlines in Ukraine, and workplace accidents on construction sites.
The vast majority of these videos, including the Minion Gore videos of the Christchurch shooting and McNutt’s suicide, include a Runway watermark in the bottom right corner, indicating they were created on its platform. The videos appear to use the company’s Gen-3 “video to video” tool, which allows users to upload a video they can then modify with generative AI. I tested the free version of Runway’s video to video tool and was able to Minionify a video I uploaded to the platform by writing a text prompt asking Runway to “make the clip look like one of the Minions animated movies.”
Runway did not respond to a request for comment.
Do you know anything else about these videos? I would love to hear from you. Using a non-work device, you can message me securely on Signal at emanuel.404. Otherwise, send me an email at [email protected].
I’ve seen several examples of TikTok removing Minion Gore videos before I reached out to the company for comment. For example, all the violent TikTok videos included in the Know Your Meme article about Minion Gore have already been removed. As the same Know Your Meme article notes, however, an early instance of the Minion Gore video of McNutt’s suicide gained over 250,000 views in just 10 days. I’ve also found another version of the same video reuploaded to TikTok in mid-December which wasn’t removed until I reached out to TikTok for comment on Tuesday.
TikTok told me it removes any content that violates its Community Guidelines, regardless of whether it was altered with AI. This, TikTok said, includes its policies prohibiting "hateful content as well as gory, gruesome, disturbing, or extremely violent content." TikTok also said that it has been proactively taking action to remove harmful AI-generated content that violates its policies, that it is continuously updating its detection rules for AI-generated content as the technology evolves, and that when made aware of a synthetic video clip that is spreading online and violates its policies, it creates detection rules to automatically catch and take action on similar versions of that content.
For known videos that violate their policies, major internet platforms create “hashes,” unique strings of letters and numbers that act as fingerprints for a video based on what it looks like. This allows platforms to automatically detect and remove these videos or prevent them from being uploaded in the first place. TikTok did not answer specific questions about whether Minion Gore edits of known violating videos would bypass this kind of automated moderation method. In 2020, Sam and I showed that this type of automated moderation can be bypassed with even simple edits of hashed, violating videos.
“In most cases, current hashing/fingerprinting are unable to reliably detect these variants,” Hany Farid, a professor at UC Berkeley and one of the world’s leading experts on digitally manipulated images and a developer of PhotoDNA, one of the most commonly used image identification and content filtering technologies, told me in an email. “Starting with the original violative content, it would be possible for the platforms to create these minion variations, hash/fingerprint them and add those signatures to the database. The efficacy of this approach would depend on the robustness of the hash algorithm and the ability to closely mimic the content being produced by others. And, of course, this would be a bit of a whack-a-mole problem as creators will replace minions with other cartoon characters.”
This, in fact, is already happening. I’ve seen videos of ISIS executions and of the McNutt suicide posted to Twitter that were also modified with Runway, but instead of turning the people in the videos into Minions, they were turned into Santa Claus. There are also several different Minion Gore videos of the same violent content, so in theory a hash of one version will not result in the automatic removal of another. Because Runway seemingly is not preventing people from using its tools to edit infamously violent videos, this creates a situation in which people can easily create infinite, slightly different versions of those videos and upload them across the internet.
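To make the whack-a-mole problem concrete, here is a minimal sketch of why exact matching fails on any edit while perceptual hashing tolerates only some of them. It uses the open-source imagehash library as a stand-in; the systems platforms actually run, like PhotoDNA, are proprietary and more sophisticated.

```python
# Illustrative sketch using open-source tools, not any platform's real system.
import hashlib

import imagehash
from PIL import Image

def exact_hash(path: str) -> str:
    """SHA-256 of the raw bytes: any single-byte change yields a new hash."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def perceptual_hash(path: str) -> imagehash.ImageHash:
    """pHash of the image content: small visual edits flip only a few bits."""
    return imagehash.phash(Image.open(path))

# Hypothetical frames: one from an original video, one from an AI-stylized copy.
original, variant = "frame_original.png", "frame_minionified.png"

print(exact_hash(original) == exact_hash(variant))  # False after any edit at all
distance = perceptual_hash(original) - perceptual_hash(variant)  # Hamming distance
print(distance <= 8)  # re-encodes and crops tend to stay close; heavy AI
                      # restyling can push the distance past any reasonable
                      # threshold, which is how these edits evade the match
```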
YouTube acknowledged our request for comment but did not provide one in time for publication. Instagram and X did not respond to a request for comment.
Instagram has begun testing a feature in which Meta’s AI will automatically generate images of users in various situations and put them into that user’s feed. One Redditor posted over the weekend that they were scrolling through Instagram and were presented an AI-generated slideshow of themselves standing in front of “an endless maze of mirrors,” for example.
“Used Meta AI to edit a selfie, now Instagram is using my face on ads targeted at me,” the person posted. The user was shown a slideshow of AI-generated images in which an AI version of himself is standing in front of an endless “mirror maze.” “Imagined for you: Mirror maze,” the “location” of the post reads.
“Imagine yourself reflecting on life in an endless maze of mirrors where you’re the main focus,” the caption of the AI images says. The Reddit user told 404 Media that at one point he had uploaded selfies of himself into Instagram’s “Imagine” feature, which is Meta’s AI image generation feature.
Do you work at Meta? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 202 505 1702.
People on Reddit initially did not even believe that these were real, posting things like "it's a fake story," "I doubt that this is true," "this is a straight up lie lol," and "why would they do this?" The Redditor has repeatedly had to explain that, yes, this did happen. "I don’t really have a reason to fake this, I posted screenshots on another thread," he said. 404 Media sent the link to the Reddit post directly to Meta, which confirmed that it is real, but not an "ad."
Telegram, the popular social network and messaging application which has also become a hotbed for all sorts of serious criminal activity, provided U.S. authorities with data on more than 2,200 users last year, according to newly released data from Telegram.
The news shows a massive spike in the number of data requests fulfilled by Telegram after French authorities arrested Telegram CEO Pavel Durov in August, in part because of the company’s unwillingness to provide user data in a child abuse investigation. Between January 1 and September 30, 2024, Telegram fulfilled 14 requests “for IP addresses and/or phone numbers” from the United States, which affected a total of 108 users, according to Telegram’s Transparency Reports bot. But for the entire year of 2024, it fulfilled 900 requests from the U.S. affecting a total of 2,253 users, meaning that the number of fulfilled requests skyrocketed between October and December: roughly 886 requests affecting some 2,145 users in that quarter alone, according to the newly released data.
“Fulfilled requests from the United States of America for IP address and/or phone number: 900,” Telegram’s Transparency Reports bot said when prompted for the latest report by 404 Media. “Affected users: 2253,” it added.
Members of an underground criminal community that hack massive companies, steal swathes of cryptocurrency, and even commission robberies or shootings against members of the public or one another have an unusual method for digging up personal information on a target: the truck and trailer rental company U-Haul. With access to U-Haul employee accounts, hackers can look up a U-Haul customer’s personal data and, with that, try to social engineer their way into the target’s online accounts, or potentially target them with violence.
The news shows how members of the community, known as the Com and composed of potentially a thousand people who coalesce on Telegram and Discord, use essentially any information available to them to dox or hack people, no matter how obscure. It also provides context as to why U-Haul may have been targeted repeatedly in recent years, with the company previously disclosing multiple data breaches.
“U-Haul has lots of information, it can be used for all sorts of stuff. One of the primary cases is for doxing targs [targets] since they [seem] to have information not found online and ofc U-Haul has confirmed this info with the person prior,” Pontifex, the administrator of a phishing tool which advertises the ability to harvest U-Haul logins, told 404 Media in an online chat. The tool, called Suite, also advertises phishing pages for Gmail, Coinbase, and the major U.S. carriers T-Mobile, AT&T, and Verizon.
Let’s start 2025 off strong by avoiding it entirely and escaping a thousand years into the past to an Amazonian civilization of forest islands, garden cities, and duck tales. From there, we’ll flee even farther from the present, though we’ll keep the “enchanted forest” vibe going strong.
Then, the BATS are SURFING. What else do you want to know? Close up shop; we’ve reached the pinnacle of enlightenment. And finally, want to see some robots hula hoop? You came to the right place.
Happy New Year to all who acknowledge the passage of time, and congratulations to anyone who has managed to transcend it.
It’s unwise to romanticize any past society or culture. Humans are reliably humans, with all that this entails, across time and continents. But when you encounter tales of garden cities linked by vast causeways and populated by people and their pet ducks, it can be a little hard not to indulge in daydreams about life there.
That’s the scene unveiled in a new study on the Casarabe culture, who lived in the Llanos de Mojos region of the Bolivian Amazon between 500 and 1400, before the arrival of Europeans. Over the centuries, these people built roughly 200 monumental mounds linked by more than 600 miles of canals and causeways. The sprawl included primary urban centers and small forest islands, which are cultivated patches of trees amid the wetland plains.
“The sheer volume of sites and their architectural layout, divided into a four-tier settlement system…indicate that the people of the Casarabe culture created a new social and public landscape through monumentality, leading to low-density urbanism,” said researchers led by Tiago Hermenegildo of the Max Planck Institute of Geoanthropology. “The extent and complexity of the Casarabe settlement network present a unique context in the South American lowlands.”
To better understand the diets and lifestyles of these people, Hermenegildo and his colleagues collected isotope data from the remains of 86 humans and 68 animals (including mammals, reptiles, birds, and fish) that lived in Llanos de Mojos between 700 and 1400. The results revealed that maize was the central staple of the Casarabe diet—both for its people, and its ducks.
“We provide evidence that muscovy ducks (Cairina moschata), the only known domesticated vertebrate in the South American lowlands, had substantial maize intake suggesting intentional feeding, or even their domestication, from as early as 800 CE,” said the team. “Similar isotopic evidence indicative of maize feeding practices was also reported in muscovy duck from Panama, suggesting that maize was a key element in the domestication of ducks throughout the American continent.”
Feeding ducks: a meditative pastime for the ages. Though the birds were raised for sustenance, I like to imagine a few charismatic drakes and hens earned a role as companions.
But regardless of the charm quotients of bygone ducks, these findings are part of a wave of emerging research revealing that ancient cultures in the Amazon Basin were far more complex and extensive than previously realized—and researchers have only started to scratch the surface of many of these sites. Get your brain checked now, because this field is going to be throwing out head-spinners and mind-bogglers for years to come.
As global temperatures rise, alpine snowpack and glaciers are receding, a pattern that often exposes fossils, artifacts, and other relics that have been locked in ice for millennia.
For instance, scientists recently discovered an eerily well-preserved forest of whitebark pines that melted out of an ice patch on Yellowstone’s Beartooth Plateau. This forest stand thrived about 5,500 years ago, but the ice left it in such pristine condition that scientists were able to measure tree rings and reconstruct the climate these trees experienced over five centuries.
“The extraordinary quality of wood preservation at the…ice-patch site provides an opportunity to generate a multicentury, mid-Holocene record of high-elevation temperature during the life of the forest stand, and to elucidate the climate conditions that contributed to the stand’s demise and subsequent growth of the ice patch,” said researchers led by Gregory Pederson of the U.S. Geological Survey.
The treeline in the Beartooth Mountains was at a much higher elevation 5,500 years ago due to a multi-century warm spell. Then, around 5,100 years ago, Iceland went on an epic volcanic bender, as it is prone to do from time to time, causing a “summer cooling anomaly” that “led to rapid ice-patch growth and preservation of the trees,” according to the study.
In other words, Iceland’s stinky lava breath likely killed off this forest all the way in Wyoming by cooling the Northern Hemisphere, which entombed the stand in ice.
The study notes that the treeline is likely to creep back up the slopes again as anthropogenic climate change melts ice off at high elevations. Pines may grow once more on the ancestral grounds of this ancient forest, as a consequence of human activity.
Let that sentence breathe. Just two words, yet it may well be the shortcut to nirvana. Dust to dust. Hallelujah. BATS SURF.
In addition to being my new incantation for 2025, “bats surf” is a scientific discovery reported this week. Researchers outfitted 71 female common noctule bats (Nyctalus noctula) with tags and followed their spring migration across Europe, which lasted about 46 days and covered nearly 700 miles. Some of these batgirls covered an astonishing 237 miles in just a single night, much farther than previously recorded flights.
The noctules were able to achieve these distances by timing their flights to coincide with warm fronts that buoyed them along with strong winds. In other words, bats surf the tropospheric waves. This skill is especially important for female noctules, as they must navigate migrations at the same time they are gestating future surfer pups in their bellies.
“Females are generally pregnant in spring and can delay the embryo’s development through torpor,” said researchers led by Edward Hurme of the Max Planck Institute of Animal Behavior.
“As these bats wait for the right migration conditions, they must either invest in their embryo while increasing their own energetic cost of flight or delay the development of the embryo, possibly affecting the pup’s survival,” the team said. “This phenological flexibility may be key for their long-term survival and maintenance of migration.”
Parenthood is hard enough without having to worry about getting literally weighed down by your brood on the road. There’s no hanging loose for these bats; they are truly on a journey of surf-ival.
You might be a scientist if you look at a hula hoop and think “this familiar playtime activity can serve as an archetype of the challenging class of problems involving parametric excitation by driven supports and the mechanics of dynamic contact points with frictional and normal forces.”
That’s a quote from a new study that investigated the complex dynamics behind “hula hoop levitation,” which describes how skilled hoopers synchronize their body movements in ways that appear to defy gravity. The study belongs to one of my favorite research traditions—the earnest examination of an outwardly trivial item, a class that also includes the nano-pasta work we recently covered and a legendary 2022 breakdown of the fluid dynamics of Oreos.
“Seemingly simple toys and games often involve surprisingly subtle physics and mathematics,” said researchers led by Xintong Zhu of New York University. “The physics of hula hooping was first studied as an excitation phenomenon soon after the toy became a fad, and more recent interest has come during its renewed popularity as a form of exercise and performance art.”
In addition to outlining the physical underpinnings of levitation, the authors took the inspired step of experimenting with a variety of hula-hooping robots. The study is punctuated by frankly delightful footage of these machines hooping their cold metal hearts out. See for yourself; the study will be open-access for six months.
The upshot: We now have experimental confirmation that people (or robots) with “sufficiently curvy” figures have a hooping advantage. The team notes that “an hourglass-shaped body of hyperboloidal form successfully suspends the hoop.”
Shout out to all you hyperboloids out there! Happy hooping.
Earlier this week, Meta executive Connor Hayes told the Financial Times that the company is going to roll out AI character profiles on Instagram and Facebook that “exist on our platforms, kind of in the same way that accounts do … they’ll have bios and profile pictures and be able to generate and share content powered by AI on the platform.”
This quote got a lot of attention because it was yet another signal that a “social network” ostensibly made up of human beings and designed for humans to connect with each other is once again betting its future on distinctly inhuman bots designed with the express purpose of polluting its platforms with AI-generated slop, just as spammers are already doing and just as Mark Zuckerberg recently told investors is the explicit plan. In the immediate aftermath of the Financial Times story, people began to notice the exact types of profiles that Hayes was talking about, and assumed that Meta had begun enacting its plan.
But the Meta-controlled, AI-generated Instagram and Facebook profiles going viral right now have been on the platforms for well over a year, and all of them stopped posting 10 months ago after users almost universally ignored them. Many of the AI-generated profiles that Meta created and announced have been fully deleted; the ones that remain have not posted new content since April 2024, though their chat functionality continues to work.
People’s understandable aversion to the idea of Meta-controlled AI bots taking up space on Facebook and Instagram has led them to believe that these existing bots are the new ones “announced” by Hayes to the Financial Times. In Hayes’ quote, he says that Meta ultimately envisions releasing tools that allow users to create these characters and profiles, and for those AI profiles to live alongside normal profiles. So Meta has not actually released anything new, but the news cycle has led people to go find Meta’s already existing AI-generated profiles and to realize how utterly terrible they are.
After this article was originally published, Liz Sweeney, a Meta spokesperson, told 404 Media that "there is confusion" on the internet between what Hayes told the Financial Times and what is being talked about online now, and that Meta is deleting those accounts. 404 Media confirmed that many of the profiles that were live at the time this article was published have since been deleted.
This is Behind the Blog, where we share our behind-the-scenes thoughts about how a few of our top stories of the week came together. This week, we talk more about magic links and building shelves offline. A light Behind the Blog today but we're back from the holiday on Monday.
JOSEPH: There has been a lot of response to our post We Don’t Want Your Password. Much of it supportive, some of it mad, some of it funny. The TLDR (although I do think it’s worth a read) is that we’re four journalists trying to spend as much time as possible doing actual journalism, rather than spending our very limited amount of time building things that are not necessary and that we’re not equipped to do. We do want to build, like our big project for a full-text RSS feed for paying subscribers and for the broader independent media ecosystem, but we’re not interested in using up resources (time, mostly) on introducing a username/password login for the site when the current magic link system works mostly fine and is how the CMS we use is designed.
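For anyone curious what a magic link actually is under the hood, here is a minimal sketch of the flow, assuming a hypothetical send_email() helper and an in-memory token store; our CMS's real implementation differs, but the basic shape is similar.

```python
# Minimal magic-link sketch: single-use, expiring tokens instead of passwords.
# send_email() is a hypothetical helper; a real CMS does this differently.
import secrets
import time

TOKEN_TTL_SECONDS = 15 * 60  # links expire after 15 minutes
pending: dict[str, tuple[str, float]] = {}  # token -> (email, issued_at)

def send_magic_link(email: str) -> None:
    token = secrets.token_urlsafe(32)  # unguessable random token
    pending[token] = (email, time.time())
    send_email(email, f"https://example.com/login?token={token}")  # hypothetical

def redeem(token: str) -> str | None:
    """Return the email for a valid token and consume it; None if bad/expired."""
    record = pending.pop(token, None)  # pop makes the link single-use
    if record is None:
        return None
    email, issued_at = record
    if time.time() - issued_at > TOKEN_TTL_SECONDS:
        return None
    return email  # caller then sets a session cookie for this address
```

The appeal, from a maintenance standpoint, is that possession of the inbox is the credential: there is no password for a small team to store, leak, or reset.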
Capabilities used in or justified by extreme circumstances often become commonplace and are used for much more mundane things in the future. And so the remote investigative actions taken by Elon Musk in Wednesday’s Cybertruck explosion in Las Vegas are a warning and a reminder that Tesla owners do not actually own their Teslas, and that cars, broadly speaking, are increasingly spying on their owners and the people around them.
After the Cybertruck explosion outside of the Trump International Hotel in Vegas on Wednesday, Elon Musk remotely unlocked the Cybertruck for law enforcement and provided video from charging stations that the truck had visited to track the vehicle’s location, according to information released by law enforcement.
“We have to thank Elon Musk specifically, he gave us quite a bit of additional information in regards to—the vehicle was locked due to the nature of the force from the explosion, as well as being able to capture all of the video from Tesla charging stations across the country, he sent that directly to us, so I appreciate his help on that,” Clark County sheriff Kevin McMahill said in a press conference.
The fact that the CEO of a car company or someone working on his behalf can—and did—remotely unlock a specific vehicle and has the means of tracking its location as well as what Musk described as the vehicle’s “telemetry” is not surprising given everything we have learned about newer vehicles and Teslas in particular. But it is a stark reminder that while you may be able to drive your car, you increasingly do not own it, that the company that manufactured it can inject itself into the experience whenever it wants, and that information from your private vehicle can be provided to law enforcement. Though Musk is being thanked directly by law enforcement, it is not clear whether Musk himself is performing these actions or whether he’s directing Tesla employees to do so, but Tesla having and using these powers is concerning regardless of who is doing it.
Reverse engineer Scotty Allen made his own iPhone. Well, more accurately, he made his own iPhone enclosure out of a block of aluminum, put the internal components of an iPhone into it, and managed to make it all work. Then, he used the same schematics he made to 3D print a working iPhone enclosure out of nylon carbon fiber.
Like lots of repair and DIY projects on the iPhone, Allen did tons of painstaking work over the course of a year to more or less recreate something that already exists and that most people do not need. But his work opens the door to a more modifiable iPhone and a DIY culture around smartphones that still doesn’t really exist. Essentially, he took a block of aluminum, used a CNC mill to carve it down, and was able to put all of the components in the custom enclosure, the way someone might when they’re building a PC. Along the way, he made CAD files of the inside of an iPhone, which will allow people to recreate his work. It is now possible to download his design files and 3D print your own iPhone shell.
“There's been an open question for me from the beginning, which is like, we have this culture around modding PCs, right? And custom PCs. Why do we not have that for phones?,” Allen told me in a video chat. “It’s very celebrated in the culture. But then when you talk about building your own phone, everyone is like, ‘No, that’s crazy.’ Apple is going to sue you.”
As you might expect, the iPhone’s enclosure is not just an empty block of metal. There are various tiny holes for screws and engraved areas for cables and antennas to go. Allen studied all of this and, through a roughly year-long process of trial-and-error, was able to recreate this enclosure and create blueprints for other people to replicate it.
“This was really difficult because I had to reverse engineer it and there was a lot of time spent figuring out, ‘OK, now I’ve got it drawn, but how do I know everything about the interior walls? There’s all these little threaded inserts that are glued in.’ And I think I actually went about machining it in a different way than Apple does,” he said.
Over the years, Allen has done lots of cool things with the iPhone, which started with adding a working headphone jack back into the iPhone 7 after Apple removed it. He is part of the right to repair movement, but takes things a step further and says he’s advocating for the right to modify, and the normalization of opening and tweaking things like the iPhone to prove that they’re not just unknowable black boxes.
“I look at this as an infrastructure project, which is, let’s make a 1-to-1 copy,” he said. “It’s not totally 1-to-1, but in terms of overall geometry, it’s a fairly faithful representation and reproduction with the goal that, if you want to do interesting things, you need to start with the boring things first. And now with all the design files that I’ve created, you can really easily begin to modify it to look how you want on the outside, to add space for things on the inside. So this is a base for doing all sorts of more creative things.”
Allen said that at the moment one challenge is that there are limitations on the types of touchscreens that will work with the iPhone, but that with more work it would be possible to support additional modifications.
“My notion is that a phone is a device that you can open up and tinker with, or at least repair,” he said. “I think I’ve had a hand in saying, ‘Look, this isn’t a black box that only Apple is allowed to open.’ … I think consistently what I’ve done is poke at the edges and say, ‘What other things can we do with this?’”
Almost two years ago, Louisiana passed a law that started a wave that’s since spread across the entire U.S. south, and has changed the way people there can access adult content. As of today, Florida, Tennessee, and South Carolina join the list of 17 states where people can’t access some of the most popular porn sites on the internet, because of regressive laws that claim to protect children but restrict adults’ use of the internet instead.
That law, passed as Act 440, was introduced by “sex addiction” counselor and state representative Laurie Schlegel and quickly copied across the country. The exact phrasing varies, but in most states, the details of the law are the same: Any “commercial entity” that publishes “material harmful to minors” online can be held liable—meaning, tens of thousands of dollars in fines and/or private lawsuits—if it doesn’t “perform reasonable age verification methods to verify the age of individuals attempting to access the material.”
To remain compliant with the law while protecting users’ privacy, Aylo—the company that owns Pornhub and a network of sites including Brazzers, RedTube, YouPorn, Reality Kings, and several others—is making the choice, state by state, to block users altogether.