Steam, the dominant digital storefront for PC games operated by Valve, updated its guidelines to forbid “certain kinds of adult content,” blaming restrictions imposed by payment processors and financial institutions. The update was initially spotted by SteamDB.info, a platform that tracks and publishes data about Steam, and reported by the Japanese gaming site Gamespark.
The update is yet another signal that payment processors are becoming more vigilant about which online platforms hosting adult content they will provide services to, and another clear sign that those companies are currently the ultimate arbiters of what kind of content can be made easily available online.
Steam’s policy change appears under the onboarding portion of its Steamworks documentation for developers and publishers. The 15th item on a list of “what you shouldn’t publish on Steam” now reads: “Content that may violate the rules and standards set forth by Steam’s payment processors and related card networks and banks, or internet network providers. In particular, certain kinds of adult only content.”
It’s not clear when exactly Valve updated this list, but an archive of this page from April shows that it only had 14 items then. Other items that were already on the list included “nude or sexually explicit images of real people” and “adult content that isn’t appropriately labeled and age-gated,” but Valve did not previously mention payment processors specifically.
Valve did not immediately respond to a request for comment and questions about where developers might find more details about payment processors’ rules and standards.
SteamDB, which also tracks when games are added or removed from Steam, noted that many adult games have been removed from Steam in the last 24 hours. Sex games, many of which are of very low quality and sometimes include very extreme content, have been common on Steam for years. In April, I wrote about a “rape and incest” game called No Mercy, which the developers eventually removed from Steam voluntarily after pressure from users, media, and lawmakers in the UK. The majority of recently removed games I saw revolve around similar themes, but we don’t know whether they were removed by the developers or by Valve, or whether Valve removed them because of the recent policy change. Games are removed from Steam every day for a variety of reasons, including expired licensing deals or developers no longer wanting to support a game.
However, Steam’s policy change comes at a time when payment processors are applying increased pressure around adult content. We recently reported that payment processors forced two major AI model sharing platforms, Civitai and Tensor.Art, to remove certain adult content.
We start this week with a series of articles from Emanuel about a crackdown in the AI industry. After the break, Sam tells us about the ‘Save Our Signs’ campaign which hopes to preserve the history of national parks. In the subscribers-only section, Jason rants about how AI will not save the media industry.
Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor with a link to the subscribers-only version! You can also add the subscribers feed to your podcast app of choice and never miss an episode that way. The email also contains the unlisted YouTube link for the extended video version, which will appear in the show notes in your podcast player as well.
Yesterday I ordered my lunch from an AI operating a drive-thru. It was fine. Banal. Boring even. A new experience that I think will become routine in the future.
The AI drive-thru operator isn’t cutting-edge tech deployed in an upscale market to win over high-value consumers. I live at the edge of a South Carolina city with a little more than 140,000 people. A booming metropolis with the best and the finest, it is not.
There are a lot of local fast food fried chicken joints here, and one of them is Bojangles. It’s mid. Better than KFC and not as good as Popeyes, Bojangles is fine if you’re hungry, but you’ll forget the meal as soon as it’s done and you’ll never yearn for it. Last year the chain said it would deploy an AI agent at its drive-thru windows. It’s called, I shit you not, Bo-Linda, and it’s made by the Israeli tech firm Hi-Auto.
According to the Bojangles website, “Bo-Linda™ can take guest orders 96+% of the time with no human intervention,” and “improve overall satisfaction by offloading order taking from team members and providing a consistent guest experience.”
When Bo-Linda finally arrived in South Carolina, I went to see what the fuss was about. It was crushingly dull. A preview of a time in the near future, I think, when the AI bubble retracts and the agents are common. It took my order with an efficiency that, I’ll be honest, is not typical of the average fast food worker. The worst part was its constant attempts to upsell me.
“Do you want to upgrade your drink to our new watermelon iced tea?” it asked.
“No thank you.”
“Would you like to add our new peach cobbler for $1.99?”
“No thank you.”
“May I get you anything else?”
“No, that’s it.”
“Would you like to round up for military scholarships?”
“No thank you.”
“You’re welcome. Thank you. Your total is $10.89.”
When 404 Media founder Joseph Cox watched the video of my interactions, he made fun of my “no thank yous.” What can I say? There’s an ingrained and often stifling politeness that’s bred into us in the American South. Even though I knew I was talking to a machine, I couldn’t not be nice to it.
My first thought in the immediate aftermath was that the whole thing was painless. My order wasn’t complicated, but it was correct. The machine never stumbled over itself or asked for clarification. It knew what I wanted and the humans at the window gave it to me. A few conversations with friends and a quick scan of social media in the area showed that other people have had much the same interactions with Bo-Linda.
The drive-thru AI, much like the chicken it sold me, is fine. Forgettable.
It was later, sitting at home and doing a little research for the story, that concerns popped up. OpenAI CEO Sam Altman has said that people saying “please” and “thank you” to ChatGPT has cost the company tens of millions of dollars. How much water and energy had I burned being polite to Bo-Linda the chatbot?
Sometimes it feels like the answers to these questions don’t matter. We’re barreling forward into the AI future, whether we like it or not. Data centers are springing up across America and nuclear power plants are coming back online, so Bojangles can make a little more money and so people in the drive-thru can feel a little less friction before eating their meal.
This is how a new technology takes over, what it feels like right before it becomes ubiquitous. One day you wake up and the cameras are everywhere, able to recognize your face and chart your movements across the city you live in. One day you look up and everyone has their face buried in their phone. It happened by degrees, but so gradually you didn’t notice. There were signs along the way, dangers and warnings.
But mostly, it was fine, as boring and routine as ordering chicken at a drive-thru.
So-called 3D-printed ghost guns are untraceable firearms that can be assembled at home. But cutting-edge work from a forensic expert in California and researchers at the University of Oklahoma may soon show that investigators can trace a 3D-printed object to the specific printer that made it.
Weapons manufactured using 3D printers have been a subject of Biden-era legislation and recent Supreme Court scrutiny. It’s possible to download the blueprints for a firearm and build it in your home. There’s no serial number to track and no store to scrutinize your purchase. Luigi Mangione allegedly used a ghost gun to assassinate UnitedHealthcare CEO Brian Thompson.
For a while, I have said that the AI slop endgame, for social media companies, is creating a hyper-personalized feed full of highly specific content about anything one could possibly imagine. Because AI slop is so easy to make and because social media algorithms are so personalized, Facebook, Instagram, TikTok, or YouTube can feed users anything the platforms perceive they could possibly want. This means AI slop makers are exploring ever more niche areas of content.
Case in point: Facebook AI slop about the horrific and deadly Texas flood. Topical AI content about disasters, war, current events, and news stories is at this point so commonplace that it is sadly barely notable, and AI-powered “misinformation” about horrible events is all over every social media feed I can think of. But as we document our descent into this hellhole, I thought some AI slop surfaced on Bluesky by Christina Stephens was particularly notable:
Yesterday, Silicon Valley trade publication The Information launched TITV, a live-streaming news program. For its first episode, the company’s founder Jessica Lessin interviewed Mark Zuckerberg about his recent attempts to hire AI executives away from other companies for huge sums of money.
It is impossible to say what actually happened in the interview, however, because there were audio problems with the livestream—the Zuckerberg segment was completely silent, giving us a minute and 50 seconds of video of Zuckerberg’s face moving with no sound coming out.
In recent weeks, however, Zuckerberg has really ramped up his rhetoric about “superintelligence,” throwing hundreds of millions of dollars at AI researchers in attempts to hire them at Meta, and beginning work on massive data centers around the country to power his AI. Zuckerberg and Meta have talked endlessly about AI as a game changer, while being unable or unwilling to keep AI-generated misinformation and slop off of Meta platforms.
Meanwhile, there has been increased discussion about the possible power of using AI for journalism, which we have written about extensively. I saw the silent interview with Zuckerberg as both an opportunity to check out the awesome abilities of Meta’s AI, and to test AI tools at a task some of them are supposedly good at—reading lips. If these technologies worked as well as their creators have promised, it might be possible to provide the world with information from The Lost Mark Zuckerberg Interview.
Here is how my experiment went.
Because Zuckerberg was obviously talking about Meta AI, I thought I would try his tool first. I took video from the interview and asked Meta AI to lip read it to tell me what they were saying:
“I’d love to help! However, I’m a text-based AI and don’t have capability to visually lip read videos,” Meta AI said. It then suggested that I “find a professional lip reader or transcription service that specializes in lip reading videos” or “use automated lip reading software.”
I then tried to upload still images to see if Meta AI could parse them:
When I uploaded additional images, Meta AI was also not able to provide any information about what was being said.
I then went to ChatGPT, because Zuckerberg is reportedly offering pay packages of up to $300 million to OpenAI staffers to come work at Meta. I uploaded the 1:50 video and ChatGPT told me “the video processing took too long and timed out.” I then uploaded a 25 second clip and it told me “the system is still timing out while trying to extract frames.” I then asked it to do the first five seconds and it said “even with the shorter clip and smaller scope (first 5 seconds), the system timed out.” I then asked for it to extract one single frame, and it said “it looks like the system is currently unable to extract even a single frame from the video file.” ChatGPT then asked me to take a screenshot of Zuckerberg. I sent it this:
And ChatGPT said “the person appears to be producing a sound like ‘f’ or ‘v’ (as in ‘video’ or ‘very’),” but that “possibly ‘m’ or ‘b,’ depending on the next motion.” I then shared the 10 frames around that single screenshot, and ChatGPT said “after closely analyzing the progression of lip shapes and facial motion,” the “probable lip-read phrase” was “This is version.” I then uploaded 10 more frames and it said the “full phrase so far (high confidence): ‘This version is just.’”
I then decided to try to extract every frame from the video and upload it to ChatGPT.
I went to a website called frame-extractor.com and cut the video into 3,000 frames. After it had processed 700 of them, I tried to upload them to ChatGPT and it did not work. I then decided I would go 10 frames at a time from the beginning of the clip. Even though I sent an entirely different portion of the video and told ChatGPT we were starting from a different part, it still said the clip began with “this version is.” I continued uploading frames, 10 at a time. These frames included both Lessin and Zuckerberg, not just Zuckerberg.
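None of this requires a special website, for what it’s worth; the same extract-and-batch workflow is a few lines of scripting. Here is a minimal sketch, assuming ffmpeg is installed and using stand-in file names rather than anything from my actual experiment:

```python
import pathlib
import subprocess

# Dump every frame of the clip as a numbered PNG using ffmpeg.
# "clip.mp4" and "frames/" are stand-in names for this sketch.
pathlib.Path("frames").mkdir(exist_ok=True)
subprocess.run(
    ["ffmpeg", "-i", "clip.mp4", "frames/frame_%04d.png"],
    check=True,
)

# Group the extracted frames into batches of 10 for upload.
frames = sorted(pathlib.Path("frames").glob("frame_*.png"))
batches = [frames[i : i + 10] for i in range(0, len(frames), 10)]
print(f"{len(frames)} frames in {len(batches)} batches of up to 10")
```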
ChatGPT slowly began to create a surely accurate transcript of the lost audio of this interview: “This version is just that it we built,” ChatGPT said. As I added more and more frames, it refined the answer: “This version is what we’re going to do,” it said. Finally, it seemed to make a breakthrough. “Is this version of LLaMA more powerful than the one we released last year?” the ChatGPT transcript said. It was not clear about who was speaking, however. ChatGPT said "her mouth movements," but then explained that the "speaker is the man on the left" (Lessin, not Zuckerberg, was speaking in these frames).
I had uploaded 40 of a total of 3,000 frames. Zoom video is usually 30 fps, so those 40 frames cover roughly 1.3 seconds, in which Lessin and/or Zuckerberg apparently said “Is this version of LLaMA more powerful than the one we released last year?” I then recorded this phrase at a normal speaking speed, and it took about four seconds. Just a data point.
I then got an error message from ChatGPT, and got rate-limited because I was uploading too much data. It told me that I needed to wait three hours to try again.
Finally, I did what Meta AI told me to do, and tried a bespoke AI lip reading app. I found one called ReadTheirLips.com, which is powered by Symphonic Labs. This is a tool that people have been trying to use in recent months to figure out what Donald Trump and Jeffrey Epstein were saying to each other in silent b-roll news footage, without much success.
I paid $10 for three minutes worth of transcription and asked it to lip read using its “Multiface Detection.” After waiting 10 minutes, I got an error message that said “Transcription failed, no credits have been used, try again later.” I then asked it to focus only on Zuckerberg, and actually got some text. I separately asked it to focus on Lessin.
Here is a transcript of what the AI says they were talking about. It has not been edited for clarity and I have no idea which parts, if any, are accurate:
LESSIN: Thanks for joining us again, TV. We're happy to have you already this morning. News that you've spent even more money with your big announcement about your new supercomputers. We'll get to that, but to start, you've been in huge scale like I.
ZUCKERBERG: Happy TO BE HERE. We're GOING TO TALK A LITTLE BIT ABOUT META'S AI STRATEGY. It's BEEN BUSY, YOU KNOW? I THINK THE MOST EXCITING THING THIS YEAR IS THAT WE'RE STARTING TO SEE EARLY GLIMPSES OF SELF-IMPROVEMENT WITH THE MODELS, WHICH MEANS THAT DEVELOPING SUPERINTELLIGENCE IS NOW.
LESSIN: You HAVE BEEN ON A PLANE OF AI HIRING, WHY AND WHY NOW?
ZUCKERBERG: Insight, and we just want to make sure that we really strengthen the effort as much as possible to go for it. Our mission with a lab is to deliver personal superintelligence to everyone in the world, so that way, you know, we can put that power in every individual's hand. I'm really excited about it.
LESSIN: I DON'T KNOW, I DON'T KNOW, I DON'T KNOW.
ZUCKERBERG: Than ONE OF THE OTHER LABS YOU'RE DOING, AND YOU KNOW MY VIEW IS THAT THIS IS GOING TO BE SOMETHING THAT IS THE MOST IMPORTANT TECHNOLOGY IN OUR LIVES. IT'S GOING TO UNDERPIN HOW WE DEVELOP EVERYTHING AND THE COMPANY, AND IT'S GOING TO AFFECT SOCIETY VERY WISELY. SO WE JUST WANT TO MAKE SURE WE GET THE BEST FOCUS.
LESSIN: Did YOU FEEL LIKE YOU WERE BEHIND WHAT WAS COMING OUT OF LAW BEFORE I'M NOT ADJUSTING.
ZUCKERBERG: On THIS FROM ENTREPRENEURS TO RESEARCHERS TO ENGINEERS WORKING ON THIS HIDDEN INFRASTRUCTURE, AND THEN OF COURSE WE WANT TO BACK IT UP WITH JUST AN ABSOLUTELY MASSIVE AMOUNT OF COMPUTER RESEARCH, WHICH WE CAN SUPPORT BECAUSE WE HAVE A VERY STRONG BUSINESS MODEL THAT THROWS OFF A LOT OF CAPITAL. LET'S TALK ABOUT.
LESSIN: Like THIS SUMMER, PARTICULARLY, YOU SWITCH GEARS A LITTLE BIT.
ZUCKERBERG: I THINK THE FIELD IS ACCELERATING, YOU KNOW, WE KEEP ON TRACK FOR WHERE WE WANT TO BE, AND THE FIELD KEEPS US MOVING FORWARD.
The video ends there, and it cuts back to the studio.
Update: The Information provided 404 Media with several clips (with audio) from Lessin's interview with Zuckerberg, as well as a real transcript of the interview. Here is the real segment of what was said. As you can see, the AI captured the gist of this portion of the interview, and actually did not do too badly:
Lessin: Mark, thanks for joining TITV. We're happy to have you here. Already this morning, [there’s] news that you've spent even more money with your big announcement about your new supercomputers. We'll get to that. But to start, you took a huge stake in ScaleAI. You have been on a blitz of AI hiring. Why, and why now?
Zuckerberg: Yeah, it's been busy. You know, I think the most exciting thing this year is that we're starting to see early glimpses of self-improvement with the models, which means that developing super intelligence is now in sight, and we just want to make sure that we really strengthen the effort as much as possible to go for it. Our mission with the lab is to deliver personal super intelligence to everyone in the world, so that way we can put that power in every individual's hand. And I'm really excited about it. It's a different thing than what the other labs are doing.
And my view is that this is going to be something that is the most important technology in our lives. It's going to underpin how we develop everything at the company, and it's going to affect society very widely. So we just want to make sure that we get the best folks to work on this, from entrepreneurs to researchers to engineers working on the data and infrastructure.
And then, of course, we want to back up with just an absolutely massive amount of compute which we can support, because we have a very strong business model that throws off a lot of capital.
Lessin: Did you feel like you were behind coming out of Llama 4? It seems like this summer, in particular, you switched gears a little bit.
Zuckerberg: I think the field is accelerating, you know, we keep on having goals for where we want to be. And then the field keeps on moving faster than we expect.
ICE Block, an app that lets users warn others about the location of ICE officers, and which for a short while topped the App Store’s social media chart, does protect users’ privacy and doesn’t share your location with third parties, according to a recent analysis from a security researcher. ICE Block already claimed that it did not collect any data from the app; the analysis corroborates that claim.
“It’s not uploading your location at all, when you make a report that report isn’t associated with your device in any way, and there are no third party services that it talks to or sends data to,” Cooper Quintin, senior public interest technologist at the Electronic Frontier Foundation (EFF), who analyzed the ICE Block app, told 404 Media.
Hugging Face, a company with a multi-billion dollar valuation and one of the most commonly used platforms for sharing AI tools and resources, is hosting over 5,000 AI image generation models that are designed to recreate the likeness of real people. These models were all previously hosted on Civitai, an AI model sharing platform that 404 Media reporting has shown was used for creating nonconsensual pornography, until Civitai banned them due to pressure from payment processors.
Users downloaded the models from Civitai and reuploaded them to Hugging Face as part of a concerted community effort to archive the models after Civitai announced in May that it would ban them. In that announcement, Civitai said it would give the people who originally uploaded the models “a short period of time” before they were removed. Civitai users began organizing an archiving effort on Discord earlier in May, after Civitai indicated it had to make content policy changes due to pressure from payment processors, and the effort kicked into high gear when Civitai announced the new “real people” model policy.
At the time of writing, the Discord channel has hundreds of members who are still finding and sharing models that have been removed from Civitai and are reuploading them to Hugging Face. Some users have even shared a piece of software, also hosted on Hugging Face, which allows users to automatically upload Civitai models to Hugging Face in batches.
Hugging Face did not respond to multiple requests for comment. It also did not respond to specific questions about whether and how it plans to moderate these models, given that they were previously hosted on a platform primarily used for AI-generated pornography and, as our reporting shows, were used to create nonconsensual pornography.
I found the Civitai models of real people that were reuploaded to Hugging Face thanks to a paper I covered in which researchers scraped Civitai. The paper showed that the platform was primarily used for pornographic content, and that it deleted at least 50,000 AI models designed to recreate the likeness of real people once it changed its policy in May. The researchers, Laura Wagner and Eva Cetinic from the University of Zurich, provided me with a spreadsheet of all the deleted models, which included the names of the models (almost always the name of a female celebrity or lesser-known internet personality), links to where each was previously hosted on Civitai, and the SHA256 hash Civitai uses to identify every model hosted on its site.
The people who are reuploading the Civitai models to Hugging Face are seemingly trying to hide the purpose of those models on Hugging Face. On Hugging Face, these models have generic names and URLs like “LORA” or “Test model.” Users can’t tell that these models are used to generate the likeness of real people just by looking at their Hugging Face pages, nor would they be able to find them by searching for the names of celebrities on Hugging Face. In order to find them, users can go to a separate website the Civitai archivists created. There, they can enter the name of a Civitai model, the link where it used to be hosted on Civitai before it was deleted, or the model’s SHA256 hash. All of these will lead users to a page that explains what the model is, shows its name, and includes several images showing the kind of images it can generate. At the bottom of that page is a link to one or more Hugging Face “mirrors” where the model has been reuploaded.
By using Wagner’s and Cetinic’s data and entering it into this Civitai archive site, I was able to find the Civitai models hosted on Hugging Face.
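Because Civitai identifies every model by its SHA256 hash, and a SHA256 digest depends only on a file’s bytes, this kind of matching is mechanically simple. Here is a minimal sketch of the check; the file and column names are hypothetical stand-ins, not the layout of the researchers’ actual spreadsheet:

```python
import csv
import hashlib

def sha256_of_file(path: str) -> str:
    """Compute the SHA256 hex digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical CSV with a "sha256" column listing the deleted Civitai models.
with open("deleted_models.csv", newline="") as f:
    known_hashes = {row["sha256"].lower() for row in csv.DictReader(f)}

digest = sha256_of_file("downloaded_model.safetensors")
if digest in known_hashes:
    print(f"{digest} matches a model that was deleted from Civitai")
```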
Hugging Face’s content policy bans “Unlawful, defamatory, fraudulent, or intentionally deceptive Content (e.g., disinformation, phishing, scams, inauthentic behavior),” as well as “Sexual Content used for harassment, bullying, or created without explicit consent.” Models that generate the likeness of real people don’t have to be used for unlawful or defamatory ends, and they only produce sexual content if people choose to use them that way. There’s nothing in Hugging Face’s content policy that explicitly forbids AI models that recreate the likeness of real people.
However, the Hugging Face Ethics & Society group, which is “committed to operationalizing ethics at the cutting-edge of machine learning,” has identified six “high-level categories for describing ethical aspects of machine learning work,” one of which is that AI should be “Consentful.”
“Consentful technology supports the self-determination of people who use and are affected by these technologies,” the company explains. Examples of this, the company says, include “Avoiding extractive, chauvinist, ‘dark,’ and otherwise ‘unethical’ patterns of engagement.”
Other AI models that recreate the likeness of real people could conceivably not violate any of these principles. For example, two of the deleted Civitai models that were reuploaded to Hugging Face were designed to recreate the likeness of Vladimir Putin, which in theory people might want to use to mock or criticize the Russian president. However, the vast majority of the models are of female celebrities, which my reporting has shown are being used to create nonconsensual sexual content, and which were deleted en masse from Civitai because of pressure from payment processors that didn’t want to be associated with that type of media.
In the two years that I’ve been reporting about Civitai, a platform for sharing AI image generation models that has been instrumental in the production of AI-generated non-consensual porn, Civitai has consistently argued that the amount of adult content on the site has been overstated. But new research shows that, if anything, the amount of adult content on Civitai has been underestimated.
In their paper, “Perpetuating Misogyny with Generative AI: How Model Personalization Normalizes Gendered Harm,” researchers Laura Wagner and Eva Cetinic from the University of Zurich studied more than 40 million user-generated images on Civitai and over 230,000 models. They found “a disproportionate rise in not-safe-for-work (NSFW) content and a significant number of models intended to mimic real individuals” on the platform, they write in the paper.
“What began as a promising creative breakthrough in TTI [text-to-image] generation and model personalization, has devolved into a pipeline for the large-scale production of sensational, biased, and abusive content. The open-source nature of TTI technologies, proclaimed as a democratizing force in generative AI, has also enabled the propagation of models that perpetuate hypersexualized imagery and nonconsensual deepfakes,” Wagner and Cetinic write in their paper. “Several indicators suggest a descent into a self-reinforcing feedback loop of platform decay. These include a dramatic increase in NSFW imagery, from 41% to 80% in two years, as well as the community’s normalization of deepfakes, misogynistic tropes, and other exploitative content.”
To visualize just how dominant adult content was on Civitai, check the chart below, which shows the distribution of images by “NSFW browsing levels” over time. These categories, which are inspired by the Motion Picture Association film rating system and are used by Civitai to tag images, show that adult content was always a significant portion of all images hosted on the site, but that the portion of “overtly sexual, or disturbing” content grew as the site became more popular, and exploded starting in 2024. The chart is based on Civitai’s own numbers and categorization system, which the researchers scraped from the site. It likely undercounts the number of explicit images on the site because, as both the researchers and I observed during my reporting, not all adult content is tagged as such.
In December 2023, Civitai CEO Justin Maier told VentureBeat that “less than 20% of the posted content is what we would consider ‘PG-13’ or above.” When I reached Maier for comment for this article, he told me that “The VentureBeat figure cited a December 2023 snapshot, when adult posts were a minority. The mix shifted in 2024 as many NSFW creators migrated from platforms that no longer allow that content.”
However, the data in the paper shows that by October of 2023, 56 percent of all images on the site were tagged as “NSFW” and were designated by Civitai as “PG-13” or above.
In May, Civitai announced it’s banning all AI image generation models designed to recreate the likeness of real people because of pressure from payment processors. Since the authors of the paper were already tracking hundreds of thousands of models hosted on Civitai, they could easily see which models were removed, giving us a first clear look at how common those models were.
Overall, they saw that more than 50,000 models designed to AI-generate the likeness of real people were removed because of the ban. These are models that Civitai itself tagged as “person of interest,” the tag it uses to indicate a model recreates the likeness of a real person, so the actual number of models depicting real people is likely higher.
It’s hard to say whether the most popular AI models on Civitai were popular just because they were used to generate explicit images, because people could use models tagged as NSFW to generate non-nude images and vice versa. For example, according to the data collected by the researchers, the most popular AI image generation model on Civitai was EasyNegative, with almost 600,000 downloads. It’s not tagged or promoted as a model for generating pornography, but images that users created with it, which are shared on its Civitai model page, show it is commonly used that way.
Other very popular models on Civitai are clearly designed to generate explicit images. The sixth most popular model with 360,000 downloads is Nudify XL: Better Bodies, which its creator says is for “nude female frontals.” A model called Realistic Vaginas - God Pussy 1 had 256,000 downloads. The POV Squatting Cowgirl LoRA model, which Civitai tagged as a “sex” model, had 189,000 downloads.
The authors of the paper also conducted deeper analysis of the 40,000 most downloaded models on Civitai. In the 11,151 models where they could extract textual training data, meaning text that indicates what kind of images the models were trained on, they found “specifically abusive terms.” 5.6 percent included the keywords “loli” (558 models) and/or “shota” (69 models), Japanese terms commonly used to refer to sexualized depictions of pre-pubescent girls and boys. About 2.1 percent (189 models) included the keyword “rape.”
The data shows with clear numbers what we have long argued at 404 Media: adult content drives technological innovation and early adoption, and this has been especially true in the world of generative AI. Despite its protestations to the contrary, Civitai, one of the fastest growing platforms in that industry and one the influential Silicon Valley venture capital firm Andreessen Horowitz invested in, grew because of explicit content, much of which was nonconsensual.
“The rapid rise of NSFW content, the over-representation of young female subjects, and the prioritization of sensational content to drive engagement reflect an exploitative, even abusive dynamic,” the researchers wrote. “Additionally, structural discrimination embedded in today’s open-source TTI tools and models have the potential to cause significant downstream harm as they might become widely adopted and even integrated into future consumer applications.”
Adult content driving innovation and early adoption doesn’t have to be harmful. As the researchers write, it’s the choices platforms like Civitai make that give us these outcomes.
“The contingent nature of technology, shaped by online communities, platform operators, lawmakers, and society as a whole, also creates opportunities for intervention,” they write. “Model-sharing hubs and social media platforms both have the capacity to implement safeguards that can limit the spread of abusive practices such as deepfake creation and abusive imagery.”
Many trains in the U.S. are vulnerable to a hack that can remotely lock a train’s brakes, according to the U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the researcher who discovered the vulnerability. The railroad industry has known about the vulnerability for more than a decade but only recently began to fix it.
Independent researcher Neil Smith first discovered the vulnerability, which can be exploited over radio frequencies, in 2012.
“All of the knowledge to generate the exploit already exists on the internet. AI could even build it for you,” Smith told 404 Media. “The physical aspect really only means that you could not exploit this over the internet from another country, you would need to be some physical distance from the train [so] that your signal is still received.”
The Moderate Party of Sweden has removed an AI tool from its website after people used it to generate videos of Prime Minister Ulf Kristersson asking Adolf Hitler for support. The tool allowed users to generate videos of Kristersson holding an AI-generated message in an attempt to promote the candidate ahead of the general election in Sweden next year.
Swedish television station TV4 used the tool to generate a video of Kristersson on a newspaper above the headline “Sweden needs Adolf Hitler” after it noticed that it had no guardrails or filters.
In the video TV4 generated using the website, Kristersson makes his pitch over stock footage of old people embracing. A woman runs through a field, the camera focusing on flowers while the sun twinkles in the background. Cut to Kristersson. He turns a blue board around. “We need you, Adolf Hitler,” it says.
The Moderates removed the AI system from their website, but the videos of Kristersson asking Hitler to join the Moderates remain on social media and TV4’s website.
In an attempt to bolster the party's ranks, the Moderates launched a website that allowed users to generate a custom video of Kristersson asking someone to join the party. The idea was probably for party members to plug in the names of friends and family members and share what appeared to be a personalized message from the PM asking for their support.
In the video, Kristersson stands in front of stairs, makes his pitch, and turns around a blue tablet that bears a personalized message to the viewer. The system apparently had no guardrails or filters and Swedish television station TV4 was able to plug in the names Adolf Hitler, Ugandan dictator Idi Amin, and Norwegian mass murderer Anders Breivik.
The Moderate Party did not return 404 Media’s request for a comment about the situation, but told TV4 it shut down the site as soon as it learned people were using it to generate messages with inappropriate names.
The Moderate Party’s AI-generated video was simple. It filmed the PM holding a blue board it could easily overlay with input from a user, then used AI to generate the fake newspaper and a few other slides. Preventing people from typing in “Hitler” or “Anders Breivik” would have been as simple as maintaining a list of prohibited names, words, and phrases, something that nearly every video game and online service does. Users are good at bypassing guardrails, but the Moderates’ AI tool appeared to have none.
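For illustration, here is roughly what such a denylist check could look like. This is a minimal sketch, not the Moderates’ actual system; the list contents and normalization rules are stand-ins:

```python
import unicodedata

# Hypothetical denylist; a real one would be far larger and actively maintained.
PROHIBITED = ["hitler", "breivik", "idi amin"]

def normalize(text: str) -> str:
    """Lowercase the input and strip accents so variant spellings still match."""
    decomposed = unicodedata.normalize("NFKD", text)
    stripped = "".join(c for c in decomposed if not unicodedata.combining(c))
    return stripped.casefold()

def is_allowed(name: str) -> bool:
    """Reject any input that contains a prohibited substring."""
    cleaned = normalize(name)
    return not any(term in cleaned for term in PROHIBITED)

assert is_allowed("Sven Svensson")
assert not is_allowed("Adolf Hitler")
```

Substring matching is crude and famously over-blocks legitimate names (the “Scunthorpe problem”), but even a filter this simple would have stopped the exact inputs TV4 used.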
Users making content you don’t want to be associated with is one of the oldest and most well known problems in AI. If you release a chatbot, generative photo system, or automated political greeting generator, someone will use it to reference the Nazis or make nonconsensual porn.
When Microsoft launched Tay in 2016, users turned it into a Hitler-loving white nationalist in a few hours. Eight years later, another Microsoft AI product had a loophole that let people make AI-generated nudes of Taylor Swift. Earlier this year, Instagram’s AI chatbots lied about being licensed therapists.
As immigration raids roll out across the U.S., those affected are processing the experience in the normal 2025 way—via vertical video.
Across social media, people are uploading clips with uncanny-valley titles like “A normal day for me after being deported to Mexico” and “3 things I wish I knew before self-deporting from the US!” These posts have the normal shape, voiceovers, and fonts of influencer content, but their dystopian topic reflects the whiplash of the current historical moment.
Doomscrolling last week, a particular clip caught my eye. A man sits on the bottom bunk of a metal bed, staring down at the floor, with the caption “Empezando una nueva vida después de que me Deportaran a México” (“Starting a new life after being Deported to Mexico”).
On May 23, we got a very interesting email from Ghost, the service we use to make 404 Media. “Paid subscription started,” the email said, which is the subject line of all of the automated emails we get when someone subscribes to 404 Media. The interesting thing about this email was that the new subscriber had been referred to 404 Media directly from chatgpt.com, meaning the person clicked a link to 404 Media from within a ChatGPT window. It is the first and only time that ChatGPT has ever sent us a paid subscriber.
From what I can tell, ChatGPT.com has sent us 1,600 pageviews since we founded 404 Media nearly two years ago. To give you a sense of where this slots in, this is slightly fewer than the Czech news aggregator novinky.cz, the Hungarian news portal Telex.hu, the Polish news aggregator Wykop.pl, and barely more than the Russian news aggregator Dzen.ru, the paywall jumping website removepaywall.com, and a computer graphics job board called 80.lv. In that same time, Google has sent roughly 3 million visitors, or 187,400 percent more than ChatGPT.
This is really neither here nor there, because we have tried to set our website up to block ChatGPT from scraping us, though it is clear this is not always working. But even for sites that don’t block ChatGPT, new research from the internet infrastructure company Cloudflare suggests that OpenAI is crawling 1,500 individual webpages for every one visitor it sends to a website. Google traffic has begun to dry up as both Google’s own AI snippets and AI-powered SEO spam have obliterated the business models of many media websites.
This general dynamic—plummeting traffic because of AI snippets, ChatGPT, AI slop, Twitter no workie so good no more—has been called the “traffic apocalypse” and has all but killed some smaller websites and has been blamed by executives for hundreds of layoffs at larger ones.
Despite the fact that generative AI has been a destructive force against their businesses, their industry, and the truth more broadly, media executives still see AI as a business opportunity and a shiny object that they can tell investors and their staffs that they are very bullish on. They have to say this, I guess, because everything else they have tried hasn’t worked, and pretending that they are forward thinking or have any clue what they are doing will perhaps allow a specific type of media executive to squeeze out a few more months of salary.
But pivoting to AI is not a business strategy. Telling journalists they must use AI is not a business strategy. Partnering with AI companies is a business move, but becoming reliant on revenue from tech giants that are building a machine to duplicate the work you have already created is not a smart or sustainable move, and therefore it is not a business strategy either. It is true that AI is changing the internet and threatening journalists and media outlets. But the only AI-related business strategy that makes any sense is one where media companies and journalists go to great pains to show their audiences that they are human beings, and that the work they are doing is worth supporting because it is human work that is vital to their audiences. This is something GQ’s editorial director Will Welch recently told New York magazine: “The good news for any digital publisher is that the new game we all have to play is also a sustainable one: You have to build a direct relationship with your core readers,” he said.
Becoming an “AI-first” media company has become a buzzword that execs can point at to explain that their businesses can use AI to become more ‘efficient’ and thus have a chance to become more profitable. Often, but not always, this message comes from executives who are laying off large swaths of their human staff.
In May, Business Insider laid off 21 percent of its workforce. In her layoff letter, Business Insider’s CEO Barbara Peng said “there’s a huge opportunity for companies who harness AI first.” She told the remaining employees there that they are “fully embracing AI,” “we are going all-in on AI,” and said “over 70 percent of Business Insider employees are already using Enterprise ChatGPT regularly (our goal is 100%), and we’re building prompt libraries and sharing everyday use cases that help us work faster, smarter, and better.” She added they are “exploring how AI can boost operations across shared services, helping us scale and operate more efficiently.”
Last year, Hearst Newspapers executives, who operate 78 newspapers nationwide, told the company in an all-hands meeting audio obtained by 404 Media that they are “leaning into [AI] as Hearst overall, the entire corporation.” Examples given in the meeting included using AI for slide decks, a “quiz generation tool” for readers, translations, a tool called Dispatch, which is an email summarization tool, and a tool called “Assembly,” which is “basically a public meeting monitor, transcriber, summarizer, all in one. What it does is it goes into publicly posted meeting videos online, transcribes them automatically, [and] automatically alerts journalists through Slack about what’s going on and links to the transcript.”
Welcome back to the Abstract! Here are the studies that caught my eye this week.
First up, a bummer! NASA is facing devastating cuts to Earth science, and science in general, which is pretty important for an agency tasked with understanding the universe. I try to keep this newsletter relatively downer-free, but current events are not cooperating with this aim.
Then: the early bird gets the comet, a donkey destined for decapitation, a grand universal theory of coolness, and the case of the stolen exomoons.
Knowing about the planet we live on is good, actually
This may sound obvious for the esteemed readers of this newsletter, but it apparently bears repeating: Earth, our home planet and the only known life-bearing world in the vast expanse of spacetime, is worthy of some passing interest.
This is true because—and I can’t stress this enough—we live on planet Earth. All our bones and guts and snacks are here, so it’s probably wise to get the lay of the land. But as a bonus, Earth is wildly interesting, a restless substrate unlike anything we have seen in our own solar system or beyond it.
Despite these considerations, the Trump administration plans to gut NASA’s Earth Science Division (ESD), the world leader in world-watching. In a letter published in Science, researchers from the recently dissolved NASA Earth Science Advisory Committee lamented the administration’s proposed budget cut of more than 50 percent to ESD, warning that it “would come at a profound cost to US society and scientific leadership.”
“NASA ESD accounted for just 0.03 percent of US spending in 2024,” said researchers led by Dylan Millet of the University of Minnesota. “This investment returns its value many times over by improving predictions, by spurring technological innovation and high-tech jobs, and by forging the knowledge of the planet that is needed for short- and long-term planning.”
“The budget cuts proposed for ESD would cancel crucial satellites that observe Earth and its atmosphere, gut US science and engineering expertise, and potentially lead to the closure of NASA research centers,” the team said. “Given that the cuts would prevent the US from training and preparing the next generation of the scientific and technical workforce, the consequences would be long-lasting.”
This is just the latest appeal from scientists on behalf of NASA, which is also facing catastrophic cuts to its overall Science Mission Directorate (SMD), the arm that oversees ESD. Last week, every past administrator of the SMD, the agency's top job for science leadership, signed a letter urging Congress to reject the cuts of about 47 percent to the directorate.
“Each one of us knows what it’s like to shepherd an ambitious project forward, knowing that its payoff will come years after we have left the agency,” the administrators said. “This proposed budget ends nearly all future investments for both new missions and advanced technology for science. It walks away from dozens of current, extraordinarily successful and productive science missions in extended operations on a combined budget that is only about three percent of NASA’s annual funding.”
Fortunately, the US Senate appropriations committee has voted in favor of a bill rejecting the science cuts, but it has a long road to go down before taking effect, with plenty of opportunity to fall apart.
Needless to say, turning a blind eye to Earth at a time when our activities are reshaping its climate and biosphere would be a huge loss. As one last twist of the knife, Trump just gave the top job at NASA to the guy from The Real World—all while ignoring the actual real world.
Prepare for the return of everyone’s favorite space iceball: Comet Halley. Scientists have flagged the comet’s next visit as arriving in the summer of 2061 and proposed an audacious space rendezvous with Halley on its wild ride toward the Sun.
“Although the crucial phases of the comet’s ingress in the inner Solar System are still more than 30 years in the future, we started to examine the feasibility of a space mission using present-day rockets and technologies,” said researchers led by Cesare Barbieri of the University of Padova.
God bless the astro-preppers. Sure, this event will occur decades into the future, in the twilight of the millennials. But they make a pretty good case that we should get moving if we want to take full advantage of the iconic visitor, which “will be better positioned for observation from terrestrial observers than during the 1985–1986 apparition, as it will be on the same side of the Sun as the Earth.”
“We stress that a concerted effort is needed in the current decade to plan and approve a rendezvous mission to [Comet Halley],” the team concluded. “Indeed, the scenario here described needs launches before 2040, less than 15 years from now.”
Who would have guessed that a study about a Bronze Age donkey corpse would be a tear-jerker? Researchers have shed new light on female donkeys (or jennies), which were imported from Egypt to the Tell es-Safi/Gath site in Israel some 5,000 years ago for ritual purposes.
One specimen, called EQ1, is particularly noteworthy because it was decapitated and had its limbs tied together, unlike all the other donkeys buried at the site. "It is evident the animal was sacrificed, the head entirely cut off and carefully placed on the abdomen facing in the opposite direction,” said researchers led by Elizabeth Arnold of Grand Valley State University.
The four donkey burials at Tell eṣ-Ṣâfi/Gath. Image: Arnold et al., 2025, PLOS One, CC-BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
“It can be deduced that even though EQ1 was grazed locally toward the end of her life, she was treated slightly differently from the other local equids,” the team continued. “This imported donkey was kept penned and foddered with hay that was harvested in the valley, a product of dry farmed cereals. This donkey was never herded with other livestock east of the site.”
The unique treatment of EQ1 suggests that the “Egyptian donkey might have been seen as an exotic and special animal, worthy of specific ritual use,” the study concluded. While it’s truly impressive that so much about this jenny can be inferred from her bones, there’s also an eerie pathos to imagining the animal hanging out for months, receiving preferential treatment, unaware of the sand flowing through the hourglass.
Science has invested its prodigious powers into the ultimate social mystery: What makes a person “cool”? Is it putting “cool” in scare quotes? (No!). Researchers have now developed a working theory of coolness by asking nearly 6,000 people in Australia, Chile, China, Hong Kong, Germany, India, Mexico, Nigeria, Spain, South Africa, South Korea, Turkey, and the United States to define this much-coveted attribute.
The results revealed six main traits associated with cool people, which were distinct from traits linked with “good” people. “Cool people are perceived to be more extraverted, hedonistic, powerful, adventurous, open, and autonomous, whereas good people are more conforming, traditional, secure, warm, agreeable, universalistic, conscientious, and calm,” said authors Todd Pezzuti of Universidad Adolfo Ibáñez, Caleb Warren of the University of Arizona, and Jinjie Chen of the University of Georgia.
“This pattern is stable across countries, which suggests that the meaning of cool has crystallized on a similar set of values and traits around the globe,” the team said.
There you go, the cheat code to coolness. I’m exhausted just reading it.
The winner of Best Study Title this week goes to “Grand theft moons,” which explores how stars might steal moons from their own planets, and whether these “exomoons” could be habitable. The study models the formation of exomoons around giant gas worlds at various distances from their stars, measured in astronomical units (au), where one au is the distance between Earth and the Sun.
Planets with orbits of one or two au are more likely to sport exomoons in the habitable zone, but they are also at risk of stars yanking the exomoons away in brazen acts of “stellar theft,” according to the study.
“Our simulations show that moons with masses between Mars and Earth could form around planets with masses about ten times that of Jupiter, and many of these moons could potentially be habitable at 1–2 au stellar distances,” said researchers led by Zoltán Dencs of the Gothard Astrophysical Observatory. “These findings suggest that it is worth investigating not only rocky planets but also gas giants for Earth-like habitable environments.”
In addition to raising some fascinating questions, let’s hope this study inspires Rockstar to take the GTA franchise to outer space. I want to throw an astronaut off a lunar buggy and take it for a joyride across the Moon.
Tensor.Art, an AI image creation and model sharing site, announced on Friday that it is “temporarily” restricting AI models, tools, and posts related to pornographic content or the depiction of real-world celebrities due to pressure from payment processors.
The announcement is yet another example of payment processors acting as the ultimate arbiter of what kind of content can be easily made available online and those companies’ seemingly increased focus on AI-generated adult or nonconsensual content.
The news is especially significant following a similar change in policy from Civitai, an AI model sharing platform 404 Media reporting has shown was used for creating nonconsensual pornography. After Civitai banned AI models designed to generate the likeness of real people and certain types of adult content in May, many Civitai users and model creators migrated their models to Tensor.Art. The announcement listed three items in the “Scope of Impact” of the decision: banning “NSFW” content, banning content based on real-world celebrities, and temporarily disabling its “Civitai Import” feature, which allowed people to easily move their Civitai models to Tensor.Art.
Do you know anything else about where else these models are hosted? I would love to hear from you. Using a non-work device, you can message me securely on Signal at @emanuel.404. Otherwise, send me an email at [email protected].
“We fully understand that this is very frustrating for many creators and users 😞,” Tensor.Art said in its announcement on Discord. “Due to the previous controversy over real-person content on Civitai, TA [Tensor.Art] has unfortunately been affected because of the ‘Civitai import’ feature. Owing to mandatory requirements from credit card organizations and regulatory authorities, we are compelled to make this temporary decision.”
Tensor.Art also listed the “Reasons for Adjustment” as:
- Review requirements for high-risk content from credit card organizations and multiple national regulatory bodies
- Compliance measures necessary to maintain platform openness and creators’ ability to monetize
Tensor.Art said that these changes will take place within the next 72 hours, and asked model creators to clarify if their models are “safe for work” in order to “prevent unintended impact.”
It’s not clear what Tensor.Art will look like or what its policies will be at the end of this “temporary” period. Civitai made similar changes permanently and still hasn’t been able to renew service from its payment processing providers or find new ones. Tensor.Art, however, is suggesting it’s not ready to give up on that type of content.
“This is not the end,” Tensor.Art said in the announcement. “We are actively seeking solutions to minimize the impact of these restrictions and exploring compliant ways to restore currently hidden content. We remain committed to our original mission.”
Tensor.Art did not immediately respond to a request for comment.
This is Behind the Blog, where we share our behind-the-scenes thoughts about how a few of our top stories of the week came together. This week, we discuss choosing what to cover, fishing expeditions, and the library.
JOSEPH: There’s an interesting cybersecurity story going on: a politically motivated, right-wing extremist hacker broke into Columbia University, stole swathes of applicant and other data, and then leaked parts of it. That’s how you got this New York Times article about New York mayoral hopeful Zohran Mamdani applying to the university as Asian and African American. Predictably, some people are reading that in bad faith, when in reality it shows how stupid and regimented application processes can be in a diverse society.
And yet, there has been precious little reporting on the Columbia hack. Wired hasn’t covered it, and, until this story, neither has The Verge. Nor have The Chronicle of Higher Education, CyberScoop, 404 Media, TechCrunch, or Krebs on Security. These—including The Verge—are small to medium-sized entities, and there are any number of possible reasons why they didn’t pick it up. (On our end, it was partly because we were short-staffed during a national holiday, and partly because we didn’t immediately piece together how extraordinary this particular hack is.) But coverage at the much bigger, well-resourced institutions is also scanty. The Wall Street Journal passed on the story. Reuters has a brief on the initial outage; AP has a short write-up as well, which The Washington Post ran as part of their syndication deal.
Scientists are searching for signs of a “fifth force” at the center of our galaxy that could rewrite the rules of gravity and help to resolve some fundamental mysteries in the universe, according to a recent study in the journal Astronomy & Astrophysics.
For decades, researchers have speculated that exotic new physics could fill missing links in our current understanding of gravity, which is based on Einstein’s general relativity. One idea is that a hypothetical fifth force—in addition to gravity, electromagnetism, and the strong and weak nuclear forces—known as a Yukawa-type correction might subtly alter how gravity behaves over certain distances. A direct detection of this force could shed light on longstanding puzzles like the nature of dark matter, an unidentified substance that accounts for most mass in the universe, or the behavior of gravity at quantum scales.
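For context, searches like this typically write the correction as an extra Yukawa term on top of the Newtonian potential. In the standard parametrization (a general form used across this literature, not necessarily the exact notation of this paper):

$$V(r) = -\frac{GM}{r}\left(1 + \alpha\, e^{-r/\lambda}\right)$$

Here alpha sets the strength of the hypothetical fifth force relative to ordinary gravity (the “alpha” the study constrains, below) and lambda sets its range; at distances much larger than lambda the exponential term dies off and ordinary Newtonian gravity is recovered.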
Now, researchers have used the advanced GRAVITY instrument at the Very Large Telescope in Chile to look for hints of this correction near the supermassive black hole at the center of the Milky Way.
“The current theory of gravity is unable to explain some observations performed in the universe” such as “the presence of dark matter, or the expanding universe,” said Arianna Foschi, a postdoctoral researcher at the Paris Observatory and an author of the new study, in an email to 404 Media.
“One possible explanation for this may be that the theory of gravity is not complete yet and some modifications to explain those effects are needed,” she added. “We looked exactly for the presence of such a modification.”
Whereas gravity influences objects over massive cosmic distances, the Yukawa correction is predicted to be short-ranged and undetectable in local environments, such as our planet or the solar system. However, hints of this force, if it exists, could be observable near our galaxy’s supermassive black hole, Sagittarius A*, a chaotic region that showcases gravity at an extreme.
With that in mind, the GRAVITY collaboration trained its namesake instrument on a massive star called S2 that is very close to the supermassive black hole, orbiting it once every 16 years. Due to its proximity to the black hole, S2 has yielded many insights about gravity and general relativity, making it an attractive target for the team’s hunt for a fifth force.
The motion of S2, along with other stars around Sagittarius A*, “can be incredibly useful to check whether objects orbiting around a supermassive black hole follow the same rule as planets in the solar system,” Foschi said. “Observations suggest that indeed the law that makes S2 move is the same as the Earth, however there still can be modifications that cannot be seen ‘by eye’ but needed to be tested.”
As it turned out, the instrument’s precise measurements did not detect a fifth force, but they did tighten the limits on one. The results narrowed the allowed range of the force’s possible intensity, represented by the variable “alpha.”
“If before, alpha must be less than 0.01, now with our data we showed that it must be smaller than 0.003, significantly improving this constraint,” Foschi said.
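For context, a Yukawa-type correction is usually written as an exponential term added to the ordinary Newtonian potential. The study’s exact parameterization may differ, but a common textbook form is

V(r) = -\frac{GM}{r}\left(1 + \alpha\, e^{-r/\lambda}\right)

where G is the gravitational constant, M is the central mass, alpha sets the strength of the hypothetical fifth force, and lambda sets its range. Because the correction dies off exponentially once distances exceed lambda, the effect would be invisible in local environments like the solar system but potentially measurable in the extreme laboratory around Sagittarius A*; the new result says alpha can be at most 0.003 at the scales GRAVITY probes.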
Lorenzo Iorio, a physicist with the Italian Ministry of Education and Merit and an expert on modified theories of gravity, said in an email that the team’s approach made sense in principle, but that he had some concerns with the methods. Iorio, who was not involved in the study, cited updated formulas and variables that were left out of its models, but that he said might have improved its accuracy. For instance, the models did not account for the Lense-Thirring effect, which is a relativistic phenomenon near massive objects, or the influence of the accelerations of stars near S2, among other factors.
“I'd say that it is an interesting study that, rather, points towards the possibilities offered by this peculiar celestial laboratory (Sagittarius A* and the S stars),” Iorio said. “It should be repeated more accurately.”
Foschi acknowledged that those variables were not included in the models, but noted that the GRAVITY observations were not sensitive enough to capture many of those details—yet.
“An upgrade of the GRAVITY instrument is already on its way to increase the sensitivity and measure indeed these higher order effects, but we have to wait for that,” she noted.
“We would like to extend the same analysis to data of other stars around Sagittarius A* and the improving precision of the GRAVITY instrument will help us to do so. In this way we will be able to put on even stronger constraints.”
To that end, the GRAVITY collaboration plans to continue gazing at the center of the galaxy for signs of a fifth force, or any other modifications to gravity.
“If such a force is observed, it would be an incredible breakthrough in physics, because it would modify one of the oldest physical laws we have,” Foschi concluded. “It would have consequences in every field of physics.”
🌘
I am talking to Amina, an AI avatar that plays the role of a woman living in a refugee camp in Chad after she escaped violence in Sudan. “I’m hanging in there, yeah,” Amina tells me. “It’s tough here in the camp, but we’re staying strong.” I ask her how she came to be in a refugee camp. “I fled my hometown of Al Junaynah in Sudan back in June 2023, due to the violence, yeah?” it says. “It was a terrifying journey to escape the conflict. Now I’m trying to keep my kids safe and find some stability here in the camp. It’s not easy but we have to keep going, right?”
One of my conversations with ‘Amina’ (audio, 2:27).
Amina is an experiment, part of a pair of AI avatars created by the United Nations University Center for Policy Research (UNU-CPR), a research institution connected to the United Nations. It’s one that is sure to be controversial, considering that the UN itself says a major problem facing refugees is their dehumanization for political gain or convenience. The UNU-CPR project is using an inhuman technology in an attempt to help people learn more about what they are facing. The group also tested a soldier persona called “Abdalla,” which “simulates the behavior and decision-making patterns of an actual combatant, offering negotiators and mediators a possible tool to train for future high-stakes negotiations.”
At first, the Trump Mobile phone pre-order process was bumbling. Now the company is charging my card again and I have no idea why.

Last month I put down $100 to pre-order the Trump Organization’s forthcoming mobile phone, the T1. Or, I tried to. As I wrote at the time, the website went to an error page, charged my credit card the wrong amount of $64.70, and sent a confirmation email saying I would receive another confirmation email when my order had shipped, even though I hadn’t provided a shipping address.

I was surprised, then, to see another two charges on my card from Trump Mobile on Thursday, for $100 and $64.70 respectively. I did not expect or authorize these charges and will be trying to get my money back if they go through (they’re currently pending). I don’t know when I will get my phone. I also don’t know how to make the charges to my credit card stop, because other parts of the (since updated) website also return errors, and the customer service number I called on the website couldn’t help either.
Agents from Immigration and Customs Enforcement (ICE) have gained access to a massive database of health and car insurance claims and are using it to track down people they want to deport, according to internal ICE material viewed by 404 Media. The database, which contains details on more than 1.8 billion insurance claims and 58 million medical bills, with more added all the time, includes people’s names, addresses, telephone and tax identification numbers, license plates, and other sensitive personal information.
The news shows how ICE continues to try to leverage whatever data it is able to access or purchase as part of its deportation mission. It also highlights the existence of a database, called ISO ClaimSearch, that many members of the public have likely never heard of, let alone understand they may be included in. Traditionally, ISO ClaimSearch is used by insurers to identify people committing fraud or by police to recover stolen vehicles. Now, that database is being repurposed as a deportation tool.
💡
Do you know anything else about ICE's access to datasets like this? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at [email protected].
“ICE ERO use of this data reaffirms that ICE will stop at nothing to build a mass surveillance dragnet to track, surveil and criminalize all community members. Time and time again, ICE has shown us that it intends to build a mass surveillance system that nets all Americans. It is not about combatting crime, this is about the federal government having surveillance power and control over all Americans,” Julie Mao, co-founder and deputy director of Just Futures Law, told 404 Media in an email.