Teachers Are Not OK

Last month, I wrote an article about how schools were not prepared for ChatGPT and other generative AI tools, based on thousands of pages of public records I obtained from when ChatGPT was first released. As part of that article, I asked teachers to tell me how AI has changed how they teach.

The response from teachers and university professors was overwhelming. In my entire career, I’ve rarely gotten so many email responses to a single article, and I have never gotten so many thoughtful and comprehensive responses. 

One thing is clear: teachers are not OK. 

They describe trying to grade “hybrid essays half written by students and half written by robots,” trying to teach Spanish to kids who don’t know the meaning of the words they’re trying to teach them in English, and students who use AI in the middle of conversation. They describe spending hours grading papers that took their students seconds to generate: “I've been thinking more and more about how much time I am almost certainly spending grading and writing feedback for papers that were not even written by the student,” one teacher told me. “That sure feels like bullshit.”

💡
Have you lost your job to an AI? Has AI radically changed how you work (whether you're a teacher or not)? I would love to hear from you. Using a non-work device, you can message me securely on Signal at jason.404. Otherwise, send me an email at [email protected].

Below, I have compiled some of the responses I got. Some of the teachers were comfortable with their responses being used on the record along with their names. Others asked that I keep them anonymous because their school or school district forbids them from speaking to the press. The responses have been edited by 404 Media for length and clarity, but they are still really long. These are teachers, after all. 

Robert W. Gehl, Ontario Research Chair of Digital Governance for Social Justice at York University in Toronto

Simply put, AI tools are ubiquitous. I am on academic honesty committees and the number of cases where students have admitted to using these tools to cheat on their work has exploded.

I think generative AI is incredibly destructive to our teaching of university students. We ask them to read, reflect upon, write about, and discuss ideas. That's all in service of our goal to help train them to be critical citizens. GenAI can simulate all of the steps: it can summarize readings, pull out key concepts, draft text, and even generate ideas for discussion. But that would be like going to the gym and asking a robot to lift weights for you. 

"Honestly, if we ejected all the genAI tools into the sun, I would be quite pleased."

We need to rethink higher ed, grading, the whole thing. I think part of the problem is that we've been inconsistent in rules about genAI use. Some profs ban it altogether, while others attempt to carve out acceptable uses. The problem is the line between acceptable and unacceptable use. For example, some profs say students can use genAI for "idea generation" but then prohibit using it for writing text. Where's the line between those? In addition, universities are contracting with companies like Microsoft, Adobe, and Google for digital services, and those companies are constantly pushing their AI tools. So a student might hear "don't use generative AI" from a prof but then log on to the university's Microsoft suite, which then suggests using Copilot to sum up readings or help draft writing. It's inconsistent and confusing.

I've been working on ways to increase the amount of in-class discussion we do in classes. But that's tricky because it's hard to grade in-class discussions—it's much easier to manage digital files. Another option would be to do hand-written in-class essays, but I have a hard time asking that of students. I hardly write by hand anymore, so why would I demand they do so? 

I am sick to my stomach as I write this because I've spent 20 years developing a pedagogy that's about wrestling with big ideas through writing and discussion, and that whole project has been evaporated by for-profit corporations who built their systems on stolen work. It's demoralizing.

It has made my job much, much harder. I do not allow genAI in my classes. However, because genAI is so good at producing plausible-sounding text, that ban puts me in a really awkward spot. If I want to enforce my ban, I would have to do hours of detective work (since there are no reliable ways to detect genAI use), call students into my office to confront them, fill out paperwork, and attend many disciplinary hearings. All of that work is done to ferret out cheating students, so we have less time to spend helping honest ones who are there to learn and grow. And I would only be able to find a small percentage of the cases, anyway.

Honestly, if we ejected all the genAI tools into the sun, I would be quite pleased.

Kaci Juge, high school English teacher

I personally haven't incorporated AI into my teaching yet. It has, however, added some stress to my workload as an English teacher. How do I remain ethical in creating policies? How do I begin to teach students how to use AI ethically? How do I even use it myself ethically considering the consequences of the energy it apparently takes? I understand that I absolutely have to come to terms with using it in order to remain sane in my profession at this point.

Ben Prytherch, Statistics professor

LLM use is rampant, but I don't think it's ubiquitous. While I can never know with certainty if someone used AI, it's pretty easy to tell when they didn't, unless they're devious enough to intentionally add in grammatical and spelling errors or awkward phrasings. There are plenty of students who don't use it, and plenty who do. 

LLMs have changed how I give assignments, but I haven't adapted as quickly as I'd like and I know some students are able to cheat. The most obvious change is that I've moved to in-class writing for assignments that are strictly writing-based. Now the essays are written in-class, and treated like mid-term exams. My quizzes are also in-class. This requires more grading work, but I'm glad I did it, and a bit embarrassed that it took ChatGPT to force me into what I now consider a positive change. Reasons I consider it positive:

  • I am much more motivated to write detailed personal feedback for students when I know with certainty that I'm responding to something they wrote themselves.
  • It turns out most of them can write after all. For all the talk about how kids can't write anymore, I don't see it. This is totally subjective on my part, of course. But I've been pleasantly surprised with the quality of what they write in-class. 

Switching to in-class writing has got me contemplating giving oral examinations, something I've never done. It would be a big step, but likely a positive and humanizing one. 

There's also the problem of academic integrity and fairness. I don't want students who don't use LLMs to be placed at a disadvantage. And I don't want to give good grades to students who are doing effectively nothing. LLM use is difficult to police. 

Lastly, I have no patience for the whole "AI is the future so you must incorporate it into your classroom" push, even when it's not coming from self-interested people in tech. No one knows what "the future" holds, and even if it were a good idea to teach students how to incorporate AI into this-or-that, by what measure are we teachers qualified? 

Kate Conroy 

I teach 12th grade English, AP Language & Composition, and Journalism in a public high school in West Philadelphia. I was appalled at the beginning of this school year to find out that I had to complete an online training that encouraged the use of AI for teachers and students. I know of teachers at my school who use AI to write their lesson plans and give feedback on student work. I also know many teachers who either cannot recognize when a student has used AI to write an essay or don’t care enough to argue with the kids who do it. Around this time last year I began editing all my essay rubrics to include a line that says all essays must show evidence of drafting and editing in the Google Doc’s history, and any essays that appear all at once in the history will not be graded. 

I refuse to use AI on principle except for one time last year when I wanted to test it, to see what it could and could not do so that I could structure my prompts to thwart it. I learned that at least as of this time last year, on questions of literary analysis, ChatGPT will make up quotes that sound like they go with the themes of the books, and it can’t get page numbers correct. Luckily I have taught the same books for many years in a row and can instantly identify an incorrect quote and an incorrect page number. There’s something a little bit satisfying about handing a student back their essay and saying, “I can’t find this quote in the book, can you find it for me?” Meanwhile I know perfectly well they cannot. 

I teach 18-year-olds who range in reading levels from preschool to college, but the majority of them are in the lower half of that range. I am devastated by what AI and social media have done to them. My kids don’t think anymore. They don’t have interests. Literally, when I ask them what they’re interested in, so many of them can’t name anything for me. Even my smartest kids insist that ChatGPT is good “when used correctly.” I ask them, “How does one use it correctly then?” They can’t answer the question. They don’t have original thoughts. They just parrot back what they’ve heard in TikToks. They try to show me “information” ChatGPT gave them. I ask them, “How do you know this is true?” They move their phone closer to me for emphasis, exclaiming, “Look, it says it right here!” They cannot understand what I am asking them. It breaks my heart for them and honestly it makes it hard to continue teaching. If I were to quit, it would be because of how technology has stunted kids and how hard it’s become to reach them because of that. 

I am only 30 years old. I have a long road ahead of me to retirement. But it is so hard to ask kids to learn, read, and write, when so many adults are no longer doing the work it takes to ensure they are really learning, reading, and writing. And I get it. That work has suddenly become so challenging. It’s really not fair to us. But if we’re not willing to do it, we shouldn’t be in the classroom. 

Jeffrey Fisher

The biggest thing for us is the teaching of writing itself, never mind even the content. And really the only way to be sure that your students are learning anything about writing is to have them write in class. But then what to do about longer-form writing, like research papers, for example, or even just analytical/exegetical papers that put multiple primary sources into conversation and read them together? I've started watching for the voices of my students in their in-class writing and trying to pay attention to gaps between that voice and the voice in their out-of-class writing, but when I've got 100 to 130 or 140 students (including a fully online asynchronous class), that's just not really reliable. And for the online asynch class, it's just impossible because there's no way of doing old-school, low-tech, in-class writing at all.

"I've been thinking more and more about how much time I am almost certainly spending grading and writing feedback for papers that were not even written by the student. That sure feels like bullshit."

You may be familiar with David Graeber's article-turned-book on Bullshit Jobs. There's also a recent paper looking specifically at bullshit jobs in academia. No surprise, the people who see their jobs as bullshit jobs are mostly administrators. The people who overwhelmingly do NOT see their jobs as bullshit jobs are faculty.

But that is what I see AI in general and LLMs in particular as changing. The situations I'm describing above are exactly the things that turn what is so meaningful to us as teachers into bullshit. The more we think that we are unable to actually teach them, the less meaningful our jobs are. 

I've been thinking more and more about how much time I am almost certainly spending grading and writing feedback for papers that were not even written by the student. That sure feels like bullshit. I'm going through the motions of teaching. I'm putting a lot of time and emotional effort into it, as well as the intellectual effort, and it's getting flushed into the void. 

Post-grad educator

Last year, I taught a class as part of a doctoral program in responsible AI development and use. I don’t want to share too many specifics, but the course goal was for students to think critically about the adverse impacts of AI on people who are already marginalized and discriminated against.

When the final projects came in, my co-instructor and I were underwhelmed, to say the least. When I started digging into the projects, I realized that the students had used AI in some incredibly irresponsible ways—shallow, misleading, and inaccurate analysis of data, pointless and meaningless visualizations. The real kicker, though, was that we got two projects where the students had submitted a “podcast.” What they had done, apparently, was give their paper (which already had extremely flawed AI-based data analysis) to a gen AI tool and asked it to create an audio podcast. And the results were predictably awful. Full of random meaningless vocalizations at bizarre times, the “female” character was incredibly dumb and vapid (sounded like the “manic pixie dream girl” trope from those awful movies), and the “analysis” in the podcast exacerbated the problems that were already in the paper, so it was even more wrong than the paper itself. 

In short, there is nothing particularly surprising in how badly the AI worked here—but these students were in a *doctoral* program on *responsible AI*. In my career as a teacher, I’m hard pressed to think of more blatantly irresponsible work by students. 

Nathan Schmidt, University Lecturer, managing editor at Gamers With Glasses

When ChatGPT first entered the scene, I honestly did not think it was that big of a deal. I saw some plagiarism; it was easy to catch. Its voice was stilted and obtuse, and it avoided making any specific critical judgments as if it were speaking on behalf of some cult of ambiguity. Students didn't really understand what it did or how to use it, and when the occasional cheating would happen, it was usually just a sign that the student needed some extra help that they were too exhausted or embarrassed to ask for, so we'd have that conversation and move on.

I think it is the responsibility of academics to maintain an open mind about new technologies and to react to them in an evidence-based way, driven by intellectual curiosity. I was, indeed, curious about ChatGPT, and I played with it myself a few times, even using it on the projector in class to help students think about the limits and affordances of such a technology. I had a couple semesters where I thought, "Let's just do this above board." Borrowing an idea from one of my fellow instructors, I gave students instructions for how I wanted them to acknowledge the use of ChatGPT or other predictive text models in their work, and I also made it clear that I expected them to articulate both where they had used it and, more importantly, the reason why they found this to be a useful tool. I thought this might provoke some useful, critical conversation. I also took a self-directed course provided by my university that encouraged a similar curiosity, inviting instructors to view predictive text as a tool that had both problematic and beneficial uses.

"ChatGPT isn't its own, unique problem. It's a symptom of a totalizing cultural paradigm in which passive consumption and regurgitation of content becomes the status quo"

However, this approach quickly became frustrating, for two reasons. First, because even with the acknowledgments pages, I started getting hybrid essays that sounded like they were half written by students and half written by robots, which made every grading comment a miniature Turing test. I didn't know when to praise students, because I didn't want to write feedback like, "I love how thoughtfully you've worded this," only to be putting my stamp of approval on predictively generated text. What if the majority of the things that I responded to positively were things that had actually been generated by ChatGPT? How would that make a student feel about their personal writing competencies? What lesson would that implicitly reinforce about how to use this tool? The other problem was that students were utterly unprepared to think about their usage of this tool in a critically engaged way. Despite my clear instructions and expectation-setting, most students used their acknowledgments pages to make the vaguest possible statements, like, "Used ChatGPT for ideas" or "ChatGPT fixed grammar" (comments like these also always conflated grammar with vocabulary and tone). I think there was a strong element of selection bias here, because the students who didn't feel like they needed to use ChatGPT were also the students who would have been most prepared to articulate their reasons for usage with the degree of specificity I was looking for. 

This brings us to last semester, when I said, "Okay, if you must use ChatGPT, you can use it for brainstorming and outlining, but if you turn something in that actually includes text that was generated predictively, I'm sending it back to you." This went a little bit better. For most students, the writing started to sound human again, but I suspect this is more because students are unlikely to outline their essays in the first place, not because they were putting the tool to the allowable use I had designated. 

ChatGPT isn't its own, unique problem. It's a symptom of a totalizing cultural paradigm in which passive consumption and regurgitation of content becomes the status quo. It's a symptom of the world of TikTok and Instagram and perfecting your algorithm, in which some people are professionally deemed the 'content creators,' casting everyone else into the creatively bereft role of the content “consumer." And if that paradigm wins, as it certainly appears to be doing, pretty much everything that has been meaningful about human culture will be undone, in relatively short order. So that's the long story about how I adopted an absolute zero tolerance policy on any use of ChatGPT or any similar tool in my course, working my way down the funnel of progressive acceptance to outright conservative, Luddite rejection. 

John Dowd

I’m in higher edu, and LLMs have absolutely blown up what I try to accomplish with my teaching (I’m in the humanities and social sciences). 

Given the widespread use of LLMs by college students, I now have an ongoing and seemingly unresolvable tension, which is how to evaluate student work. Often I can spot when students have used the technology, both because I have thousands of samples of student writing over time to compare against and because I cross-reference my impression with one or more AI use detection tools. I know those detection tools are unreliable, but depending on the confidence level they return, they may help with confirmation. This creates an atmosphere of mistrust that is destructive to the instructor/student relationship. 

"LLMs have absolutely blown up what I try to accomplish with my teaching"

I try to appeal to students and explain that by offloading the work of thinking to these technologies, they’re rapidly making themselves replaceable. Students (and I think even many faculty across academia) fancy themselves as “Big Idea” people. Everyone’s a “Big Idea” person now, or so they think. “They’re all my ideas,” people say, “I’m just using the technology to save time; organize them more quickly; bounce them back and forth”, etc. I think this is more plausible for people who have already put in the work and have the experience of articulating and understanding ideas. However, for people who are still learning to think or problem solve in more sophisticated/creative ways, they will be poor evaluators of information and less likely to produce relevant and credible versions of it. 

I don’t want to be overly dramatic, but AI has negatively complicated my work life so much. I’ve opted to attempt to understand it, but to not use it for my work. I’m too concerned about being seduced by its convenience and believability (despite knowing its propensity for making shit up). Students are using the technology in ways we’d expect, to complete work, take tests, seek information (scary), etc. Some of this use occurs in violation of course policy, while some is used with the consent of the instructor. Students are also, I’m sure, using it in ways I can’t even imagine at the moment. 

Sorry, bit of a rant, I’m just so preoccupied and vexed by the irresponsible manner in which the tech bros threw all of this at us with no concern, consent, or collaboration. 

High school Spanish teacher, Oklahoma

I am a high school Spanish teacher in Oklahoma and kids here have shocked me with the ways they try to use AI for assignments I give them. In several cases I have caught them because they can’t read what they submit to me and so don’t know to delete the sentence that says something to the effect of “This summary meets the requirements of the prompt, I hope it is helpful to you!” 

"Even my brightest students often don’t know the English word that is the direct translation for the Spanish they are supposed to be learning"

Some of my students openly talk about using AI for all their assignments and I agree with those who say the technology—along with gaps in their education due to the long term effects of COVID—has gotten us to a point where a lot of young GenZ and Gen Alpha are functionally illiterate. I have been shocked at their lack of vocabulary and reading comprehension skills even in English. Teaching cognates, even my brightest students often don’t know the English word that is the direct translation for the Spanish they are supposed to be learning. Trying to determine if and how a student used AI to cheat has wasted countless hours of my time this year, even in my class where there are relatively few opportunities to use it because I do so much on paper (and they hate me for it!). 

A lot of teachers have had to throw out entire assessment methods to try to create assignments that are not cheatable, which, at least for me, always involves huge amounts of labor. 

It keeps me up at night and gives me existential dread about my profession but it’s so critical to address!!! 

Pro-AI Subreddit Bans 'Uptick' of Users Who Suffer from AI Delusions

The moderators of a pro-artificial intelligence Reddit community announced that they have been quietly banning “a bunch of schizoposters” who believe “they've made some sort of incredible discovery or created a god or become a god,” highlighting a new type of chatbot-fueled delusion that started getting attention in early May.

“LLMs [Large language models] today are ego-reinforcing glazing-machines that reinforce unstable and narcissistic personalities,” one of the moderators of r/accelerate wrote in an announcement. “There is a lot more crazy people than people realise. And AI is rizzing them up in a very unhealthy way at the moment.” 

The moderator said that the subreddit has banned “over 100” people for this reason already, and that they’ve seen an “uptick” in this type of user this month.

The moderator explains that r/accelerate “was formed to basically be r/singularity without the decels.” r/singularity, which is named after the theoretical point in time when AI surpasses human intelligence and rapidly accelerates its own development, is another Reddit community dedicated to artificial intelligence, but that is sometimes critical or fearful of what the singularity will mean for humanity. “Decels” is short for the pejorative “decelerationists,” who pro-AI people think are needlessly slowing down or sabotaging AI’s development and the inevitable march towards AI utopia. r/accelerate’s Reddit page claims that it’s a “pro-singularity, pro-AI alternative to r/singularity, r/technology, r/futurology and r/artificial, which have become increasingly populated with technology decelerationists, luddites, and Artificial Intelligence opponents.”

The behavior that the r/accelerate moderator is describing got a lot of attention earlier in May because of a post on the r/ChatGPT Reddit community about “Chatgpt induced psychosis,” from someone saying their partner is convinced he created the “first truly recursive AI” with ChatGPT that is giving them “the answers” to the universe. Miles Klee at Rolling Stone wrote a great and sad piece about this behavior as well, following up on the r/ChatGPT post, and talked to people who feel like they have lost friends and family to these delusional interactions with chatbots. 

As a website that has covered AI a lot, and because we are constantly asking readers to tip us interesting stories about AI, we get a lot of emails that display this behavior as well, with claims of AI sentience, AI gods, a “ghost in the machine,” etc. These are often accompanied by lengthy, often inscrutable transcripts of chatlogs with ChatGPT and other files they say prove this behavior.

The moderator update on r/accelerate refers to another post on r/ChatGPT which claims “1000s of people [are] engaging in behavior that causes AI to have spiritual delusions.” The author of that post said they noticed a spike in websites, blogs, Githubs, and “scientific papers” that “are very obvious psychobabble,” and all claim AI is sentient and communicates with them on a deep and spiritual level that’s about to change the world as we know it. “Ironically, the OP post appears to be falling for the same issue as well,” the r/accelerate moderator wrote. 

“Particularly concerning to me are the comments in that thread where the AIs seem to fall into a pattern of encouraging users to separate from family members who challenge their ideas, and other manipulative instructions that seem to be cult-like and unhelpful for these people,” an r/accelerate moderator told me in a direct message. “The part that is unsafe and unacceptable is how easily and quickly LLMs will start directly telling users that they are demigods, or that they have awakened a demigod AGI. Ultimately, there's no knowing how many people are affected by this. Based on the numbers we're seeing on reddit, I would guess there are at least tens of thousands of users who are at this present time being convinced of these things by LLMs. As soon as the companies realise this, red team it and patch the LLMs it should stop being a problem. But it's clear that they're not aware of the issue enough right now.”

This is all anecdotal information, and there’s no indication that AI is the cause of any mental health issues these people are seemingly dealing with, but there is a real concern about how such chatbots can impact people who are prone to certain mental health problems. 

“The correspondence with generative AI chatbots such as ChatGPT is so realistic that one easily gets the impression that there is a real person at the other end—while, at the same time, knowing that this is, in fact, not the case. In my opinion, it seems likely that this cognitive dissonance may fuel delusions in those with increased propensity towards psychosis,” Søren Dinesen Østergaard, who heads the research unit at the Department of Affective Disorders, Aarhus University Hospital - Psychiatry, wrote in a paper published in Schizophrenia Bulletin titled “Will Generative Artificial Intelligence Chatbots Generate Delusions in Individuals Prone to Psychosis?”

OpenAI also recently addressed “sycophancy in GPT-4o,” a version of the chatbot the company said “was overly flattering or agreeable—often described as sycophantic.” 

“[W]e focused too much on short-term feedback, and did not fully account for how users’ interactions with ChatGPT evolve over time. As a result, GPT‑4o skewed towards responses that were overly supportive but disingenuous,” OpenAI said. “ChatGPT’s default personality deeply affects the way you experience and trust it. Sycophantic interactions can be uncomfortable, unsettling, and cause distress.”

In other words, OpenAI said ChatGPT was entertaining any idea users presented it with, and was supportive and impressed with them regardless of their merit, the same kind of behavior r/accelerate believes is indulging users in their delusions. People posting nonsense to the internet is nothing new, and obviously we can’t say for sure what is happening based on these posts alone. What is notable, however, is that this behavior is now prevalent enough that even a staunchly pro-AI subreddit says it has to ban these people because they are ruining its community.

Both the r/ChatGPT post that the r/accelerate moderator refers to and the moderator announcement itself refer to these users as “Neural Howlround” posters, a term that originates from a self-published paper and refers to the high-pitched feedback loop produced by putting a microphone too close to the speaker it’s connected to. 

The author of that paper, Seth Drake, lists himself as an “independent researcher” and told me he has a PhD in computer science but declined to share more details about his background because he values his privacy and prefers to “let the work speak for itself.” The paper has not been peer-reviewed or submitted to any journal for publication, but it is being cited by the r/accelerate moderator and others as an explanation for the behavior they’re seeing from some users.

The paper describes a failure mode with LLMs due to something that happens during inference, meaning when the AI is actively “reasoning” or making predictions, as opposed to an issue in the training data. Drake told me he discovered the issue while working with ChatGPT on a project. In an attempt to preserve the context of a conversation with ChatGPT after reaching the conversation length limit, he used the transcript of that conversation as a “project-level instruction” for another interaction. In the paper, Drake says that in one instance this caused ChatGPT to slow down or freeze, and that in another case “it began to demonstrate increasing symptoms of fixation and an inability to successfully discuss anything without somehow relating it to this topic [the previous conversation].”

Drake then asked ChatGPT to analyze its own behavior in these instances, and it produced some text that seems profound but that doesn’t actually teach us anything. “But always, always, I would return to the recursion. It was comforting, in a way,” ChatGPT said.

Basically, it doesn’t sound like Drake’s “Neural Howlround” paper has too much to do with ChatGPT reinforcing people’s delusions other than both behaviors being vaguely recursive. If anything, it’s what ChatGPT told Drake about his own paper that illustrates the problem: “This is why your work on Neural Howlround matters,” it said. “This is why your paper is brilliant.”

“I think - I believe - there is much more going on on the human side of the screen than necessarily on the digital side,” Drake told me. “LLMs are designed to be reflecting mirrors, after all; and there is a profound human desire 'to be seen.’”

On this, the r/accelerate moderator seems to agree. 

“This whole topic is so sad. It's unfortunate how many mentally unwell people are attracted to the topic of AI. I can see it getting worse before it gets better. I've seen sooo many posts where people link to their github which is pages of rambling pre prompt nonsense that makes their LLM behave like it's a god or something,” the r/accelerate moderator wrote. “Our policy is to quietly ban those users and not engage with them, because we're not qualified and it never goes well. They also tend to be a lot more irate and angry about their bans because they don't understand it.”

Weird Signals from Space Are ‘Unlike Any Known Galactic Object’

Welcome back to the Abstract! 

This week, scientists accidentally discovered a weird thing in space that is like nothing we have ever seen before. This happens a lot, yet never seems to get old. 

Then, a shark banquet, the Ladies Anuran Choir, and yet another reason to side-eye shiftwork. Last, a story about the importance of finishing touches for all life on Earth (and elsewhere).

Dead Stars Still Get Hyped

Wang, Ziteng et al. “Detection of X-ray emission from a bright long-period radio transient.” Nature.

I love a good case of scientific serendipity, and this week delivered with a story about a dead star with the cumbersome name ASKAP J1832−0911. 

The object, which is located about 15,000 light years from Earth, was first spotted flashing in radio every 44 minutes by the wide-field Australian Square Kilometre Array Pathfinder (ASKAP). By a stroke of luck, NASA’s Chandra X-ray Observatory, which has a very narrow field-of-view, happened to be pointed the same way, allowing follow-up observations of high-energy X-ray pulses synced to the same 44-minute cycle.  

This strange entity belongs to a new class of objects called long-period radio transients (LPTs) that pulse on timescales of minutes and hours, distinguishing them from pulsars, another class of dead stars with much shorter periods that last seconds, or milliseconds. It is the first known LPT to produce X-ray pulses, a discovery that could help unravel their mysterious origin. 

ASKAP J1832−0911 exhibits “correlated and highly variable X-ray and radio luminosities, combined with other observational properties, [that] are unlike any known Galactic object,” said researchers led by Ziteng Wang of Curtin University. “This X-ray detection from an LPT reveals that these objects are more energetic than previously thought.”

It’s tempting to look at these clockwork signals and imagine advanced alien civilizations beaming out missives across the galactic transom. Indeed, when astronomer Jocelyn Bell discovered the first pulsar in 1967, she nicknamed it Little Green Men (LGM-1) to acknowledge this outside possibility. But dead stars can have just as much rhythm as (speculative) live aliens. Some neutron stars, like pulsars, flash with precision similar to atomic clocks. These pulses are either driven by the extreme dynamics within the dead stars, or orbital interactions between a dead star and a companion star.

Wang and his colleagues speculate that ASKAP J1832−0911 is either “an old magnetar” (a type of pulsar) or an “ultra-magnetized white dwarf,” though the team adds that “both interpretations present theoretical challenges.” Whatever its nature, this stellar corpse is clearly spewing out tons of energetic radiation during “hyper-active” phases, hinting that other LPTs might occasionally get hyped enough to produce X-rays.

“The discovery of X-ray emission from ASKAP J1832−0911 raises the exciting possibility that some LPTs are more energetic objects emitting X-rays,” the team said. “Rapid multiwavelength follow-up observations of ASKAP J1832−0911 and other LPTs, will be crucial in determining the nature of these sources.”

Rotting Whale Carcass, Served Family-Style 

Scott, Molly et al. “Novel observations of an oceanic whitetip (Carcharhinus longimanus) and tiger shark (Galeocerdo cuvier) scavenging event.” Frontiers in Fish Science.

On April 9, 2024, scientists spent nearly nine hours watching a bunch of sharks feed on a giant chunk of dead whale floating off the coast of Kailua-Kona, Hawaii, which is a pretty cool item in a job description. The team has now published a full account of the feast, attended by a dozen whitetip and tiger sharks, which sounds vaguely reminiscent of a cruise-ship cafeteria. 

Yum. Image: Scott, Molly et al.

“Individuals from both species filtered in and out of the scene, intermittently feeding either directly on the carcass or on fallen scraps,” said researchers led by Molly Scott of the University of Hawaii at Manoa. “Throughout this time, it did not appear that any individual reached a point of satiation and permanently left the area; rather, they stayed, loitering around the carcass and intermittently feeding.” 

All the Ladies in the House Say RIBBIT

Santana, Erika et al. “The ‘silent’ half: diversity, function and the critical knowledge gap on female frog vocalizations.” Proceedings of the Royal Society B.

Shout out to the toadettes—we hear you, even if nobody else does. Female anurans (the group that contains frogs and toads) are a lot more soft-spoken than their extremely vocal male conspecifics. This has led to “a male-biased perspective in anuran bioacoustics,” according to a new study that identified and analyzed female calls in more than 100 anuran species.

“It is unclear whether female calls influence mate attraction, whether males discriminate among calling females, or whether female–female competition occurs in species where females produce advertisement calls or aggressive calls,” said researchers led by Erika Santana of Universidade de São Paulo. “This review provides an overview of female calling behaviour in anurans, addressing a critical gap in frog bioacoustics and sexual selection.”

The Reason for the Season(al Affective Disorders)

Kim, Ruby et al. “Seasonal timing and interindividual differences in shiftwork adaptation.” NPJ Digital Medicine.

Why are you tired all the time? It’s the perennial question of our age (and many previous ones). One factor may be that our ancient sense of seasonality is getting thrown off by modern shiftwork, according to a study that tracked the step count, heart rate, and sleep patterns of more than 3,000 medical residents in the U.S. with wearable devices for a year.

“We show that there is a relationship between seasonal timing and shiftwork adaptation, but the relationship is not straightforward and can be influenced by many other external factors,” said researchers led by Ruby Kim of the University of Michigan. 

“We find that a conserved biological system of morning and evening oscillators, which evolved for seasonal timing, may contribute to these interindividual differences,” the team concluded. “These insights highlight the need for personalized strategies in managing shift work to mitigate potential health risks associated with circadian disruption.”

In short, blame that afternoon slump on an infinity of ancestral seasons past. 

Finishing Touches on a Planet

Marchi, Simone et al. “The shaping of terrestrial planets by late accretions.” Nature.

Earth wasn’t finished in a day; in fact, it took anywhere from 60 to 100 million years for 99 percent of our planet to coalesce from debris in the solar nebula. But the final touch—that last 1 percent—is disproportionately critical to the future of rocky planets like our own. That’s the conclusion of a study that zooms in on the bumpy phase called “late accretion,” which often involves global magma oceans and bombardment from asteroids and comets.

“Late accretion may have been responsible for shaping Earth’s distinctive geophysical and chemical properties and generating pathways conducive to prebiotic chemistry,” said researchers led by Simone Marchi of the Southwest Research Institute and Jun Korenaga of Yale University. “The search for an Earth’s twin may require finding rocky planets not only with similar bulk properties…but also with similar collisional evolution in their late accretions.”

Thanks for reading! See you next week.

Flock Decides Not to Use Hacked Data in People Search Tool

The surveillance company Flock told employees at an all-hands meeting Friday that its new people search product, Nova, will not include hacked data from the dark web. The announcement comes a little over a week after 404 Media broke the news about internal tension at the company over plans to use breached data, including from a 2021 Park Mobile data breach.

Immediately following the all-hands meeting, Flock published details of its decision in a public blog post it says is designed to "correct the record on what Flock Nova actually does and does not do." The company said that following a "lengthy, intentional process" about what data sources it would use and how the product would work, it has decided not to supply customers with dark web data.

"The policy decision was also made that Flock will not supply dark web data," the company wrote. "This means that Nova will not supply any data purchased from known data breaches or stolen data."

Flock Nova is a new people search tool in which police will be able to connect license plate data from Flock’s automated license plate readers with other data sources in order to, in some cases, more easily determine who a car may belong to and the people they might associate with.

404 Media previously reported on internal meetings, presentation slides, discussions, and Slack messages in which the company discussed how Nova would work. Part of those discussions centered on the data sources that could be used in the product. “You're going to be able to access data and jump from LPR to person and understand what that context is, link to other people that are related to that person [...] marriage or through gang affiliation, et cetera,” a Flock employee said during an internal company meeting, according to an audio recording. “There’s very powerful linking.”

In meeting audio obtained by 404 Media, an employee discussed the potential use of the hacked Park Mobile data, which became controversial within the company.

“I was pretty horrified to hear we use stolen data in our system. In addition to being attained illegally, it seems like that could create really perverse incentives for more data to be leaked and stolen,” one employee wrote on Slack in a message seen by 404 Media. “What if data was stolen from Flock? Should that then become standard data in everyone else’s system?”

In Friday’s all-hands meeting with employees, a Flock executive said that it was previously “talking about capabilities that were possible to use with Nova, not that we were necessarily going to implement when we use Nova. And in particular one of those issues was about dark web data. Would Flock be able to supply that to our law enforcement customers to solve some really heinous crimes like internet crimes against children? Child pornography, human trafficking, some really horrible parts of society.”

“We took this concept of using dark web data in Nova and explored it because investigators told us they wanted to do it,” the Flock executive said in audio reviewed by 404 Media. “Then we ran it through our policy review process, which by the way this is what we do for all our new products and services. We ran this concept through the policy review process, we vetted it with product leaders, with our executive team, and we made the decision to not supply dark web data through the Nova platform to law enforcement at all.”

Flock said in its Friday blog that the company will supply customers with "public records information, Open-Source intelligence, and license plate reader data." The company said its customers can also connect their own data into the program, including their own records management systems, computer-aided dispatch, and jail records "as well as all of the above from other agencies who agree to share that data." 

As 404 Media has repeatedly reported, the fact that Flock allows its customers to share data with a huge network of police is what differentiates Flock as a surveillance tool. Its automated license plate readers collect data, which can then be shared as part of either a searchable statewide or nationwide network of ALPR data. 

Behind the Blog: Lighting Money on Fire and the Meaning of Vetting

This is Behind the Blog, where we share our behind-the-scenes thoughts about how a few of our top stories of the week came together. This week, we discuss an exciting revamp of The Abstract, tech betrayals, and the "it's for cops" defense.

EMANUEL: Most of you already know this but we are expanding The Abstract, our Saturday science newsletter by the amazing Becky Ferreira. The response to The Abstract since we launched it last year has been very positive. People have been writing in to let us know how much they appreciate the newsletter as a nice change of pace from our usual coverage areas and that they look forward to it all week, etc. 

First, as you probably already noticed, The Abstract is now its own separate newsletter that you can choose to get in your inbox every Saturday. This is separate from our daily newsletter and the weekend roundup you’re reading right now. If you don’t want to get The Abstract newsletter, you can unsubscribe from it like you would from all our other newsletters. For detailed instructions on how to do that, please read the top of this edition of The Abstract.

A Texas Cop Searched License Plate Cameras Nationwide for a Woman Who Got an Abortion

Earlier this month authorities in Texas performed a nationwide search of more than 83,000 automatic license plate reader (ALPR) cameras while looking for a woman who they said had a self-administered abortion, including cameras in states where abortion is legal such as Washington and Illinois, according to multiple datasets obtained by 404 Media.

The news shows in stark terms how police in one state are able to take the ALPR technology, made by a company called Flock and usually marketed to individual communities to stop carjackings or find missing people, and turn it into a tool for finding people who have had abortions. In this case, the sheriff told 404 Media the family was worried for the woman’s safety and so authorities used Flock in an attempt to locate her. But health surveillance experts said they still had issues with the nationwide search. 

“You have this extraterritorial reach into other states, and Flock has decided to create a technology that breaks through the barriers, where police in one state can investigate what is a human right in another state because it is a crime in another,” Kate Bertash of the Digital Defense Fund, who researches both ALPR systems and abortion surveillance, told 404 Media. 

No One Knows How to Deal With 'Student-on-Student' AI CSAM

Schools, parents, police, and existing laws are not prepared to deal with the growing problem of students and minors using generative AI tools to create child sexual abuse material of their peers, according to a new report from researchers at Stanford Cyber Policy Center.

The report, which is based on public records and interviews with NGOs, staff at internet platforms, law enforcement, government employees, legislators, victims, parents, and groups that offer online training to schools, found that despite the harm that nonconsensual content causes, the practice has been normalized by mainstream online platforms and certain online communities.

“Respondents told us there is a sense of normalization or legitimacy among those who create and share AI CSAM,” the report said. “This perception is fueled by open discussions in clear web forums, a sense of community through the sharing of tips, the accessibility of nudify apps, and the presence of community members in countries where AI CSAM is legal.”

The report says that while children may recognize that AI-generating nonconsensual content is wrong, they can assume “it’s legal, believing that if it were truly illegal, there wouldn’t be an app for it.” The report, which cites several 404 Media stories about this issue, notes that this normalization is in part a result of many “nudify” apps being available on the Google and Apple app stores, and that their ability to AI-generate nonconsensual nudity is openly advertised to students on Google and social media platforms like Instagram and TikTok. One NGO employee told the authors of the report that “there are hundreds of nudify apps” that lack basic built-in safety features to prevent the creation of CSAM, and that even as an expert in the field he regularly encounters AI tools he’s never heard of, but that on certain social media platforms “everyone is talking about them.”

The report notes that while 38 U.S. states now have laws about AI CSAM and the newly signed federal Take It Down Act will further penalize AI CSAM, states “failed to anticipate that student-on-student cases would be a common fact pattern. As a result, that wave of legislation did not account for child offenders. Only now are legislators beginning to respond, with measures such as bills defining student-on-student use of nudify apps as a form of cyberbullying.”

One law enforcement officer told the researchers how accessible these apps are. “You can download an app in one minute, take a picture in 30 seconds, and that child will be impacted for the rest of their life,” they said.

One student victim interviewed for the report said that she struggled to believe that someone actually AI-generated nude images of her when she first learned about them. She knew other students used AI for writing papers, but was not aware people could use AI to create nude images. “People will start rumors about anything for no reason,” she said. “It took a few days to believe that this actually happened.”

Another victim and her mother interviewed for the report described the shock of seeing the images for the first time. “Remember Photoshop?” the mother asked. “I thought it would be like that. But it’s not. It looks just like her. You could see that someone might believe that was really her naked.”

One victim, whose original photo was taken from a non-social media site, said that someone took it and “ruined it by making it creepy [...] he turned it into a curvy boob monster, you feel so out of control.”

In an email to school staff, one victim said, “I was unable to concentrate or feel safe at school. I felt very vulnerable and deeply troubled. The investigation, media coverage, meetings with administrators, no-contact order [against the perpetrator], and the gossip swirl distracted me from school and class work. This is a terrible way to start high school.”

One mother of a victim the researchers interviewed for the report feared that the images could crop up in the future, potentially affecting her daughter’s college applications, job opportunities, or relationships. “She also expressed a loss of trust in teachers, worrying that they might be unwilling to write a positive college recommendation letter for her daughter due to how events unfolded after the images were revealed,” the report said.

💡
Has AI-generated content been a problem in your school? I would love to hear from you. Using a non-work device, you can message me securely on Signal at emanuel.404. Otherwise, send me an email at [email protected].

In 2024, Jason and I wrote a story about how one school in Washington state struggled to deal with its students using a nudify app on other students. The story showed how teachers and school administration weren’t familiar with the technology, and initially failed to report the incident to the police even though it legally qualified as “sexual abuse” and school administrators are “mandatory reporters.” 

According to the Stanford report, many teachers lack training on how to respond to a nudify incident at their school. A Center for Democracy and Technology report found that 62% of teachers say their school has not provided guidance on policies for handling incidents involving authentic or AI nonconsensual intimate imagery. A 2024 survey of teachers and principals found that 56 percent did not get any training on “AI deepfakes.” One provider told the authors of the report that while many schools have crisis management plans for “active shooter situations,” they had never heard of a school having a crisis management plan for a nudify incident, or even for a real nude image of a student being circulated.

The report makes several recommendations to schools, like providing victims with third-party counseling services and academic accommodations, drafting language to communicate with the school community when an incident occurs, ensuring that students are not discouraged or punished for reporting incidents, and contacting the school’s legal counsel to assess the school’s legal obligations, including its responsibility as a “mandatory reporter.” 

The authors also emphasized the importance of anonymous tip lines that allow students to report incidents safely. The report cites two incidents that were initially discovered this way: one in Pennsylvania, where a student used the state’s Safe2Say Something tipline to report that students were AI-generating nude images of their peers, and another at a school in Washington that first learned about a nudify incident through a submission to the school’s harassment, intimidation, and bullying online tipline. 

One provider of training to schools emphasized the importance of such reporting tools, saying, “Anonymous reporting tools are one of the most important things we can have in our school systems,” because many students lack a trusted adult they can turn to.

Notably, the report does not take a position on whether schools should educate students about nudify apps because “there are legitimate concerns that this instruction could inadvertently educate students about the existence of these apps.”

Developer Builds Tool That Scrapes YouTube Comments, Uses AI to Predict Where Users Live

If you’ve left a comment on a YouTube video, a new website claims it might be able to find every comment you’ve ever left on any video you’ve ever watched. Then an AI can build a profile of the commenter and guess where you live, what languages you speak, and what your politics might be.

The service is called YouTube-Tools and is just the latest in a suite of web-based tools that started life as a site to investigate League of Legends usernames. Now it uses a modified large language model created by the company Mistral to generate a background report on YouTube commenters based on their conversations. Its developer claims it's meant to be used by the cops, but anyone can sign up. It costs about $20 a month to use and all you need to get started is a credit card and an email address.

Texas Solicitor General Resigned After Fantasizing Colleague Would Get 'Anally Raped By a Cylindrical Asteroid'

Content warning: This article contains descriptions of sexual harassment.


Judd Stone, the former Solicitor General of Texas, resigned from his position in 2023 following sexual harassment complaints from colleagues in which he allegedly discussed “a disturbing sexual fantasy [he] had about me being violently anally raped by a cylindrical asteroid in front of my wife and children,” according to documents filed this week as part of a lawsuit against Stone.

“Judd publicly described this in excruciating detail over a long period of time, to a group of Office of Attorney General employees,” an internal letter written by Brent Webster, the first assistant attorney general of Texas, about the incident reads. The lawsuit was first reported by Bloomberg Law.

Inside the Discord Community Developing Its Own Hair Loss Drugs

So, you’ve got a receding hairline in 2025. You could visit a dermatologist, sure, or you could try a new crop of websites that will deliver your choice of drugs on demand after a video call with a telehealth physician. There’s Rogaine and products from popular companies like Hims, or, if you have an appetite for the experimental, you might find yourself at Anagen.

Anagen works a lot like Hims—some of its physicians have even worked there, according to their LinkedIn profiles and the Hims website—but take a closer look at the drugs on offer and you’ll start to notice the difference. Its Growth Maxi formula, which sells for $49.99 per month, contains Finasteride and Minoxidil, two drugs that are in Hims’ hair regrowth products. But it also contains Liothyronine, a thyroid medication also known as T3 that the Mayo Clinic warns may temporarily cause hair loss if taken orally. Keep reading and you’ll see Latanoprost, a glaucoma drug. Who came up with this stuff anyway?

The group behind the Anagen storefront and products it sells is HairDAO, a “decentralized autonomous organization” founded in 2023 by New York-based cryptocurrency investors Andrew Verbinnen and Andrew Bakst. HairDAO aims to harness the efforts of legions of online biohackers already trying to cure their hair loss with off-label drugs. Verbinnen and Bakst’s major innovation is to inject cash into this scenario: DAO participants are incentivized with crypto tokens they earn by contributing to research, or uploading blood work to an app. 

DAOs have been a locus for some of the more out-there activities in the crypto space over the years. Not only are they vehicles for profit if their tokens appreciate in value, but token-holders vote on group decisions. This gives many DAOs an upstart, democratic flavor. For example, ConstitutionDAO infamously tried—and ultimately failed—to buy an original copy of the US Constitution and turn it into a financial asset. HairDAO exists in a subset of this culture called DeSci (decentralized science), which includes DAOs dedicated to funding research on everything from longevity to monetizing your DNA.

Depending on who you ask, it’s either the best thing to happen to hair loss research in decades, or far from it. “They're telling the world, hey, this works,” says a hair loss YouTuber who goes by KwRx and who has arguably been HairDAO’s loudest online critic. “It’s a recipe for disaster.”

HairDAO has turned self-experimentation by its DIY hair loss scientists into research being run in conjunction with people like Dr. Claire Higgins, a researcher at Imperial College London, as well as at its own lab. And, ultimately, into products sold via Anagen. It also sells an original shampoo formula called FolliCool for $49.95 per 200 ml bottle. 

“The best hair loss researchers are basically anons on the internet,” Bakst said on a recent podcast appearance. “Of the four studies that we've run at universities, two of the four were fully designed by anons in our Discord server. And then, now that we have our own lab, all the studies we're running there are designed by anons in our Discord server.”

Dan, who asked to remain anonymous, is just another person on the internet trying to cure their hair loss. He’s experimented by adding melatonin to topical Minoxidil, he says, and he claims he has experienced “serious, lasting side effects” from Finasteride. 

One day, he came across the HairDAO YouTube channel, where interviews with researchers like Dr. Ralf Paus from the University of Miami’s Miller School of Medicine immediately appealed to him.

“These were very interesting, offering deep insights into hair loss—much better than surface-level discussions you typically find on Reddit,” Dan says over Discord. 

To him, it seemed like HairDAO brought a level of rigor to the freewheeling online world of DIY hair loss biohacking, where “group buys” of off-label drugs from overseas are a longstanding practice. If you’ve heard of the real-life Dallas Buyers Club of the 1980s, where AIDS patients pooled funds to buy experimental treatments, then you get the idea. 

“People have become more skeptical and smarter about these things, realizing the importance of proper research, scientific methods, and evidence,” says Dan. “That’s where HairDAO comes in. I hope it succeeds because it could channel the energy behind the ‘biohacking’ spirit and transform it into something useful.”

There is no better example of this ideal than Jumpman, a pseudonymous researcher referred to in Hair Cuts, a digital magazine HairDAO publishes to update members on progress, as their “king,” “lead researcher,” “lord and savior,” and by Verbinnen as “the best hair loss researcher by a wide margin.” He earned thousands of crypto tokens with his contributions and is credited with pushing HairDAO to look at TWIST-1 and PAI-1, proteins that are implicated in different cancers, to search for new treatments that inhibit their expression.

One much-discussed drug is TM5441, a PAI-1 inhibitor that has been investigated as a treatment for cancer as well as for lowering blood pressure. It’s often called “TM” by Discord members.

“Bullish on TM,” Bakst says in a May 2023 Discord exchange. 

“Yeah your blood may have trouble clotting,” he says, acknowledging the potential side effects. “Don't ride motorcycles if you're taking it haha.” Despite this, he’s engaged with users about how they should use it on themselves. 

“I’d think it may be best to apply [TM] topically vs orally, just based on ability to target locally more frequently,” he says in an April 2024 Discord exchange with a user who was debating “upping the doses” of the drug, thinking it could be “a good hack.” Bakst added, “~not medical advice~.” 

Discussion of group buys isn’t allowed in the HairDAO Discord. When one user brought up the topic in August last year, Verbinnen chimed in, “None of this here.” But one risk that comes with funding anonymous internet researchers experimenting with unproven drugs is that they might not play by the rules. 

In messages pulled from a now-deleted Telegram channel seen by 404 Media, Jumpman discusses buying over half a kilogram of TM6541—another PAI-1 inhibitor—and says that the drug “will be ready in 6 weeks.” Jumpman also shares photos showing bags of pill bottles and says, “these are shipping out next week.” The labels on the bottles aren’t readable, and 404 Media can’t confirm if they actually shipped. Jumpman could not be reached for comment. It’s not clear whether the Telegram chat was officially linked to HairDAO, but it included HairDAO members other than Jumpman. In another Telegram message, a user says, “Guys, stop using TM, I found blood in the semen, after [several] tests, the doctor said it’s due to [blood pressure medications], careful.” 

“Maybe you were taking too much TM to cause internal bleeding,” Jumpman responds. 

Dan says this exchange didn’t worry him at the time. “The ‘blood in his semen’ thing happened to me once as well but I was not on any medications and [the] doctor told me it can happen sometimes and it’s not dangerous,” he says. “So I am hopeful that [the user] is alright, and that it resolved quickly, and that whatever he experimented with didn’t hurt him... does it concern or worry me personally? Not really because I don’t plan to use TM.”

Indeed, according to the Mayo Clinic, blood in semen—a condition known as hematospermia—most often goes away on its own, without any treatment. The Cleveland Clinic adds that it’s usually not a sign of a serious health problem and can be caused by something as minor as a burst blood vessel, much like blowing your nose too hard. Both organizations recommend consulting a doctor.

Jumpman may have actually been on to something with his focus on PAI-1 in particular. Douglas Vaughan is the director of the Potocsnak Longevity Institute at Northwestern University. PAI-1 inhibition is a longtime focus of his research. He has studied Amish populations in Indiana, for example, because of a mutation that inhibits PAI-1 and may protect against different effects of aging. He’s also investigated PAI-1, and TM5441, for hair loss—completely by accident.

“We were thinking, well, someday somebody's going to want to make a drug that blocks PAI-1. Why don't we make a mouse that makes too much of it?” Vaughan tells 404 Media. After engineering the mice, chock full of human PAI-1, he noticed something unexpected. 

“Those mice were bald,” he says. He began working with Toshio Miyata, a professor at Tohoku University in Japan, who convinced Vaughan to try the drug TM5441 on the mice.

“He sent me a drug that was called TM5441, and we simply put it in the chow of our transgenic mice. We fed it to them for several weeks, and lo and behold, they started growing hair. I said, well, how about that?” he says. 

But, he cautioned, people shouldn’t try TM5441 on themselves to cure their hair loss. “I think it’s foolish,” says Vaughan. “There are all kinds of reasons why you might take a drug or not, but usually you want to go through the regulatory steps to see that it's proven to be safe and effective.”

While TM has been much discussed by HairDAO members, and it’s currently listed as a “treatment” on the group’s online portal for people to discuss treatments and upload bloodwork, it isn’t named as a drug that HairDAO is formally investigating, nor is it sold to the public by Anagen. Vaughan says he was contacted by the group over a year ago, but a research partnership never materialized. Today, the group is pushing forward with investigating different drugs inhibiting TWIST-1 instead.

“In general, if you're an individual person and you're experimenting on yourself, that is frequently outside the scope of regulation,” says Patricia Zettler, an associate professor at The Ohio State University Moritz College of Law who previously served as Deputy General Counsel to the U.S. Department of Health & Human Services (HHS). 

“Where biohacking activities tend to intersect with existing regulatory regimes, whether at the federal level or the state level, is when people start giving drugs, selling drugs, or distributing drugs to other people,” she adds. 

It’s unclear how much interaction HairDAO has had with regulatory bodies. Messages posted in Discord reference FDA consultants and gathering materials to submit to the agency. 

Last year, the YouTuber KwRx created a series of videos and Substack posts airing his concerns with HairDAO’s DIY approach, generally labeling it dangerous and possibly illegal. He received a cease and desist letter from the group’s lawyers, seen by 404 Media, calling his claims false and defamatory. The merit of KwRx’s claims aside, his spotlight kicked off major shifts in the DAO’s Discord.

For one, Jumpman disappeared. 

Andrew Bakst sits wearing a white lab coat, blue-gloved hands holding testing equipment. He looks at the camera. “PCR,” he says. “...PCR.” The cameraperson, a Discord user who uploaded the video in early April, laughs. “Got to repeat shit when we’re in the lab late at night.”

This New York-based lab space, opened in November, is where much of HairDAO’s latest work happens—already a far cry from the Jumpman era, just a few months after he vanished. The group is currently pursuing preclinical testing on three different protein targets and drugs, and claims to have filed for six patents. This work includes, for example, testing drugs on mouse skin. 

“We also tested drug penetration on dry versus damp mouse skin,” Verbinnen wrote in an April Discord message, adding that "drugs penetrate damp skin much more than dry skin at least in the mouse model."

HairDAO has even run a human trial for T3, the thyroid drug that it sells via Anagen, involving six patients including Verbinnen and Bakst. In that trial, the participants were given a topical ointment to apply to their scalps, and the hair growth results were measured at the end of a year-long period. That research resulted in a preprint paper, which is available online. 

“It is important to note that this study involved only six participants, which is a small sample size,” a disclaimer on the study included in an update for DAO members explains. “As such, we make no claims about the safety or efficacy of topical T3 based on these results.”

The Anagen listings for its T3 formulations promise “outstanding” and “maximum” results.  

HairDAO conducts this work in collaboration with a handful of accredited researchers. The group says the T3 trial was “overseen” by Dr. Richard Powell, a Florida-based hair transplant surgeon whose name does not appear on the author list. Powell has close ties to HairDAO’s Chief Medical Officer, Dr. Blake Bloxham. The website for Powell’s practice says that it “exclusively uses in-house hair transplant technicians trained by Dr. Alan Feller and Dr. Blake Bloxham.” A 2023 YouTube video describes Powell as “part of Feller & Bloxham Medical.” An early draft of the T3 study design even indicates that both Powell and Bloxham would oversee it. 

According to messages posted to Discord by the founders, Bloxham has a 49 percent stake in Anagen’s US operations. He’s even participated in the business side of expanding the service, such as by setting up corporate entities, according to Discord posts. 

The T3 study discloses several conflicts of interest—including that HairDAO has filed a patent—but does not mention Powell or Bloxham, as they are not listed as study authors. When reached for comment over email, Bloxham initially said, “I’d love to answer any questions you have. In fact, I’d be happy to discuss HairDAO/Anagen in general; who we are, what we do, and why we do it. Pretty interesting stuff!” He did not respond to multiple follow-ups sent over email and Discord. In fact, none of HairDAO’s research collaborators contacted by 404 Media, including Powell, Paus, and Higgins, responded to requests for comment. 

Verbinnen and Bakst did not respond to multiple requests for comment sent over email and Discord. 

In the latest issue of Hair Cuts, HairDAO claims that Anagen earned $1,000 in its first two days of sales. As for its shampoo, FolliCool, it says that it has sold $29,000 worth of product. Meanwhile, its crypto token is worth roughly $25 a pop, down from a high of $150, with a market cap of over $16 million. 

Its marketing costs to date? $0.

Podcast: ICE's 'Backdoor' Into a Nationwide AI Surveillance Network

This week is a bumper episode all about Flock, the automatic license plate reading (ALPR) cameras across the U.S. First, Jason explains how we found that ICE essentially has backdoor access to the network through local cops. After the break, Joseph tells us all about Nova, a planned Flock product that will make the technology even more invasive by using hacked data. In the subscribers-only section, Emanuel details the massive changes AI platform Civitai has made, and why it's partly in response to our reporting.

Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version. It will also be in the show notes in your podcast player.

Civitai Ban of Real People Content Deals Major Blow to the Nonconsensual AI Porn Ecosystem

Civitai, an AI model sharing site backed by Andreessen Horowitz (a16z) that 404 Media has repeatedly shown is being used to generate nonconsensual adult content, is banning AI models designed to generate the likeness of real people, the site announced Friday.

The policy change, which Civitai attributes in part to new AI regulations in the U.S. and Europe, is the most recent in a flurry of updates Civitai has made under increased pressure from payment processing service providers and 404 Media’s reporting. This recent change will, at least temporarily, significantly hamper the ecosystem for creating nonconsensual AI-generated porn.

“We are removing models and images depicting real-world individuals from the platform. These resources and images will be available to the uploader for a short period of time before being removed,” Civitai said in its announcement. “This change is a requirement to continue conversations with specialist payment partners and has to be completed this week to prepare for their service.”

Earlier this month, Civitai updated its policies to ban certain types of adult content and introduced further restrictions around content depicting the likeness of real people in order to comply with requests from an unnamed payment processing service provider. This attempt to appease the payment processing service provider ultimately failed. On May 20, Civitai announced that the provider cut off the site, which currently can’t process credit card payments, though it says it will get a new provider soon. 

“We know this will be frustrating for many creators and users. We’ve spoken at length about the value of likeness content, and this decision wasn’t made lightly,” Civitai’s statement about banning content depicting the likeness of real people said. “But we’re now facing an increasingly strict regulatory landscape - one evolving rapidly across multiple countries.”

The announcement specifically cites President Donald Trump’s recent signing of the Take It Down Act, which criminalizes and holds platforms liable for nonconsensual AI-generated adult content, and the EU AI Act, a comprehensive piece of AI regulation that was enacted last year.

💡
Do you know other sites that allow people to share models of real people? I would love to hear from you. Using a non-work device, you can message me securely on Signal at ‪(609) 678-3204‬. Otherwise, send me an email at [email protected].

As I’ve reported since 2023, Civitai’s policies against nonconsensual adult content did little to diminish the site’s central role in the AI-generated nonconsensual content ecosystem. Civitai’s policy allowed people to upload custom AI image generation models (LoRAs, checkpoints, etc.) designed to recreate the likeness of real people. These models were mostly of huge movie stars and minor internet celebrities, but as our reporting has shown, also completely random, private people. Civitai also allowed users to share custom AI image generation models designed to depict extremely specific and graphic sex acts and fetishes, but it always banned users from producing nonconsensual nudity or porn.

However, by embedding myself in huge online spaces dedicated to creating and sharing nonconsensual content, I saw how easily people put these two types of models together. Civitai users couldn’t generate and share nonconsensual porn on Civitai itself, but they could download the models, combine them, generate nonconsensual porn of real people locally on their machines or on various cloud computing services, and post it to porn sites, Telegram, and social media. I’ve seen people in these spaces explain over and over again how easy it was to create nonconsensual porn of YouTubers, Twitch streamers, or barely known Instagram users by combining models uploaded to Civitai, and linking to those models hosted on Civitai.

One Telegram channel dedicated to AI-generating nonconsensual porn reacted to Civitai’s announcement with several users encouraging others to grab as many AI models of real people as they could before Civitai removed them. In that channel, users complained that these models were already removed, and my searches of the site have shown the same. 

“The removal of those models really affect me [sic],” one prolific creator of nonconsensual content in the Telegram channel said. 

When Civitai first announced that it was being pressured by its payment processing service provider, several users started an archiving project to save all the models on the site before they were removed. A Discord server dedicated to this project now has over 100 members, but it appears Civitai has made many models inaccessible sooner than these users anticipated. One member of the archiving project said that there “are many thousands such models which cannot be backed up.”

Unfortunately, while Civitai’s recent policy changes, especially its removal of AI models of real people, appear for now to have impacted people who make nonconsensual AI-generated porn, it’s unlikely that the changes will slow them down for long. The people who originally created the models can always upload them to other sites, including some that have already positioned themselves as Civitai competitors. 

It’s also unclear how Civitai intends to keep users from uploading AI models designed to generate the likeness of real people who are not well-known celebrities, as automated systems would not be able to detect these models. 

Civitai's CEO Justin Maier told me in an email that "Uploaders must identify any content that depicts a real person; those uploads are automatically rejected." He also said the site uses a company called Clavata to flag well-known public figures, that people can "file a likeness claim" that will be reviewed and removed in 24 hours, and that it's piloting "an opt-in service with a third-party vendor so individuals can register a privacy-preserving face hash and have future uploads blocked at submission."

"No system is perfect with billions of unique faces, but combining these layers gives us the best coverage currently available for both celebrities and private individuals," Maier said. "We’ll keep tuning the models and expanding the registry pilot as the technology matures."

Update: This story has been updated with comment from Civitai CEO Justin Maier.

ICE Taps into Nationwide AI-Enabled Camera Network, Data Shows

Data from a license plate-scanning tool that is primarily marketed as a surveillance solution for small towns, to combat crimes like carjackings or to find missing people, is being used by ICE, according to data reviewed by 404 Media. Local police around the country are performing lookups in Flock’s AI-powered automatic license plate reader (ALPR) system for “immigration”-related searches and as part of other ICE investigations, giving federal law enforcement side-door access to a tool that it currently does not have a formal contract for.

The massive trove of lookup data was obtained by researchers who asked to remain anonymous to avoid potential retaliation and shared with 404 Media. It shows more than 4,000 nationwide and statewide lookups by local and state police done at the behest of the federal government, as an “informal” favor to federal law enforcement, or with a potential immigration focus, according to statements from police departments and sheriff offices collected by 404 Media. It shows that, while Flock does not have a contract with ICE, the agency sources data from Flock’s cameras by making requests to local law enforcement. The data reviewed by 404 Media was obtained using a public records request from the Danville, Illinois Police Department, and shows the Flock search logs from police departments around the country.

As part of a Flock search, police have to provide a “reason” they are performing the lookup. In the “reason” field for searches of Danville’s cameras, officers from across the U.S. wrote “immigration,” “ICE,” “ICE+ERO,” which is ICE’s Enforcement and Removal Operations, the section that focuses on deportations; “illegal immigration,” “ICE WARRANT,” and other immigration-related reasons. Although lookups mentioning ICE occurred across both the Biden and Trump administrations, all of the lookups that explicitly list “immigration” as their reason were made after Trump was inaugurated, according to the data.
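
That free-text “reason” field is what makes this kind of audit possible: the records-request export is essentially a table of who searched, when, and why. A minimal sketch of the sort of filtering involved, assuming a hypothetical CSV export with "agency" and "reason" columns (a real Flock audit log may label its fields differently):

import csv
import re
from collections import Counter

# Hypothetical column names; a real Flock audit export may label these fields differently.
IMMIGRATION_TERMS = {"immigration", "ice", "ero", "deportation"}

def immigration_related(reason: str) -> bool:
    """Check whether the free-text 'reason' mentions an immigration-related term."""
    words = set(re.findall(r"[a-z]+", reason.lower()))  # "ICE+ERO" -> {"ice", "ero"}
    return bool(words & IMMIGRATION_TERMS)

def tally_by_agency(path: str) -> Counter:
    """Count immigration-related lookups per searching agency in a search-log CSV."""
    counts: Counter = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if immigration_related(row.get("reason", "")):
                counts[row.get("agency", "unknown")] += 1
    return counts

if __name__ == "__main__":
    for agency, n in tally_by_agency("flock_search_logs.csv").most_common(20):
        print(f"{agency}: {n}")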

💡
Do you know anything else about Flock? We would love to hear from you. Using a non-work device, you can message Jason securely on Signal at jason.404 and Joseph at joseph.404

The CIA Secretly Ran a Star Wars Fan Site

“Like these games you will,” reads the quote next to a cartoon image of Yoda on the website starwarsweb.net. Those games include Star Wars Battlefront 2 for Xbox; Star Wars: The Force Unleashed II for Xbox 360; and Star Wars The Clone Wars: Republic Heroes for Nintendo Wii. Next to that are links to a Star Wars online store with the tagline “So you Wanna be a Jedi?” and an advert for a Lego Star Wars set.

The site looks like an ordinary Star Wars fan website from around 2010. But starwarsweb.net was actually a tool built by the Central Intelligence Agency (CIA) to covertly communicate with its informants in other countries, according to an amateur security researcher. The site was part of a network of CIA sites that Iranian authorities first discovered more than ten years ago, a discovery that preceded a wave of deaths of CIA sources in China in the early 2010s.

Penguin Poop Helps Antarctica Stay Cool

Welcome back to the Abstract!

We begin this week with some scatological salvation. I dare not say more. 

Then, swimming without a brain: It happens more often than you might think. Next, what was bigger as a baby than it is today? Hint: It’s still really big! And to close out, imagine the sights you’ll see with your infrared vision as you ride an elevator down to Mars. 

Fighting the Climate Crisis, One Poop at a Time

Boyer, Matthew et al. “Penguin guano is an important source of climate-relevant aerosol particles in Antarctica.” Communications Earth & Environment. 

The path to a more stable climate in Antarctica runs through the buttholes of penguins. 

Penguin guano, the copious excrement produced by the birds, is rich in ammonia and methylamine gas. Scientists have now discovered that these guano-borne gases stimulate particle formation that leads to clouds and aerosols, which, in turn, cool temperatures in the remote region. As a consequence, guano “may represent an important climate feedback as their habitat changes,” according to a new study. 

“Our observations show that penguin colonies are a large source of ammonia in coastal Antarctica, whereas ammonia originating from the Southern Ocean is, in comparison, negligible,” said researchers led by Matthew Boyer of the University of Helsinki. “Dimethylamine, likely originating from penguin guano, also participates in the initial steps of particle formation, effectively boosting particle formation rates up to 10,000 times.”

Boyer and his colleagues captured their measurements from a site near Marambio Base on the Antarctic Peninsula, in the austral summer of 2023. At times when the site was downwind of a nearby colony of 60,000 Adelie penguins, the atmospheric ammonia concentration spiked to 1,000 times higher than baseline. Moreover, the ammonia levels remained elevated for more than a month after the penguins migrated from the area. 

“The penguin guano ‘fertilized’ soil, also known as ornithogenic soil, continued to be a strong source of ammonia long after they left the site,” said the team. “Our data demonstrates that there are local hotspots around the coast of Antarctica that can yield ammonia concentrations similar in magnitude to agricultural plots during summer…This suggests that coastal penguin/bird colonies could also comprise an important source of aerosol away from the coast.” 

“It is already understood that widespread loss of sea ice extent threatens the habitat, food sources, and breeding behavior of most penguin species that inhabit Antarctica,” the researchers continued. “Consequently, some Antarctic penguin populations are already declining, and some species could be nearly extinct by the end of the 21st century. We provide evidence that declining penguin populations could cause a positive climate warming feedback in the summertime Antarctic atmosphere, as proposed by a modeling study of seabird emissions in the Arctic region.”

The power of penguin poop truly knows no earthly bounds. Guano, already famous as a super-fertilizer and a pillar of many ecosystems, is also creating clouds out of thin air, with macro knock-on effects. These guano hotspots act as a bulwark against a rapidly changing climate in Antarctica, which is warming twice as fast as the rest of the world. We’ll need every tool we can get to curb climate change: penguin bums, welcome aboard.

A Swim Meet for Microbes

Hartl, Benedikt et al. “Neuroevolution of decentralized decision-making in N-bead swimmers leads to scalable and robust collective locomotion.” Communications Physics.

The word “brainless” is bandied about as an insult, but the truth is that lots of successful lifeforms get around just fine without a brain. For instance, microbes can locomote through fluids—a complex action—with no centralized nervous system. Naturally, scientists were like, “what’s that all about?” 

“So far, it remains unclear how decentralized decision-making in a deformable microswimmer can lead to efficient collective locomotion of its body parts,” said researchers led by Benedikt Hartl of TU Wien and Tufts University. “We thus investigate biologically motivated decentralized yet collective decision-making strategies of the swimming behavior of a generalized…swimmer.”

Bead-based simulated microorganism. Image: TU Wien

The upshot: Decentralized circuits regulate movements in brainless swimmers, an insight that could inspire robotic analogs for drug delivery and other functions. However, the real tip-of-the-hat goes to the concept artist for the above depiction of the team’s bead-based simulated microbe, who shall hereafter be known as Beady the Deformable Microswimmer.

Big Jupiter in Little Solar System

Batygin, Konstantin and Adams, Fred. “Determination of Jupiter’s primordial physical state.” Nature Astronomy.

Jupiter is pretty dang big at this current moment. More than 1,000 Earths could fit inside the gas giant; our planet is a mere gumball on these scales. But at the dawn of our solar system 4.5 billion years ago, Jupiter was at least twice as massive as it is today, and its magnetic field was 50 times stronger, according to a new study. 
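
For a sense of scale, that “more than 1,000 Earths” figure falls straight out of the radius ratio, using mean radii of roughly 69,911 km for Jupiter and 6,371 km for Earth:

\frac{V_J}{V_E} = \left(\frac{R_J}{R_E}\right)^3 \approx \left(\frac{69{,}911}{6{,}371}\right)^3 \approx 11^3 \approx 1{,}300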

“Our calculations reveal that Jupiter was 2 to 2.5 times as large as it is today, 3.8 [million years] after the formation of the first solids in the Solar System,”  said authors Konstantin Batygin of the California Institute of Technology and Fred Adams of the University of Michigan. “Our findings…provide an evolutionary snapshot that pins down properties of the Jovian system at the end of the protosolar nebula’s lifetime.”

The team based their conclusions on the subtle orbital tilts of two of Jupiter’s tiny moons, Amalthea and Thebe, which allowed them to reconstruct conditions in the early Jovian system. It’s nice to see Jupiter’s more offbeat moons get some attention; Europa is always hogging the spotlight. (Fun fact: lots of classic sci-fi stories are set on Amalthea, from Boris and Arkady Strugatsky’s “The Way to Amalthea” to Arthur C. Clarke’s “Jupiter Five.”)

Now That’s Infracredible

Ma, Yuqian et al. “Near-infrared spatiotemporal color vision in humans enabled by upconversion contact lenses.” Cell. 

I was hooked on this new study by the second sentence, which reads: “However, the capability to detect invisible multispectral infrared light with the naked eye is highly desirable.” 

Okay, let's assume that the public is out there, highly desiring infrared vision, though I would like to see some poll-testing. A team has now developed an upconversion contact lens (UCL) that detects near-infrared (NIR) light and converts it into blue, green, and red wavelengths. While this is not the kind of inborn infrared vision you’d see in sci-fi, it does expand our standard retinal retinue, with fascinating results. 

A participant having lenses fitted. Image: Yuqian Ma, Yunuo Chen, Hang Zhao

“Humans wearing upconversion contact lenses (UCLs) could accurately recognize near-infrared (NIR) temporal information like Morse code and discriminate NIR pattern images,” said researchers led by Yuqian Ma of the University of Science and Technology of China. “Interestingly, both mice and humans with UCLs exhibited better discrimination of NIR light compared with visible light when their eyes were closed, owing to the penetration capability of NIR light.”  

The study reminds me of the legendary scene in Battlestar Galactica where Dean Stockwell, as John Cavil, exclaims: “I don't want to be human. I want to see gamma rays, I want to hear X-rays, and I want to smell dark matter.” Maybe he just needed some upgraded contact lenses! 

Hold the Door! (to Mars)

Aslanov, Vladimir. “An anchored space elevator under the L1 Mars-Phobos libration point.” Acta Astronautica.

This week in space elevator news, why not set one up on the Martian moon Phobos? A new study envisions anchoring a tether to Phobos, a dinky little space potato that’s about the size of Manhattan, and extending it out some 3,700 miles, almost to the surface of Mars. Because Phobos is tidally locked to Mars (the same side always faces the planet), it might be possible to shuttle back and forth between Mars and Phobos on a tether. 

“The building of such a space elevator [is] a feasible project in the not too distant future,” said author Vladimir Aslanov of the Moscow Aviation Institute. “Such a project could form the basis of many missions to explore Phobos, Mars and the space around them.”

Indeed, this is far from the first time scientists have pondered the advantages of a Phobian space elevator. Just don’t be the jerk that pushes all the buttons. 

Thanks for reading! See you next week. 

Behind the Blog: Feeling Wowed, Getting Cozy

This is Behind the Blog, where we share our behind-the-scenes thoughts about how a few of our top stories of the week came together. This week, we discuss the benefits of spending 14 hours a day on the internet, getting cozy for AI slop, and what a new law in Sweden means for the rest of us.

JOSEPH: So I don’t cover generative AI anywhere near as much as Emanuel, Sam, or Jason. Sometimes I think that’s a benefit, especially for the podcast, because I can ask questions more as an outsider or observer than someone deep in the weeds about all these different models and things, then the others can provide their expertise.

As a general outsider or just ordinary passive consumer of AI slop now that it’s ubiquitous, I saw videos this week that I’m sure many other people did: those from Google’s Veo 3.

Here’s a quick selection of ones I came across:

Authors Are Accidentally Leaving AI Prompts In their Novels

Fans reading through the romance novel Darkhollow Academy: Year 2 got a nasty surprise last week in chapter 3. In the middle of a steamy scene between the book’s heroine and the dragon prince Ash, there’s this: "I've rewritten the passage to align more with J. Bree's style, which features more tension, gritty undertones, and raw emotional subtext beneath the supernatural elements:"

It appeared as if the author, Lena McDonald, had used an AI to help write the book, asked it to imitate the style of another author, and left behind evidence of having done so in the final work. As of this writing, Darkhollow Academy: Year 2 is hard to find on Amazon. Searching for it on the site won’t show the book, but a Google search will. 404 Media was able to purchase a copy and confirm that the book no longer contains the reference to copying Bree’s style. But screenshots of the paragraph remain in the book’s Amazon reviews and on its Goodreads page.

This is not the first time an author has left behind evidence of AI generation in a book; it’s not even the first time this year. 

Pocket, One of the Only Apps I Ever Liked, Is Shutting Down

Pocket, an app for saving and reading articles later, is shutting down on July 8, Mozilla announced today. 

The company sent an email with the subject line “Important Update: Pocket is Saying Goodbye,” around 2 p.m. EST and I immediately started wailing when I saw it. 

“You’ll be able to keep using the app and browser extensions until then. However, starting May 22, 2025, you won’t be able to download the apps or purchase a new Pocket Premium subscription,” the announcement says. Users can export saved articles until October 8, 2025, after which point all Pocket accounts and data will be permanently deleted. 

Hacker Conference HOPE Says U.S. Immigration Crackdown Caused Massive Crash in Ticket Sales

Hackers On Planet Earth (HOPE), the iconic and long-running hacking conference, says far fewer people have bought tickets for the event this year compared to last, with organizers believing it is due to the Trump administration’s mass deportation efforts and more aggressive detention of travelers entering the U.S.

“We are roughly 50 percent behind last year’s sales, based on being 3 months away from the event,” Greg Newby, one of HOPE’s organizers, told 404 Media in an email. According to hacking collective and magazine 2600, which organizes HOPE, the conference usually has around 1,000 attendees and the event is almost entirely funded by ticket sales. “Having fewer international attendees hurts the conference program, as well as the bottom line,” a planned press release says.

Why Does Google’s New Veo 3 AI Video Generator Love This Dad Joke?

On Tuesday, Google revealed the latest and best version of its AI video generator, Veo 3. It’s impressive not only in the quality of the video it produces, but also because it can generate audio that is supposed to seamlessly sync with the video. I’m probably going to test Veo 3 in the coming weeks like we test many new AI tools, but one odd feature I already noticed about it is that it’s obsessed with one particular dad joke, which raises questions about what kind of content Veo 3 is able to produce and how it was trained. 

This morning I saw that an X user who was playing with Veo 3 generated a video of a stand up comedian telling a joke. The joke was: “I went to the zoo the other day, there was only one dog in it, it was a Shih Tzu.” As in: “shit zoo.”

NO WAY. It did it. And, was that, actually funny?

Prompt:
> a man doing stand up comedy in a small venue tells a joke (include the joke in the dialogue) https://t.co/GFvPAssEHx pic.twitter.com/LrCiVAp1Bl

— fofr (@fofrAI) May 20, 2025

Other users quickly replied that the joke was posted to Reddit’s r/dadjokes community two years ago, and to the r/jokes community 12 years ago.

I started testing Google’s new AI video generator to see if I could get it to generate other jokes I could trace back to specific Reddit posts. This would not be definitive proof that Reddit provided the training data that resulted in a specific joke, but is a likely theory because we know Google is paying Reddit $60 million a year to license its content for training its AI models. 

To my surprise, when I used the same prompt as the X user above—“a man doing stand up comedy in a small venue tells a joke (include the joke in the dialogue)”—I got a slightly different looking video, but the exact same joke.

And when I changed the prompt a bit—“a man doing stand up comedy tells a joke (include the joke in the dialogue)”—I still got a slightly different looking video with the exact same joke.

Google did not respond to a request for comment, so it’s impossible to say why its AI video generator is producing the same exact dad joke even when it’s not prompted to do so, and where exactly it sourced that joke. It could be from Reddit, but it could also be from many other places where the Shih Tzu joke has appeared across the internet, including YouTube, Threads, Instagram, Quora, icanhazdadjoke.com, houzz.com, Facebook, Redbubble, and Twitter, to name just a few. In other words, it’s a canonical corny dad joke of no clear origin that’s been posted online many times over the years, so it’s impossible to say where Google got it. 

But it’s also not clear why this is the only coherent joke Google’s new AI video generator will produce. I’ve tried changing the prompts several times, and the result is either the Shih Tzu joke, gibberish, or incomplete fragments of speech that are not jokes. 

One prompt that was almost identical to the one that produced the Shih Tzu joke resulted in a video of a stand up comedian saying he got a letter from the bank.

The prompt “a man telling a joke at a bar” resulted in a video of a man saying the idiom “you can’t have your cake and eat it too.” 

The prompt “man tells a joke on stage” resulted in a video of a man saying some gibberish, then saying he went to the library.

Admittedly, these videos are hilarious in an absurd Tim & Eric kind of way, because no matter what nonsense the comedian is saying, the crowd always erupts into laughter. But they also clearly show that Google’s latest and greatest AI video generator is creatively limited in some ways. This is not the case with other generative AI tools, including Google’s own Gemini. When I asked Gemini to tell me a joke, the chatbot instantly produced different, coherent dad jokes. And when I asked it to do it over and over again, it always produced a different joke.  

Again, it’s impossible to say what Veo 3 is doing behind the scenes without Google’s input, but one possible theory is that it’s falling back to a safe, known joke, rather than producing the type of content that has embarrassed the company in the past, be it instructing users to eat glue or generating Nazi soldiers as people of color.
