Google and Amazon AI Say Hitler’s Mein Kampf Is ‘a True Work of Art’

Google’s featured snippet is pulling in an Amazon AI summary of Adolf Hitler’s Nazi manifesto Mein Kampf that calls it “a true work of art” in the latest AI-related fuckup affecting top search results.

As of writing, searching for “mein kampf positive reviews” returned a result that was pulled from an AI-generated summary of an Amazon listing’s customer reviews. So, it’s a search algorithm attempting to summarize an AI summary. The full AI summary on Amazon says: “Customers find the book easy to read and interesting. They appreciate the insightful and intelligent rants. The print looks nice and is plain. Readers describe the book as a true work of art. However, some find the content boring and grim. Opinions vary on the suspenseful content, historical accuracy, and value for money.”

As I’m writing this, Google says “An AI Overview is not available for this search,” but the Amazon AI summary appeared in large text directly below it, in the space where an overview would typically be, above other web results. This is what Google calls a featured snippet: “Google’s automated systems select featured snippets based on how well they answer the specific search request and how helpful they are to the user,” the company says. A highlight appeared, added by Google, over the phrase “easy to read and interesting.” Notably, the featured snippet doesn’t quote everything from Amazon’s AI, so it is itself a summary.

Google's result for "mein kampf positive reviews" as of early Thursday morning, showing the Amazon review as a "featured snippet."
Screenshot of Amazon's AI-generated review summary.

Alexios Mantzarlis, the director of the security, trust, and safety initiative at Cornell Tech and formerly principal of Trust & Safety Intelligence at Google, first spotted the result.

Uh... Amazon's AI summary of Mein Kampf is even worse, and pollutes Google results for [Mein Kampf positive reviews]

Alexios Mantzarlis (@mantzarlis.com) 2025-03-06T13:45:31.788Z

After I contacted Google for comment (the company hasn’t responded as of writing), an AI Overview did appear. It notes that the book is “widely condemned for its hateful and racist ideology,” but that historical analyses “might point to aspects of the book that could be considered ‘positive’ from a purely literary or rhetorical perspective.”

Screenshot of Google's search result for "mein kampf positive reviews" as of late Thursday morning, showing the AI Overview result.

This is, at least, a better summation of the conversation around Hitler’s book than the one Amazon’s AI summary gives. The AI-generated review summary on the Amazon listing also shows links to see reviews that mention specific words, like “readability,” “read pace,” and “suspenseful content.” Enough people mentioned Mein Kampf being boring that there’s a “boredom” link, too.

Amazon did not immediately respond to a request for comment.

The 2,067 reviews for this specific copy of Hitler’s fascist manifesto are mostly positive, and taken extremely literally, the blueprint for Nazism is easy to read and, in some sense, “interesting.” Reviewing the roadmap for the Holocaust from the world’s most infamous genocidal dictator with “five stars” seems twisted, but the reviews are much more nuanced than that, in a way that a human can understand and AI clearly doesn’t.

“Mein Kampf, by Adolf Hitler, should be read by everyone in the world who are interested in a world of peace, social responsibility, and worldwide cooperation,” one reviewer wrote, in an honestly pretty concerning start to a very long review. But they go on to write more that clarifies their point of view: “This evil book presents a dark vision of how to go about creating tyranny in a democratic society so that one, similar to Russia, is created. [...] Also, Hitler is an excellent writer; he is not a rambling madman writing disconnected ideas and expressing a confusing methodology. His text is easy reading, and it is a world classic that is a must read.”

Another five-star review says: “Chilling to begin reading this book and realize that these are the words written by Adolf Hitler. Read it and absorb what he says in his own words and you soon grasp what he means. [...] We are bound to repeat History if we don't understand mistakes that were made in the past.”

These aren’t “positive” reviews; most of the five-star reviews are noting the quality of the print or shipping, and not endorsing the contents of the book.

Mein Kampf has never been banned in the U.S. (unlike plenty of other books about race, gender, and sex), but Amazon did briefly ban listings of the book from its platform in 2020 before reinstating it.

Google’s AI Overview shoots itself in the algorithmic foot frequently, so it’s noteworthy that it’s sitting this result out. When it launched in May 2024 as a default feature on searches, it was an immediate and often hysterical mess, telling people it’s chill to eat glue and that they should consume one small rock a day. In January, the feature was telling users to use the most famous sex toy in the world with children for behavioral issues. These weird results are beside the bigger point: Google’s perversion of its own search function—its most popular and important product—is a deep problem that it still hasn’t fixed, and that has real repercussions for the health of the internet. At first, AI Overview was so bad Google added an option to turn it off entirely, but the company is still hanging on to the feature despite all of this. 

The Mein Kampf AI summaries are also an example of how AI is starting to eat itself online, and the cracks are showing. Studies in the last few years show that AI models are consuming AI-generated content as training data in a way that’s polluting and destroying the models themselves.
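To make that failure mode concrete, here is a minimal toy sketch of the feedback loop those studies describe, with a simple Gaussian standing in for a full generative model; the numbers and the choice of toy model are my illustrative assumptions, not taken from the studies themselves.

```python
import numpy as np

# Toy "model collapse" loop: fit a model to data, sample new "training
# data" from that model, refit, and repeat. A Gaussian stands in for a
# full generative model; the numbers are illustrative only.

rng = np.random.default_rng(0)

# Generation 0: "human" data with mean 0 and standard deviation 1.
data = rng.normal(loc=0.0, scale=1.0, size=50)

for generation in range(1, 21):
    mu, sigma = data.mean(), data.std()    # fit this generation's model
    data = rng.normal(mu, sigma, size=50)  # next generation trains on model output
    print(f"generation {generation:2d}: mean={mu:+.2f}, std={sigma:.2f}")

# On average the standard deviation shrinks generation over generation:
# each fit slightly underestimates the spread, and the error compounds
# once models only ever see model output instead of the original data.
```

Each generation trains only on the previous generation’s output, and the diversity of what it can produce tends to shrink, a toy version of the pollution the studies describe.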

Chinese AI Video Generators Unleash a Flood of New Nonconsensual Porn

A number of AI video generators, mostly released by Chinese companies, lack the most basic guardrails that prevent people from generating nonconsensual nudity and pornography, and are already widely used for that exact purpose in online communities dedicated to creating and sharing that type of content.

A 404 Media investigation into these AI video generators shows that the same kind of ecosystem that’s developed around AI image generators and nonconsensual content has already been replicated around AI video generators, meaning that only a single image of someone is now required to create a short nonconsensual adult video of them. Most of these videos are created by abusing mainstream tools from companies with millions of dollars in venture capital funding, and are extremely easy to produce, requiring only a reference image and a text prompt describing a sexual act. Other tools use more complicated workflows that require more technical expertise, but are based on technology produced by some of the biggest tech companies in the world. The latter are free to use, and have attracted a large community of hobbyists who have produced guides for these workflows, as well as tools and models that make those videos easier to create.

“[These AI video generators] need to put in safeguards to prevent the prompting and creation of NCII [nonconsensual intimate images],” Hany Farid, a professor at UC Berkeley and one of the world’s leading experts on synthetic media, told me in an email. “OpenAI’s DALL-E, for example, has some pretty good semantic guardrails on the user prompt input, and image filtering on the image output to prevent the widespread misuse of their image generator. This type of output filtering is relatively standard now and used in many social media platforms like Facebook/Instagram/YouTube to limit the uploading of NSFW content.”
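As a rough illustration of the input-and-output filtering pattern Farid describes, here is a minimal sketch; the blocked-term list, the scoring threshold, and the stub classifier are all hypothetical stand-ins, since real systems use trained semantic classifiers rather than keyword matching.

```python
from dataclasses import dataclass

# Sketch of the two-stage guardrail pattern described above: screen the
# text prompt before generation, then classify the generated image before
# returning it. BLOCKED_TERMS and nsfw_score() are hypothetical stand-ins.

BLOCKED_TERMS = {"nude", "undress", "explicit"}  # illustrative only

@dataclass
class Verdict:
    allowed: bool
    reason: str = ""

def screen_prompt(prompt: str) -> Verdict:
    """Input guardrail: reject prompts that match blocked concepts."""
    lowered = prompt.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            return Verdict(False, f"prompt matched blocked term: {term}")
    return Verdict(True)

def nsfw_score(image_bytes: bytes) -> float:
    """Output guardrail stub: substitute a trained NSFW image classifier."""
    return 0.0  # placeholder score; a real classifier goes here

def generate_safely(prompt: str, generate) -> bytes:
    """Run the generator only if both guardrails pass."""
    verdict = screen_prompt(prompt)
    if not verdict.allowed:
        raise ValueError(verdict.reason)
    image = generate(prompt)
    if nsfw_score(image) > 0.8:  # threshold is a tunable assumption
        raise ValueError("generated image failed the output filter")
    return image
```

The point of checking twice is the one Farid makes: prompt screens are easy to evade with euphemism, which is why output-side image filtering matters.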

💡
Do you know anything else about people abusing AI tools? I would love to hear from you. Using a non-work device, you can message me securely on Signal at emanuel.404. Otherwise, send me an email at [email protected].

The most popular tool I’ve seen people use to create nonconsensual intimate videos is Pixverse, which is made by a Beijing-based company called AIsphere and founded by Wang Changhu, the former “head of vision technology” at TikTok owner ByteDance. People who create nonconsensual content use different AI video generators for different purposes, and they commonly use Pixverse to create videos that make female celebrities look as if they’re taking their tops off. 

0:00
/0:05

A nonconsensual video created with Pixverse that was shared on Telegram and censored by 404 Media.

404 Media is not sharing the exact prompts that produce these videos, but they use the same loophole we’ve seen multiple times with other generative AI tools, most notably Microsoft’s AI image generator Designer, which was used to create nude images of Taylor Swift viewed by millions of people on X. Essentially, the prompts describe a sexual act or nudity without using any explicit terms that developers use to flag prompts their tool shouldn’t generate. I’ve seen this method used across a variety of generative AI tools since the Taylor Swift incident in January of 2024, but developers have generally added stricter guardrails against this type of abuse since then. A new crop of AI video generators, however, seems remarkably lax in this respect.

In addition to accepting prompts they shouldn’t, these AI video generators appear easier to abuse because of their image-to-video feature, which allows users to feed a single still image to the generator, then type a text prompt to animate that image however they like. With Pixverse, people took images of celebrities from the red carpet, social media, and other sources, and then typed a prompt in which they described the celebrity undressing. In other instances, users first AI-generated a nonconsensual explicit image of a real person, then fed that image to the video generators to animate it. In fact, many of the nonconsensual AI videos I’ve seen reuse the same Taylor Swift images that went viral in 2024, or other nonconsensual explicit images of celebrities created with Microsoft’s Designer before the company introduced stronger guardrails.

Judging by the hundreds of videos I’ve seen, it appears certain AI video generators are better at producing specific types of nonconsensual videos. For example, while Pixverse is often used to make videos in which women take their tops off, Hailuo, an AI video generator from a Chinese company called Minimax, is often used to make videos of women who turn around and shake their bare ass at the camera. Hailuo is also used to make videos where two people can be made to kiss, an increasingly popular use of AI video generators, and a feature that’s been advertised on Instagram. KlingAI, from the Chinese company Kuaishou, has been used to animate nonconsensual AI-generated nude images, or to make videos that drench the person in the video with white liquid that looks like semen, but people in this community say that there’s been a “purge” on Kling that now makes it harder to produce these videos.

The community that’s dedicated to making these videos and sharing prompts that bypass guardrails tends to move from AI tool to AI tool as they discover new vulnerabilities or as companies close loopholes. It’s easy to identify which AI video generator they are using because sometimes they either say so or share instructions and screenshots of the app’s interface. In many cases, the videos they produce include a watermark with the AI generator’s branding. 

At the time of writing, many users have flocked to Pika, a US-based AI video generator. People in this community have discovered that Pika will easily produce very graphic nonconsensual videos, including videos of celebrities performing oral sex. Again, all users need in order to produce these videos is a single image of a real person and a text prompt.

“Pika is very liberal with both image uploaded and prompt,” one user in a Telegram community for sharing nonconsensual content, who uploaded nonconsensual content created with Pika, said. Another user, who shared a video created with Pika that animates a graphic image of a female celebrity, suggested that if users’ prompts get blocked, they can just keep trying to generate the video over and over again until Pika produces the video. I was able to produce a nonconsensual video with Pika using the instructions shared in this community on my first try.

Users can also produce this content with these apps on their phones. Pixverse, Kling, Hailuo, and Pika are all available via the Apple App Store; Pixverse, Kling, and Hailuo are available via the Google Play Store.

Apple did not respond to my request for comment. Google acknowledged my request for comment but did not provide one in time for publication. As my previous reporting has shown, both companies have struggled to deal with the “dual use” nature of these apps, which seem innocent on the app stores but can also be used to produce nonconsensual content that violates the companies’ policies.

“While the lewd photos and videos may be fake, the harms to victims of these AI-generated deepfakes are very real. My DEFIANCE Act would finally give victims the ability to hold perpetrators accountable, and it’s time for Congress to pass it into law,” Senator Dick Durbin (D-IL), Ranking Member of the Senate Judiciary Committee, told me in an email. In February, Durbin sent Mark Zuckerberg a letter asking why the company can’t get a handle on AI nudify ads being promoted on Meta’s platforms. In December, other members of Congress pushed the CEOs of Apple and Google to explain their role in hosting these apps on their app stores as well. 

“For AI services that refuse to put in reasonable guardrails, Apple and Google should remove them from their app-store,” Farid said. “And, if these services utilize any financial services (Visa, Mastercard, Amex, PayPal), these financial services should drop these bad actors thus crippling their ability to monetize.”

The vast majority of people who AI generate nonconsensual videos use these types of apps because they are free or cheap, easy to access, and easy to use. But users with more technical expertise and access to powerful GPUs are now using open AI models produced by tech giants to make more believable videos without any restriction. 

The most popular open AI video generation model in this community is HunyuanVideo, which was developed by the Chinese tech giant Tencent. Tencent published a paper introducing HunyuanVideo on December 3, 2024, along with GitHub and Hugging Face pages that explain how it works and where users can download, run, and modify the model on their own. What followed was an accelerated version of what we’ve seen with the evolution of Stable Diffusion, an open AI image generation model: people immediately started to modify HunyuanVideo to create models designed to produce hardcore porn and the likeness of specific real people, ranging from the biggest movie stars in the world to lesser-known Instagram influencers and YouTubers.

These models are primarily distributed on Civitai, a site where users share customized AI models, which in recent months significantly grew its AI video category and has a category dedicated to HunyuanVideo models. Civitai’s site shows that some of the most downloaded HunyuanVideo models are “HunyuanVideo POV Missionary” (18,000 downloads), “Titty Drop Hunyuan Video LoRA” (14,300 downloads), and “HunyuanVideo Cumshot” (9,000 downloads), each promoted with a short video showing the act described in their title. Some of the other most popular models are of celebrities like Elizabeth Olsen, Natalie Portman, and Pokimane. 

When asked how he made AI generated pornographic videos of specific female YouTubers and Twitch streamers, one of the most prolific creators in a Telegram community for sharing nonconsensual content told others in that channel that they use HunyuanVideo, and in one instance linked to a specific model hosted on Civitai that’s been modified to create videos with the likeness of a specific female Twitch streamer. 

Since Tencent released HunyuanVideo in early December, hundreds of custom HunyuanVideo AI models have been shared on Civitai, the most popular of which are either dedicated to specific celebrities or sexual acts.

As I reported on February 27, Wan 2.1, another open AI video generation model developed by Chinese tech giant Alibaba, was also immediately adopted by this community to create pornography. It took about 24 hours after Wan 2.1 was released on February 24 for modified models like “Better Titfuck” to start popping up on Civitai.

Previous 404 Media stories about Civitai have shown that it is widely used by people who create nonconsensual content. Civitai allows users to share AI models that have been modified to produce the likeness of real people and models that have been modified to produce pornography, but does not allow users to share media or models of nonconsensual pornography. However, as 404 Media’s previous stories have shown, there’s nothing preventing Civitai users from downloading the models and using them to produce nonconsensual content off-site. 

“Please note that our community guidelines strictly prohibit any requests related to pornography, violence, illegal activities, or anything involving celebrities. These types of requests will be rejected in accordance with our policies,” a representative from Minimax, which makes Hailuo, told me after I reached out for comment. The company did not respond to a question about what it’s doing to stop people from making nonconsensual content, despite it being against its policies. 

“HunyuanVideo is an open-source project available to developers and the community,” a Tencent spokesperson told me in an email. “Our acceptable use policies prohibit the creation of illegal, unlawful, or universally undesirable content. We encourage the safe and fair use of HunyuanVideo and do not condone the misuse of its open-source tools or features.”

Tencent also said that it’s aware that people have used its software to create specialized models that produce illegal or prohibited content, which is outside the intended and permissible use of HunyuanVideo.

KlingAI maker Kuaishou, Pixverse maker AIsphere, and Alibaba did not respond to a request for comment.

French University to Fund American Scientists Who Fear Trump Censorship

A leading French university is inviting American scientists who fear their research on subjects like climate might be censored by Donald Trump’s administration to do their work in France.

“The program is called ‘safe place for science,’ and will provide 15 million Euros in funding for some 15 researchers over a 3-year period,” Clara Bufi, a spokesperson for Aix Marseille University, told me in an email. “It targets, but is not limited to, climate and environment, health, and human and social sciences.”

A press release from Aix Marseille University today said that the program is for American scientists who “may feel threatened or hindered in their research,” and is “dedicated to welcoming scientists wishing to pursue their work in an environment conducive to innovation, excellence and academic freedom.”

💡
Are you doing research that's compromised by the Trump Executive Orders? I would love to hear from you. Using a non-work device, you can message me securely on Signal at emanuel.404. Otherwise, send me an email at [email protected].

In an interview with AFP, university management said that the invitation is in keeping with the “DNA of Marseille” and its values, and that it has previously invited researchers from Ukraine, Yemen, Afghanistan, and Palestine as part of a program that supports researchers and artists forced into exile.

Aix Marseille University’s press release doesn’t mention Trump by name but is obviously referring to his administration’s unprecedented dismantling of the federal government and specifically its withdrawing of support for any research that even mentions “climate.” 

The Trump administration, and Elon Musk’s Department of Government Efficiency in particular, have already frozen federal grants and loans for the National Institutes of Health and the US National Science Foundation, and fired thousands of workers across the federal government, including at the National Oceanic and Atmospheric Administration, which is critical for weather forecasting during natural disasters. The language of many of Trump’s executive orders is also so broad that researchers at public universities and other research institutions worry they’ll lose funding for their work if they even mention climate, gender, race, or equity, terms that the administration has been trying to wipe off any federal site and program.

Generously, Aix Marseille University is offering a kind of lifeboat to scientists that will not only help them earn a living, but also continue to do presumably important research on some of the greatest environmental, technological, and medical challenges facing humanity. More cynically, another developed nation is perhaps seeing an opportunity to benefit from an imminent brain drain in the United States caused by the rise of an anti-science authoritarian regime.

Either way, the offer is a dire sign of the situation in the United States. Historically, scientists and artists defected to America and other democracies from places like Nazi Germany and the Soviet Union, not from America. 

This Game Created by AI 'Vibe Coding' Makes $50,000 a Month. Yours Probably Won’t

A game created with AI in just 30 minutes is generating more than $50,000 a month, its creator claims—and could be a peek at how AI can, and can’t, change game design in the future.

fly.pieter.com, an in-browser “fun free-to-play MMO flight sim made with AI” was made by Pieter Levels, who amassed a huge online following for pioneering the practice of quickly developing and launching software and startups with the help of AI. As he explains in his X bio, “All my websites/apps/startups/projects are built by just me with vanilla HTML, JS with jQuery, PHP and SQLite. I'm very fast with my own little stack. I don't collaborate with other people and prefer shipping fast by myself.” 

This is sometimes referred to as “vibe coding,” which generally means being less methodical and detail oriented, telling the AI tool what you want, and getting it to work without worrying about the code base being messy.

Levels’ X bio also lists how much revenue he claims to currently make from each of his projects, including $132,000 a month from PhotoAI, a service that says you can “fire your photographer” and “generate photo and video content for your social media with AI” instead. He was also on Lex Fridman’s podcast six months ago, if you want to listen to him talk about how he lives and works for almost four hours.

“Today I thought what if I ask Cursor to build a flight simulator,” Levels posted to X on February 22, referring to AI coding software. “So I asked ‘make a 3d flying game in browser with skyscrapers.’ And after many questions and comments from me I now have the official [fly.pieter.com] Flight Simulator in vanilla HTML and JS.” 

About two weeks later, Levels said on X that he is on track to make $52,360 a month from the project. That’s an impressive milestone to hit in such a short time, and with a game that was essentially started with a prompt for an AI tool, but the reality of the situation is a bit more complicated. 

To start, as Levels said on X, $360 of that figure comes from in-game purchases, specifically 12 jets that players in the game bought for $29.99 each. In-game purchases are how free-to-play games traditionally make money, and while $360 in such a short time is not nothing, it’s not nearly as impressive as the $52,000 a month the game is making from 22 in-game ads “sold at various prices.”

One of the biggest problems in video games today, as with most other forms of media and entertainment, is discoverability. There’s so much stuff online, it’s hard for any one thing to stand out and find an audience. Over 19,000 games were released on Steam last year, many of them free-to-play, and very few of them got enough eyeballs to convert into any real revenue.

Fly.pieter was able to do this because Levels has a large and specific audience: 623,000 followers on X, many of whom are in the AI industry or interested in AI. One of the “bigger” sponsors, according to Levels, is Bolt, which, not surprisingly, makes an AI tool for developing web and mobile apps. Other ads I saw in the game are also for AI companies. It also probably didn’t hurt that the game was shared by Elon Musk, who has almost 220 million followers on X, a boost in reach that would do wonders for the business of any one of those 19,000 games that hit Steam last year.

Looking only at how quickly fly.pieter was made and began generating significant revenue is also misleading because it ignores the talent Levels brings to the project and the rapid iterative work he’s put into it and documented on X. It’s wild that AI tools are at a place now where Levels was able to get a working game from just a prompt, and to add things like controller support with just a prompt as well, but Levels is putting real work into it, too. He had to fix a vulnerability that allowed people to hack the game to promote a pornographic site, add the blimps and planets that serve as the ad space he’s selling, and has shipped a bunch of other features and bug fixes. All of this to say: it’s not as easy as typing a prompt or just feeling the vibes. It takes some actual experience and skill.

Whether the game is good or not is obviously subjective and also seems beside the point, but I think it’s fine. You have a little plane, you fly it around, you can shoot balloons and other planes (though they appear to be lagging too much to really do that), and mostly you can look at ads for various AI companies. It still feels more like a graybox prototype than a real game. It’s tempting to see a short clip from fly.pieter and dismiss it as a simple-looking multiplayer toy that lets players bash low-poly 3D models together, but that’s also kind of what Roblox is, and it’s one of the biggest games in the world. It’s impossible to say how a project like this will develop and whether it can gain and hold an audience, which goes back to the problem of discoverability.

Will more people be able to create more games in the future because of AI tools? Fly.pieter makes a strong case that the answer is yes. But it is also a sign that AI tools will do to games what they are already doing to images, music, text, and other creative forms online: they are flooding the zone with low quality, AI-generated stuff that will only make discovering things harder.

Archivists Recreate Pre-Trump CDC Website, Are Hosting It in Europe

A team of volunteer archivists has recreated the Centers for Disease Control website exactly as it was the day Donald Trump was inaugurated. The site, called RestoredCDC.org, went live Tuesday and is currently being hosted in Europe.

As we have been following since the beginning of Trump’s second term, websites across the entire federal government have been altered and taken offline under this administration’s war on science, health, and diversity, equity, and inclusion. Critical information promoting vaccines, HIV care, reproductive health options including abortion, and trans and gender-confirmation healthcare has been purged from the CDC’s live website under Trump. Disease surveillance data about bird flu and other concerns has either been delayed or has stopped being updated entirely. Some deleted pages across the government have at least temporarily been restored thanks to a court order, but the Trump administration has added a note rejecting “gender ideology” to some of them.

Restored CDC isn’t going to have continuous updates on this type of healthcare and disease guidance, but it has brought back all of the critical data that was purged in an easy-to-use, easy-to-navigate, and fast website. Other critical archiving projects, including the End of Term Archive, have saved government websites more broadly, but many website archives are slow to use and difficult to navigate because things like interactive elements and internal linking can sometimes be wonky. Some archives require users to download files to navigate them on their own computers, for example. Archives on the Internet Archive’s Wayback Machine are a great public service, but depending on the snapshot, they can be slow to load and some elements may be broken. Using RestoredCDC.org, meanwhile, is like using any other website, and the team hopes that the pages will be indexed by Google so they will be easily discoverable on search engines.

On other archives, “The individual pages are archived, but links between them are broken and the pages are not easy to locate through web searches,” the team behind RestoredCDC wrote.  

“Therefore, we will re-build the links between the pages, to create a site that can be navigated the same way as the pre-January 21, 2025 CDC site,” they wrote. “The only changes we will make on these pages is to add a header that indicates that this site is not a CDC website. Because of the complex navigation between pages, we will also include a button to report problems in this header. Our goal is to provide a mirror site that provides the same information and user experience as the previous CDC website.”

In a Reddit post on the DataHoarders subreddit, one of the developers of RestoredCDC said that the website was made using archived pages created by that community, and that the website is hosted in Europe. 

“Our goal is to provide a resource that includes the information and data previously available,” the team wrote. “We are committed to providing the previously available webpages and data, from before the potential tampering occurred. Our approach is to be as transparent as possible about our process. We plan to gather archival data and then remove CDC logos and branding, using GitHub to host our code to create the site.”
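For a sense of what “re-building the links” might involve in practice, here is a minimal sketch that rewrites archived pages to point at a mirror’s own copies and prepends the disclaimer header the team describes; the folder layout, banner text, and use of BeautifulSoup are my assumptions, not details from the RestoredCDC team.

```python
from pathlib import Path
from urllib.parse import urlparse

from bs4 import BeautifulSoup  # pip install beautifulsoup4

# Sketch of the link-rebuilding step described above: walk a local folder
# of archived pages, rewrite absolute cdc.gov links so navigation stays
# inside the mirror, and prepend a banner noting this is not a CDC site.
# ARCHIVE_DIR layout and BANNER text are illustrative assumptions.

ARCHIVE_DIR = Path("archive")  # e.g. archive/flu/index.html
BANNER = "This is an archived mirror, not an official CDC website."

def rewrite_page(path: Path) -> None:
    soup = BeautifulSoup(path.read_text(encoding="utf-8"), "html.parser")

    # Re-link internal references so pages navigate within the mirror.
    for a in soup.find_all("a", href=True):
        parsed = urlparse(a["href"])
        if parsed.netloc.endswith("cdc.gov"):
            a["href"] = parsed.path or "/"  # serve from the mirror's root

    # Add the disclaimer header the team says is their only content change.
    if soup.body:
        banner = soup.new_tag("header")
        banner.string = BANNER
        soup.body.insert(0, banner)

    path.write_text(str(soup), encoding="utf-8")

for page in ARCHIVE_DIR.rglob("*.html"):
    rewrite_page(page)
```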

Podcast: The Tesla Protests Come for Cybertruck Owners

This week we start with Jason's article on how a Facebook group for Cybertruck owners is completely overrun with people flipping them off. Then Joseph explains how U.S. crypto traders are buying IDs from the tropical nation of Palau to skirt the law. Then in the subscribers-only section (with a content warning), we talk about Jason's story on a big Instagram bug that pushed really horrible stuff into ordinary peoples' feeds.

Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version too. It will also be in the show notes in your podcast player.

Cellebrite Is Using AI to Summarize Chat Logs and Audio from Seized Mobile Phones

Cellebrite, the company that makes near-ubiquitous phone hacking and forensics technology used by police officers around the world, has introduced artificial intelligence capabilities into its products, including summarizing chat logs or audio messages from seized mobile phones, according to an announcement from the company last month.

The introduction of AI into a tool that essentially governs how evidence against criminal defendants is analyzed already has civil liberties experts concerned.

“When you have results from an AI, they are not transparent. Often you cannot trace back where a conclusion came from, or what information it is based on. AIs hallucinate. If you always train it on data from cases where there are convictions, it will never understand cases where indictments should not be brought,” Jennifer Granick, surveillance and cybersecurity counsel at the American Civil Liberties Union’s (ACLU) Speech, Privacy, and Technology Project, told 404 Media in an email.

Facebook Cybertruck Owners Group Copes With Relentless Mockery


A Facebook group for Cybertruck owners is full of videos and photos of passersby and other drivers flipping them off, leaving notes that say “WHAT’S ELON’S CUM TASTE LIKE?” and “NAZI CAR,” and people kicking their cars, throwing slices of cheese at them, etc.

This genre of post is being made nearly daily in a group called “Cybertruck Owners Only,” a development that shows two things. The wider protests and backlash against Elon Musk at Tesla dealerships are, at the very least, making it uncomfortable for some people to own a Cybertruck. The protests also highlight that Cybertrucks are outfitted with many cameras that are always recording in “Sentry Mode,” and that a community of Cybertruck owners is sometimes trying to identify people using this footage.

In a video taken from a Cybertruck of a man throwing American cheese slices at the windshield, many comments suggest filing a police report and attempting to dox the man by posting a screengrab of his face to social media: “Freeze frame and blow up his face. Go on all the social media platforms and post your video. I would file a police report stating that if he is willing to do this in public, then he obviously has some type of vendetta against me, and therefore, I feel threatened and fearful for my life… the only way these people will learn [is] if they are shamed,” one comment reads. “Can you make an 8 x 11 print out of his face with a QR code that leads to the video so everybody in your city will know who this guy is and what he did?? can’t we just make him famous?”


Judges Are Fed up With Lawyers Using AI That Hallucinate Court Cases

This article was produced in collaboration with Court Watch, an independent outlet that unearths overlooked court records. Subscribe to them here.

After a group of attorneys were caught using AI to cite cases that didn’t actually exist in court documents last month, another lawyer was told to pay $15,000 for his own AI hallucinations that showed up in several briefs. 

Attorney Rafael Ramirez, who represented a company called HoosierVac in an ongoing case where the Mid Central Operating Engineers Health and Welfare Fund claims the company is failing to allow the union a full audit of its books and records, filed a brief in October 2024 that cited a case the judge wasn’t able to locate. Ramirez "acknowledge[d] that the referenced citation was in error,” withdrew the citation, and “apologized to the court and opposing counsel for the confusion,” according to Judge Mark Dinsmore, U.S. Magistrate Judge for the Southern District of Indiana. But that wasn’t the end of it. An “exhaustive review” of Ramirez's other filings in the case showed that he’d included made-up cases in two other briefs, too. 

“Mr. Ramirez explained that he had used AI before to assist with legal matters, such as drafting agreements, and did not know that AI was capable of generating fictitious cases and citations,” Judge Dinsmore wrote in court documents filed last week. “These ‘hallucination cites,’ Mr. Ramirez asserted, included text excerpts which appeared to be credible. As such, Mr. Ramirez did not conduct any further research, nor did he make any attempt to verify the existence of the generated citations. Mr. Ramirez reported that he has since taken continuing legal education courses on the topic of AI use and continues to use AI products which he has been assured will not produce ‘hallucination cites.’” 

But the explanation and Ramirez’s promise to educate himself on the use of AI wasn’t enough, and the judge chided him for not doing his research before filing. “It is abundantly clear that Mr. Ramirez did not make the requisite reasonable inquiry into the law. Had he expended even minimal effort to do so, he would have discovered that the AI-generated cases do not exist. That the AI-generated excerpts appeared valid to Mr. Ramirez does not relieve him of his duty to conduct a reasonable inquiry,” Judge Dinsmore continued, before recommending that Ramirez be sanctioned for $15,000.

Lawyers Caught Citing AI-Hallucinated Cases Call It a ‘Cautionary Tale’
The attorneys filed court documents referencing eight non-existent cases, then admitted it was a “hallucination” by an AI tool.

The judge wrote that he “does not aim to suggest that AI is inherently bad or that its use by lawyers should be forbidden,” and noted that he’s a vocal advocate for the use of technology in the legal profession. “Nevertheless, much like a chain saw or other useful [but] potentially dangerous tools, one must understand the tools they are using and use those tools with caution,” he wrote. “It should go without saying that any use of artificial intelligence must be consistent with counsel's ethical and professional obligations. In other words, the use of artificial intelligence must be accompanied by the application of actual intelligence in its execution.” 

In January, as part of a separate case against a hoverboard manufacturer and Walmart seeking damages for an allegedly faulty lithium battery, attorneys filed court documents that cited a series of cases that don’t exist. In February, U.S. District Judge Kelly Rankin demanded they explain why they shouldn’t be sanctioned for referencing eight non-existent cases. The attorneys contritely admitted to using AI to generate the cases without catching the errors, and called it a “cautionary tale” for the rest of the legal world.

Last week, according to new records, Judge Rankin issued sanctions on those attorneys, revoking one attorney’s pro hac vice admission (a legal term meaning a lawyer can temporarily practice in a jurisdiction where they’re not licensed) and removing him from the case; the three other attorneys on the case were fined between $1,000 and $3,000 each.

Buying a $250 Residency Card From a Tropical Island Let Me Bypass U.S. Crypto Laws

The first envelope looked innocuous enough. A sticker on the white cardboard sleeve said it came from Hangzhou City in the east of China. Opening it up revealed something more ostentatious: a second blue envelope with a regal gold stamp. 

“Welcome to the Metaverse on Earth!” the letter inside read. Attached was my new identity card for the Republic of Palau, a tropical island nation in Micronesia near Indonesia and the Philippines. I was now officially a “digital resident” of Palau, despite never having set foot in the country. According to the website I bought the ID from, run by a company called RNS.ID, I could use it to check in to rental accommodations and could extend a tourist visa for Palau by 180 days if I wished. Most importantly, I could use it as my identity document on cryptocurrency exchanges.

That is exactly what traders in the U.S. are doing in order to bypass restrictions on the amount of cryptocurrency they can withdraw and the exchanges they can use, according to interviews with users, a review of Discord messages and YouTube tutorials, and my own successful tests. Many exchanges don’t allow signups from the U.S. because of the country’s still strict regulations around cryptocurrency. But with a Palau ID, U.S. traders can skirt that issue, and claim they come from Palau. The ID is so ripe for abuse that major cryptocurrency exchanges such as Binance and Kraken have already banned use of the ID from their platforms.

The Dude Whose Brain Turned to Glass

Welcome back to the Abstract!  

How’s your mental state these days? Feeling burnt out? Well, have I got a story for you about a guy whose mind was so blown by the events of his time that we can see his actual neurons 2,000 years later. I’m not trying to trivialize anyone’s legitimate feelings of stress; just giving us all the opportunity to look on the bright side: We haven’t been cooked alive—yet!

Next, Mars. Want to live there? Some people apparently do. Here’s a guide to the best coastal real estate of the past, courtesy of a rover that recently died there. Snap up your timeshare before Elon Musk buys it and names it X-Mars-the-Spot or some crap. Then, scientists raise alarms about all the weird endangered animals that get short shrift compared to fan favorites, like tigers, and whales, and Moo Deng. Last, an ode to the mama bear. Enjoy!

This is Your Brain on Mount Vesuvius

Giordano, Guido et al. “Unique formation of organic glass from a human brain in the Vesuvius eruption of 79 CE.” Scientific Reports. 

Nearly 2,000 years ago, a young man aged about 20 was chilling out in the Collegium Augustalium, a hall built to worship Emperor Augustus, in the Roman town of Herculaneum. Nobody knows what was running through his mind that morning, but we know what was there by the afternoon: A heat-shocked brain preserved in organic glass.  

This unlucky fellow was one of the thousands of people killed when Mount Vesuvius blew its ever-lovin’ top in the year 79, burying the neighboring towns of Pompeii and Herculaneum in searing ash, lava, and pumice. 

Remains of the man in the Collegium Augustalium. Image: Pier Paolo Petrone

The man in the Collegium Augustalium, who probably served as the building’s guardian, was lying down in bed when he was hit by a fast-moving volcanic belch, known as a pyroclastic flow, which raised his body temperature well above 510°C (950°F). 

That is, medically speaking, too hot. But while it is an absolutely horrifying way to die, the guardian has the posthumous honor of having a preserved glass brain “formed by a unique process of vitrification” which “is the only such occurrence on Earth,” according to researchers led by Guido Giordano of Università Roma Tre.

“Our comprehensive chemical and physical characterization of the material sampled from the skull of a human body buried at Herculaneum by the 79 CE eruption of Mount Vesuvius shows compelling evidence that these are human brain remains, composed of organic glass formed at high temperatures, a process of preservation never previously documented for human or animal tissue, neither brain nor any other kind,” said the team.

“The glass that formed as a result of such a unique process attained a perfect state of preservation of the brain and its microstructures” including “exceptionally well-preserved complex networks of neurons, axons, and other neural structures,” the researchers added.

Neural structures preserved in glass. Image: Giordano, Guido et al

We talk about having brain-freeze or being brain-fried, but the guardian definitely has us all beat with: brain–vitrified-into-glass-via-volcano. While it’s probably not how this guy hoped to go down in history, it’s insane that we can look at an ancient person’s actual brain, down to the neural structures, after 2,000 years. These same networks once carried thoughts like “what should I have for lunch?” and “Emperor Augustus was so based” and now they are laid out in front of us, immortalized in a glass tableau. 

“We reconstruct a scenario where a fast, very hot ash cloud was the first deadly event during the 79 CE Vesuvius eruption, enveloping victims, including the guardian who was subject to the specific conditions for heating the brain at temperatures well above 510 °C without the (total) destruction of the cerebral tissue,” the team said. “The brain then turned into glass during the fast cooling at glass transition temperature close to 510°C. Later, in agreement with witness accounts and deposit stratigraphy, Herculaneum was progressively buried by thick pyroclastic flow deposits, but at lower temperatures, so that the unique presence of a vitrified brain could have been preserved until today.”

Don’t mess with Mount Vesuvius! Unless you want people to look at your neurons in 2,000 years, in which case: go with the pyroclastic flow.

Oceanfront Property on Mars (Ocean Not Included)

Li, Jianhui and Liu, Hai et al. “Ancient ocean coastal deposits imaged on Mars.” Proceedings of the National Academy of Sciences.

Martian lakes? Sure. Rivers? Ok. But a big ole Martian ocean? Show me the receipts.

That’s the upshot of a decades-long debate about whether a vast Martian sea extended across the northern lowlands of the red planet billions of years ago. Now, China’s Zhurong rover has produced the aforementioned receipts—and they are premium property deeds.

“Various observations suggest that large amounts of liquid water once existed on the Martian surface, however, the nature and fate of this water are uncertain,” said researchers co-led by Jianhui Li and Hai Liu of Guangzhou University.

“Through radar data gathered by the Zhurong Rover, we identify extensive dipping deposits in the subsurface of southern Utopia Planitia,” the team said. “These deposits have structures similar to those of Earth’s coastal sediments. This finding implies the past existence of a large water body, supporting the hypothesis of a past ocean in the northern plains of Mars.”

Concept illustrations of the ancient beach. Image: Li, Jianhui and Liu, Hai et al

The Zhurong rover landed in 2021 in a region called Utopia Planitia, at the edge of this proposed shoreline. Though it died the next year, it is still producing revelations from beyond the grave as scientists work through its observations. 

This study provides the first clear onsite evidence that ocean waves lapped against these lowlands, creating scenic beaches. All you have to do to cash in on this location is go back in time about four billion years and adapt your body to an alien planet, which is only slightly more challenging than getting into the housing market here on Earth in the present day.

Will Somebody Please Think of the Amphibians?!

Guénard, Benoit et al. “Limited and biased global conservation funding means most threatened species remain unsupported.” Proceedings of the National Academy of Sciences.

Humans are causing a sixth mass extinction event. Yup, it’s a bummer. To make matters worse, even our efforts to help curb the losses get all tangled up with our biases toward the so-called “charismatic megafauna” that most inspire our wonder, affection, and asymmetric sympathy. 

Enchanting animals like rhinos, tigers, and pandas have become the icons of conservation movements—but these anthropic preferences come at a great cost, reports a new study that analyzed roughly 14,600 conservation projects over a period of 25 years. 

“More attention is urgently needed to assess the extinction risks of neglected taxa, especially smaller species,” said researchers led by Benoit Guénard of the University of Hong Kong. “Paradoxically, while approximately 6% of species identified as threatened were supported by conservation funds, 29% of the funding was allocated to species of ‘least concern.’” 

“For example, small-bodied taxa, such as amphibians, have been known to be the most threatened of vertebrate groups for two decades, accounting for ~25% of the threatened vertebrate species” and “yet, amphibians received only 2.5% of recent funding, which declined from 4% in the late 1990s,” the team said. “Similarly, weak conservation efforts are observed within many groups of mammals (e.g., Rodentia, Chiroptera), reptiles (e.g., Squamata, Serpentes, or nonmarine Testudines), or insects (e.g., Odonata, Orthoptera) despite the well-known threats to these taxa.”

Honestly, preach. This problem has been a hobby-horse of mine for years—and if a hobby-horse were a real endangered species, it would probably get disproportionate conservation funding. 

To that end, the authors made a series of recommendations for “a more holistic distribution of conservation funding” and “more balanced coverage of threatened species within conservation programs.”

“Successful citizen-science programs, even for taxa not typically seen as charismatic, have already spurred an increase in local and applied actions, as many individuals may feel geographically disconnected from some of the large megafauna that receive the ‘lion’s share’ of funding,” the team noted. 

“With heightened awareness of the essential functions and services of many species that are often seen as less charismatic, it is crucial to address these biases and optimize the allocation of funds to ensure the protection of these species.”

Stoneflies and salamanders need love too! Who cares if they don’t inspire Disney movies or Moo-Deng-level devotion? Are we really so superficial that we predicate survival on cuteness? 

I mean, yes, evidently—I will literally do this in the following section. I contain hypocritical multitudes. But you can still ogle adorable animals while recognizing the urgent need for more objective conservation approaches. 

And now, on to a story about endangered charismatic megafauna…

A Moment of Zen from an Arctic Den

Archer, Louise et al. “Monitoring phenology and behavior of polar bears at den emergence using cameras and satellite telemetry.” The Journal of Wildlife Management.

Last, polar bear cubs. Yeah. We deserve it. We talked about the uncharismatic minor-fauna. Now show us those fluffy little bear cubs. 

Scientists have done just that by filming a bunch of cuddly future killers emerging from their dens for the first time. For six years, a team logged footage from cameras installed at roughly a dozen sites in Svalbard, Norway, to get a better sense of the factors involved in this crucial rite of passage for bears, which is rarely observed as the dens tend to be in remote and inaccessible parts of the Arctic.

The results revealed…very cute cubs. I just want to pick them up and hug them and accept the fatal mauling that comes my way—worth it for the snug. But in addition to lil baby bear pics, the study also produced valuable scientific insights, which was not necessary, but is nonetheless appreciated. 

“We found that the probability a bear had broken out of the den could be accurately predicted from changes in collar temperature, activity, and ordinal date,” said researchers led by Louise Archer of the University of Toronto. “Post-den emergence behavior was influenced by external environmental temperature, time of day, and the amount of time since den breakout; bears were more likely to emerge and stay outside longer given warmer temperatures and increasing time since den breakout.” 

“Our study highlights the importance of the post-emergence period for cub acclimatization and development and provides new monitoring tools to study polar bear denning behavior, which is increasingly vulnerable to disruption in a rapidly changing Arctic,” the team said.
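The first quoted finding is essentially a classification task; as a rough sketch of how such a model could be formulated, here is a logistic regression on synthetic collar data. The features follow the study’s description, but the data, coefficients, and choice of model are illustrative assumptions, not the paper’s actual methods.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch of the prediction task described above: classify whether a bear
# has broken out of its den from collar temperature, collar activity, and
# ordinal date. Data are synthetic and logistic regression is a stand-in;
# the study's actual model and data are not reproduced here.

rng = np.random.default_rng(42)
n = 500

collar_temp = rng.normal(0, 5, n)        # deviation from baseline, degrees C
activity = rng.exponential(1.0, n)       # arbitrary activity-count units
ordinal_date = rng.integers(32, 121, n)  # Feb 1 through Apr 30

# Synthetic ground truth: breakout becomes likelier with warmer collars,
# more movement, and later dates (roughly matching the quoted findings).
logits = 0.4 * collar_temp + 1.2 * activity + 0.05 * (ordinal_date - 75)
broke_out = rng.random(n) < 1 / (1 + np.exp(-logits))

X = np.column_stack([collar_temp, activity, ordinal_date])
model = LogisticRegression().fit(X, broke_out)
print("training accuracy:", model.score(X, broke_out))
```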

Polar bear family near den. Image: Steven C. Amstrup/Polar Bears International

The study is also a reminder of the general badassery of mama bears. These animals mate in the spring, delay implantation for several months (wish humans had this trick), dig out a den in late summer, give birth to tiny 600-gram cubs around the winter solstice, and nurse them for several months before emerging with them in the spring, by which point they have fasted for up to eight months and lost nearly half their body weight. And then they have to raise the dang kids solo! All while humans make their lives immeasurably harder with the effects of climate change.

These moms deserve a medal. Made of meat. Give the moms 5,000-pound meat medals. 

And with that, may you emerge from your dens as spring starts to thaw the Northern Hemisphere. Just watch out for volcanoes! 

Thanks for reading! See you next week.

Behind the Blog: Stunt Blogging and the 'Fuck It' Paradigm

This is Behind the Blog, where we share our behind-the-scenes thoughts about how a few of our top stories of the week came together. This week, we discuss stunt blogging, Signal pains, and murderous Reels.

EMANUEL: Yesterday Jason wrote about Instagram delivering users reels showing murder, gore, and extreme violence. It’s hard to say exactly how many people saw these videos, but what users have said online, and the fact that Meta felt the need to publicly apologize for the videos, which the company said it served users because of an error, suggest that they were very widespread.

I’ll admit that when Jason first flagged to us on Slack that this was happening, it didn’t immediately strike me as a must-cover-it-immediately story. First, it’s kind of hard to suss out whether what one or a handful of users say they see on Instagram is really a widespread problem or a very specific rabbit hole the algorithm put them in because of their particular Instagram habits. Second, I think I’ve become numb to how terrible content on Meta platforms, and Instagram in particular, has become.

The Digital Packrat Manifesto

Amazon’s recent decision to stop allowing people to download copies of their Kindle e-books to a computer has vindicated some of my longstanding beliefs about digital media. Specifically, that it doesn’t exist and you don’t own it unless you can copy and access it without being connected to the internet.

The recent move by the megacorp and its shiny-headed billionaire CEO Jeff Bezos is another large brick in the digital wall that tech companies have been building for years to separate consumers from the things they buy—or, from their perspective, obtain “licenses” to. Starting Wednesday, Kindle users will no longer be able to download purchased books to a computer, where they can more easily be freed of DRM restrictions and copied to e-reader devices via USB. You can still send ebooks to other devices over WiFi for now, but the message the company is sending is one tech companies have been telegraphing for years: You don’t “own” anything digital, even if you paid us for it. The Kindle terms of service now say this explicitly: “Kindle Content is licensed, not sold, to you,” meaning you don’t “buy a book,” you obtain a “digital content license.”

The situation brings to mind an interview I did over a decade ago, with the executive of a now-defunct streaming platform. He told me candidly that the goal of all this was to make digital media a “utility” like gas or electricity—a faucet that dispenses the world’s art as “content,” with tech companies in complete control of what goes in the tank and what comes out of it.

Hearing this was a real tin foil hat moment for me. For more than two decades, I’ve been what some might call a hoarder but what I’ve more affectionately dubbed a “digital packrat.” Which is to say I mostly avoid streaming services, I don’t trust any company or cloud with my digital media, and I store everything as files on devices that I physically control. My mp3 collection has been going strong since the Limewire days, I keep high-quality rips of all my movies on a local media server, and my preferred reading device holds a large collection of DRM-free ebooks and PDFs—everything from esoteric philosophy texts and scientific journals to scans of lesbian lifestyle magazines from the 1980s.

Sure, there are websites where you can find some of this material, like the Internet Archive. But this archive is mine. It’s my own little Library of Alexandria, built from external hard drives, OCD, and a strong distrust of corporations. I know I’m not the only one who has gone to these lengths. Sometimes when I’m feeling gloomy, I imagine how when society falls apart, we packrats will be the only ones in our village with all six seasons of The Sopranos. At the rate we’re going, that might not be too far off. 

Amazon is far from alone in this long-running trend towards eliminating digital ownership. For many people, digital distribution and streaming services have already practically ended the concept of owning and controlling your own media files. Spotify is now almost synonymous with music for some younger generations, having strip-mined the music industry from both ends by demonetizing more than 60% of the artists on its platform and pushing algorithmic slop while simultaneously raising subscription fees.

Of course, surrendering this control means being at the complete mercy of Amazon and other platforms to determine what we can watch, read, and listen to—and we’ve already seen that these services frequently remove content for all sorts of reasons. Last October, one year after the Israeli military began its campaign of genocide in Gaza, Netflix removed “Palestinian Stories,” a collection of 19 films featuring Palestinian filmmakers and characters, saying it declined to renew its distribution license. Amazon also once famously deleted copies of 1984 from people’s Kindles. Fearing piracy, many software companies have moved from the days of “Don’t Copy That Floppy” to the cloud-based software-as-a-service model, which requires an internet connection and charges users monthly subscription fees to use apps like Photoshop. No matter how you look at it, digital platforms have put us on a path to losing control of any media that we can’t physically touch.

How did we get here? 

In the US, it goes back to the legal concepts of individual versus intellectual property rights, which are mediated by something called “exhaustion.” The idea behind the exhaustion principle was that copyright owners, like the studio that produces a film, relinquish some (but not all) of their rights over how a work is used when they sell copies to consumers. For example, if you buy a DVD, the law may prohibit you from duplicating the work for non-personal use, but the company that produced the movie can’t stop you from re-selling or gifting the physical disc to someone else.

The fact that you’re free to do whatever you want with the things you buy seems very obvious and intuitive from our perspective, but the truth is that copyright holders have been trying to erode these individual rights granted by exhaustion from the very beginning. Over the years, book publishers have tried to punish students for reselling expensive textbooks at lower prices, and record labels have launched unsuccessful crackdowns on stores selling used CDs. Hollywood tried to shut down the video rental market multiple times after it first emerged in the 1970s, and video game industry lobbyists have repeatedly claimed that used game sales will herald the apocalypse, with some publishers calling second-hand stores like GameStop a “bigger threat than piracy.”

In all these cases, the companies’ claims were overblown, but they boiled down to one simple gripe: technology was changing and creating new markets that they didn’t control. After much hooting and hollering, their inevitable response has always been to enter those markets and attempt to position themselves at the absolute center. And nothing has made that task easier than the rise of digital distribution.

“The basic principle of exhaustion—the notion that owners have rights that are not contingent on copyright holder permission—can and should survive the transition to a digital copyright economy,” explain Aaron Perzanowski and Jason Schultz in their book The End of Ownership: Personal Property in the Digital Economy. “Rights holders have always fought against this principle, but the digital marketplace gives them their best chance to kill it.”

Following the mass corporate freakout over piracy in the post-Napster era, copyright holders finally found two ways to get past the exhaustion principle: Digital rights management (DRM), which locks downloaded content to a centrally controlled platform with varying degrees of success, and streaming services, where companies fully control access to all media and users pay fees to access it with an internet connection.

The streaming model was particularly appealing to most normal people, because who wants to pay for and manage thousands of media files when you could have unlimited access to an entire library for a monthly fee? Piracy never went away of course—services like Netflix simply outpaced it in terms of convenience. Streaming beat piracy at its own game, but this time Silicon Valley tech companies and copyright owners were the ones at the controls.

This was fine for a time. But when two or three streaming services turned into several dozen, all with their own monthly fees, some of us started turning back to the Old Ways.

Over the past decade, keeping your own DRM-free digital media archive has become something of a lost art. It requires time and patience that many people no longer have, and it certainly can’t compete with the convenience of streaming. As large corporations and algorithms tighten their grip into a clenched fist, I think we’re long past due for a second DIY Media Renaissance. But in order for that to happen, we first need to change our habits and expectations around media consumption—starting with deprogramming this idea that media is something that should be unlimited and available at all times through a digital faucet.
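To make that less abstract, here is a minimal sketch, in Python, of the kind of unglamorous chore packratting actually involves: walking a local archive and recording checksums so you can tell when a file has gone missing or silently corrupted. The /mnt/media path is an assumption for illustration; point it at your own server.

```python
# Minimal packrat sketch: catalog a local media archive with SHA-256
# checksums so you can later verify nothing has rotted or vanished.
import hashlib
import json
from pathlib import Path

ARCHIVE = Path("/mnt/media")          # assumed mount point for your archive
MANIFEST = ARCHIVE / "manifest.json"

def checksum(path: Path) -> str:
    """Hash a file in 1 MB chunks so large video rips don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

manifest = {
    str(p.relative_to(ARCHIVE)): checksum(p)
    for p in sorted(ARCHIVE.rglob("*"))
    if p.is_file() and p != MANIFEST
}
MANIFEST.write_text(json.dumps(manifest, indent=2))
print(f"Recorded checksums for {len(manifest)} files.")
```

Run it again later and diff the manifests: any file whose hash changed without you touching it is bit rot, and any missing key is something a sync or a “helpful” cloud service ate.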

"Every collection becomes deeply personal, and that’s beautiful"

One of the more abstract but dire consequences of this streaming mentality is that we’ve started to treat art and culture like wallpaper. The rise of algorithmic curation and AI-generated content has sent this into overdrive: On Spotify, music is detached from its human creators and flattened into algorithmically-generated playlists with hashtag-able labels like “Lo-Fi Chillwave Anime Vibes.” Netflix has even started dictating that producers make TV shows less engaging, so that people can passively consume them as “second screen content” while scrolling on their phones.

In her recent book Mood Machine: The Rise of Spotify and the Cost of the Perfect Playlist, music journalist Liz Pelly refers to this process as “Muzak-ing”—the conversion of media from discrete works of art with a discernible context and author to anonymous background noise meant for passive consumption at the gym or while relaxing at home.

“It turns out that playlists have spawned a new type of music listener, one who thinks less about the artist or album they are seeking out, and instead connects with emotions, moods and activities, where they just pick a playlist and let it roll,” Pelly wrote in an essay for The Baffler. “These algorithmically designed playlists, in other words, have seized on an audience of distracted, perhaps overworked, or anxious listeners whose stress-filled clicks now generate anesthetized, algorithmically designed playlists.”

Digital Packratting is the antithesis of this trend. It requires intentional curation, because you’re limited by the amount of free space on your media server and devices—and the amount of space in your home you’re willing to devote to this crazy endeavor. Every collection becomes deeply personal, and that’s beautiful. It reminds me of when I was in college and everyone in my dorm was sharing their iTunes music libraries on the local network. I discovered so many new artists by opening up that ugly app and simply browsing through my neighbors’ collections. I even made some new friends. Mix CDs were exchanged, and browsing through unfamiliar microgenres felt like falling down a rabbit hole into a new world.

While streaming platforms flatten music-listening into a homogenous assortment of vibes, listening to an album you’ve downloaded on Bandcamp or receiving a mix from a friend feels more like forging a connection with artists and people. As a musician, I’d much rather have people listen to my music this way. Having people download your music for free on Soulseek is still considered a badge of honor in my producer/DJ circles.

I don’t expect everyone to read this and immediately go back to hoarding mp3s, nor do I think many people will abandon things like Spotify and Amazon Kindle completely. It’s not like I’m some model citizen either: I share a YouTube Premium account because the ads make me want to die, and I’ll admit to having a weakness for the Criterion Channel. But the packrat lifestyle has shown me that other ways are possible, and that at the end of the day, the only things we can trust to always be there are the things we can hold in our hands and copy without restriction.

Living with some degree of artificial scarcity also changes the way you value those things, and makes you question how much media is “enough.” If more people reflected on their desire for unlimited everything, maybe we’d find a way to break through the walled gardens that have been built to satisfy them.

Janus Rose is a New York City-based journalist, educator, and artist whose work explores the impacts of A.I. and technology on activists and marginalized communities. Previously a senior editor at VICE, she has been published in digital and print outlets including e-Flux Journal, DAZED Magazine, The New Yorker, and Al Jazeera.

Alibaba Releases Advanced Open Video Model, Immediately Becomes AI Porn Machine

On Tuesday, Chinese tech giant Alibaba released a new “open” AI video generation model called Wan 2.1 and shared the software on GitHub, allowing anyone with the technical know-how and hardware to use and modify it freely. It took about 24 hours for the model to be adopted by the AI porn hobbyist community, which has already shared dozens of short AI porn videos made with Alibaba’s software. Elsewhere, in a community dedicated to producing and sharing nonconsensual AI-generated intimate media of real people, users are already salivating over how advanced the model is.

This is the double-edged sword of releasing open AI models that users can modify: on one hand, it democratizes the use of powerful AI tools; on the other, early adopters often use them to create nonconsensual content.
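Part of why adoption happens within a day is that “release” means the weights themselves are a download away. As a hedged illustration (the Hugging Face repo id below is an assumption for illustration, not something confirmed in this story), fetching an open model release locally can be a few lines, and fine-tuning, like the LoRA mentioned by the user quoted below, starts from exactly these files:

```python
# Illustrative sketch of pulling an open-weight model release to disk.
# The repo id is assumed for illustration; swap in the real mirror.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="Wan-AI/Wan2.1-T2V-14B")
print(f"Weights and configs downloaded to: {local_dir}")
```

Once the files are local, nothing technical enforces what they get fine-tuned on; that is the whole double edge.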

🌟 Big News from @alibaba_cloud! 🌟
Meet WanX - our next-gen AI model redefining video generation !

🚀 Presenting mind-blowing demos from WanX 2.1!

🔥 Even more exciting:
WanX 2.1 will be OPEN-SOURCE !
Coming soon …#AIart #OpenSource pic.twitter.com/R1laOyJYAL

— Wan (@Alibaba_Wan) February 20, 2025

“Hunyuan just came out when? December?” one user said Wednesday on a Telegram channel dedicated to sharing nonconsensual AI-generated porn, referring to another open AI video generator developed by Tencent that’s popular in that community. “Now we get a better Text2Video Model [that] can handle more complicated motions c: This one just came out YESTERDAY and the first Lora which got made for this is a Titfuck 😆.”

Instagram 'Error' Turned Reels Into Neverending Scroll of Murder, Gore, and Violence


Content warning: this article contains graphic descriptions of violence against people and animals.

An “error” in Instagram Reels caused its algorithm to show some users video after video of horrific violence, animal abuse, murders, dead bodies, and other gore, Meta told 404 Media. The company said “we apologize for the mistake.”

Sometime in the last few days, this error caused people’s Reels algorithms to suddenly change. A 404 Media reader who has a biking-related Instagram account reached out to me and said that his feed, which is “typically dogs and bikes,” had become videos of people getting killed: “I had never seen someone being eaten by a shark, followed by someone getting killed by a car crash, followed by someone getting shot,” he told 404 Media. 

💡
Do you know more about why this happened? I would love to hear from you. Using a non-work device, you can message me securely on Signal at jason.404. Otherwise, send me an email at [email protected].

To test this, the person let me log in to his Instagram account, and I scrolled Reels for about 15 minutes. There were a couple videos about dogs and a couple videos about bikes, but the vast majority of videos were hidden behind a “sensitive content” warning. I will describe videos I saw when I clicked through the warnings, many of which had thousands of likes and hundreds of comments:

  • An elephant repeatedly stepping on and flattening a man
  • A man attacking a pig with a wrench
  • A close-up video of someone who had just been shot in the head
  • A woman crying while lying on top of a loved one who had just been shot to death
  • A man on a motorcycle stopping next to a pedestrian and shooting them in the head with a pistol
  • A pile of dead bodies in what looked to be a war-type situation
  • A small plane crash in front of a crowd of people
  • A group of people beating a crocodile to death
  • A few videos by an account called “PeopleDeadDaily” 
  • A man being lit on fire
  • A man shooting a cashier at point blank range

AT&T Hacker Tried to Sell Stolen Data to Foreign Government

A U.S. soldier who recently pleaded guilty to hacking AT&T and Verizon communicated with an email address that he believed belonged to a foreign country’s military intelligence service and attempted to sell the service stolen data, according to newly filed court records reviewed by 404 Media. The court document also says that the soldier searched for “U.S. military personnel defecting to Russia.”

The court filing in the case of Cameron John Wagenius, who used the handles kiberphant0m and cyb3rph4nt0m, discusses Wagenius’ unlawful posting and transferring of confidential phone records, including records belonging to high-ranking public officials. 404 Media previously revealed how hackers linked to the AT&T breach mined it for records associated with high-profile figures, including members of the Trump family such as Melania and Ivanka Trump, as well as Kamala Harris and Marco Rubio’s wife. The court document does not say what specific data Wagenius tried to sell to the foreign intelligence service, or who that data belonged to.

Flock Threatens Open Source Developer Mapping Its Surveillance Cameras

The surveillance company Flock sent the creator of a website that maps its license plate-reading cameras a cease and desist letter demanding that he immediately stop using the name “DeFlock” on his website.

404 Media previously wrote about DeFlock, an open source mapping project created by Will Freeman that tracks the locations of automated license plate readers (ALPRs) from Flock and other companies. DeFlock currently maps more than 16,000 ALPRs around the world, including both Flock cameras and many made by Motorola.

Late last month, Flock’s lawyers sent Freeman a letter demanding that he immediately “Cease and desist all use of the name ‘DEFLOCK’ or any variation thereof, remove all instances of ‘DEFLOCK’ from your Website, advertisements, promotional materials, and any other content, and Refrain from adopting any trademarks, trade names, or branding continue to or likely to cause dilution by blurring, dilution by tarnishment, and false advertising with respect to the Flock Marks in the future.” 404 Media has obtained a copy of the letter and uploaded it here.

“It has come to our attention that you are maintaining a website and promoting a project entitled DEFLOCK, which purports to track automated license plate readers across the country, and discusses the alleged dangers of [the surveillance]. While Flock believes in open debate, it takes misuses of its intellectual property seriously,” the letter, written by Sarah M. Katz of the law firm Adelman Matz, says. “While Flock does not object to the free dissemination of truthful information, your use of the Flock Marks as part of your brand DEFLOCK is being wrongfully used to make false statements about Flock and its products and is damaging both its reputation and the goodwill associated with the Flock Marks.”

It is not clear what “false statements” Freeman is making about Flock. The letter argues the site should not be called DeFlock because not all of the cameras it tracks are Flock cameras (some are Motorola), and says “the website also implies that various license plate readers are vulnerable to security hacks, which given that all of the readers are being imputed to Flock, provides a false impression about the security of Flock Products.” On the front page of DeFlock, there is a link to a 404 Media article about a security vulnerability in Motorola ALPRs. A security researcher on YouTube and Freeman previously showed that certain Motorola ALPRs are leaking data online, and 404 Media wrote about that research. The DeFlock site says “BREAKING: Anyone Can Access Motorola ALPR Data” and links to our article, but makes no claims about Flock ALPR security.

Freeman is being represented by the Electronic Frontier Foundation and is not going to change his website, Cara Gagliano, a senior staff attorney at the EFF, said in a response to Flock: “The claims alleged in your letter are groundless, and Mr. Freeman will not be complying with your demands,” the EFF’s letter says. “Because there is no legal basis for your demands, my client declines to comply with them.”

The cease-and-desist letter shows that Flock is both aware of the DeFlock website and willing to threaten Freeman with legal action. Flock’s letter argues that DeFlock is causing the “dilution” of Flock’s trademarks rather than “infringement” of them. This is a crucial distinction, Gagliano said.

Companies can sue for trademark infringement when they believe that a consumer is likely to confuse the infringing product for the real one; dilution cases only apply to “famous” trademarks and can be pursued when a similar product would undermine or tarnish the brand of the original. Gagliano says in her letter that DeFlock is not diluting the Flock brand because it was specifically made for the noncommercial criticism of the surveillance company.

“Federal anti-dilution law includes express carve-outs for any noncommercial use of a mark and for any use in connection with criticizing or commenting on the mark owner or its products,” Gagliano wrote, adding that a false advertising claim made in Flock’s letter does not apply because DeFlock is a noncommercial website. 

“DeFlock is a grassroots project that aims to ‘shine a light on the widespread use of ALPR technology, raise awareness about the threats it poses to personal privacy and civil liberties, and empower the public to take action.’ It pursues that mission by providing information about ALPRs and maintaining an interactive, crowd-sourced map of ALPR installations,” she added. “The name ‘DeFlock’ references the project’s goal of ending ALPR usage and Flock’s status as one of the most widely used ALPR providers.”

Gagliano told 404 Media that Flock’s attempt to go after Freeman and DeFlock on a dilution claim raises serious free speech concerns. “Flock's choice to claim dilution rather than infringement is telling. Infringement requires showing that consumers are likely to be confused by the use; Flock clearly realizes how implausible that is here,” she said. “Dilution is a much more nebulous concept that we think raises serious constitutional questions. It's fortunate that dilution laws typically have enough explicit exceptions for claims to fail in their face in cases like this, but it's still much too broad a doctrine with little to justify it.”

Flock did not respond to a request for comment.

Bluesky Deletes AI Protest Video of Trump Sucking Musk's Toes, Calls It 'Non-Consensual Explicit Material'

Update: After this article was published, Bluesky restored Kabas' post and told 404 Media the following: "This was a case of our moderators applying the policy for non-consensual AI content strictly. After re-evaluating the newsworthy context, the moderation team is reinstating those posts."

Bluesky deleted a viral, AI-generated protest video in which Donald Trump is sucking on Elon Musk’s toes because its moderators said it was “non-consensual explicit material.” The video was broadcast on televisions inside the offices of the Department of Housing and Urban Development earlier this week, and quickly went viral on Bluesky and Twitter.

Independent journalist Marisa Kabas obtained a video from a government employee and posted it on Bluesky, where it went viral. Tuesday night, Bluesky moderators deleted the video because they said it was “non-consensual explicit material.” 

“A Bluesky account you control (@marisakabas.bsky.social) posted content or shared a link that contains non-consensual explicit material, which is in violation of our Community Guidelines. As a result of this violation, we have taken down your post,” an email Kabas received from Bluesky moderation reads. “We trust that you will understand the necessity of these measures and the gravity of the situation. Bluesky explicitly prohibits the sharing of non-consensual sexual media. You cannot use Bluesky to break the law or cause harm to others. All users must be treated with respect.” 

Kabas is challenging the deletion. 

“Hello—the post you have taken down was a video broadcast inside a government building to protest a fascist regime,” Kabas wrote in an email back to Bluesky seen by 404 Media. “It is in the public interest and it is legitimate news. Taking it down is an attempt to bury the story and an alarming form of censorship. I love this platform but I’m shocked by this decision. I ask you to reconsider it.” 

Other Bluesky users said that versions of the video they uploaded were also deleted, though it is still possible to find the video on the platform. 

Technically speaking, the AI video of Trump sucking Musk’s toes, which had the words “LONG LIVE THE REAL KING” shown on top of it, is a nonconsensual AI-generated video, because Trump and Musk did not agree to it. But social media platform content moderation policies have always had carve-outs that allow for the criticism of powerful people, especially the world’s richest man and the literal president of the United States.

For example, we once obtained Facebook’s internal rules about sexual content for content moderators, which included broad carve-outs to allow for sexual content that criticized public figures and politicians. The First Amendment, which does not apply to social media companies but is relevant considering that Bluesky told Kabas she could not use the platform to “break the law,” offers essentially unlimited protection for criticizing public figures in the way this video does.

More importantly, the video Kabas posted was not something she made herself, nor was it devoid of context. As Kabas notes in her email back to Bluesky, the video was being played on television screens inside a federal government office building, an obvious act of protest that she was reporting on, and an obviously newsworthy video considering that the federal government is currently being gutted by these two men. (For what it’s worth, Kabas has been doing some of the best reporting on Musk’s dismantling of the federal government on her website The Handbasket.)

Content moderation has been one of Bluesky’s growing pains over the last few months. The platform has millions of users but only a few dozen employees, meaning that perfect content moderation is impossible, and a lot of it necessarily needs to be automated. This is going to lead to mistakes. But the video Kabas posted was one of the most popular posts on the platform earlier this week and resulted in a national conversation about the protest. Deleting it—whether accidentally or because its moderation rules are so strict as to not allow for this type of reporting on a protest against the President of the United States—is a problem.

Bluesky did not immediately respond to a request for comment.

Podcast: The Rise of AI Book Ripoffs

We start this week's episode with Joseph finding out someone basically ripped off his book with a potentially AI-generated summary. Emanuel also updates us on some of the impact his reporting on AI in libraries has had. After the break, Sam tells us all about a Y Combinator supported startup that is straight-up dehumanizing factory workers. In the subscribers-only section, we talk about an apparent act of protest from inside the U.S. government involving an AI video of Musk and Trump.

Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode’s bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor with a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email also contains the subscribers-only unlisted YouTube link for the extended video version, which you’ll find in the show notes in your podcast player as well.

Y Combinator Supports AI Startup Dehumanizing Factory Workers

A venture capital-backed “AI performance monitoring system for factory workers” is proposing what appears to be dehumanizing surveillance of factories, where machine vision tracks workers’ hand movements and output so a boss can look at graphs and yell at them about efficiency.
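The story describes the product only at the level of its demo, but the underlying building blocks are off the shelf. As a hedged sketch (this is emphatically not Optifye.ai’s code, just an illustration of the general technique), open-source libraries like MediaPipe can track hand movement from an ordinary camera feed, producing the kind of signal a dashboard could then turn into per-worker “efficiency” graphs:

```python
# Illustrative sketch only -- not Optifye.ai's implementation.
# Tracks hand landmarks from a webcam with MediaPipe and logs a crude
# frame-to-frame movement score a monitoring dashboard could graph.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=2, min_detection_confidence=0.5)
cap = cv2.VideoCapture(0)  # assumed camera index
prev = None

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB frames; OpenCV captures BGR.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        wrist = results.multi_hand_landmarks[0].landmark[0]  # landmark 0 is the wrist
        if prev is not None:
            movement = abs(wrist.x - prev.x) + abs(wrist.y - prev.y)
            print(f"frame-to-frame wrist movement: {movement:.4f}")
        prev = wrist

cap.release()
```

The point of the sketch is how little engineering separates a cheap camera from the graphs a boss yells about.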

In a launch video demoing the product, company founders Baid and Mohta put on a skit showing how Optifye.ai would be used by factory bosses.
